The field of image forgery and manipulation localization is shifting toward more sophisticated and efficient models that combine multi-scale and multi-perspective approaches. Researchers are increasingly developing frameworks that capture comprehensive forgery clues while minimizing interference from irrelevant features, yielding more accurate and robust detection systems. Information-theoretic principles and state space models are gaining traction, offering efficient new ways to model pixel dependencies and global interactions. The integration of Transformers with traditional convolutional neural networks (CNNs) is also being explored to leverage the strengths of both architectures, strengthening feature extraction and representation. Notably, there is a growing emphasis on parameter efficiency and reduced computation, with models such as SparseViT showing that handcrafted feature extractors can be eliminated while matching or improving performance. Together, these advances push the boundaries of image forgery localization, making it more resilient to complex and unseen scenarios.
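To make the efficiency argument concrete, the sketch below illustrates the general idea behind sparse attention of the kind SparseViT-style models rely on: restricting self-attention to non-overlapping token blocks, which reduces the attention cost from O(n²) over all tokens to O(n·block). This is a minimal NumPy sketch under stated assumptions, not SparseViT's actual implementation; the function names and the block-partitioning scheme are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_sparse_attention(x, block=4):
    """Self-attention restricted to non-overlapping blocks of tokens.

    x: array of shape (num_tokens, dim). Each token attends only to the
    tokens in its own block, so the score matrix per block is
    (block, block) instead of a full (num_tokens, num_tokens) matrix.
    This is an illustrative sketch, not any published model's code.
    """
    n, d = x.shape
    assert n % block == 0, "num_tokens must be divisible by block size"
    out = np.empty_like(x)
    for start in range(0, n, block):
        blk = x[start:start + block]            # (block, dim)
        scores = blk @ blk.T / np.sqrt(d)       # (block, block) scores
        out[start:start + block] = softmax(scores, axis=-1) @ blk
    return out

# Example: 16 image-patch tokens of dimension 8, attended in blocks of 4.
tokens = np.random.default_rng(0).standard_normal((16, 8))
attended = block_sparse_attention(tokens, block=4)
```

In a full model the blocks would typically correspond to spatial windows of image patches, and cross-block interaction would be recovered by stacking layers or shifting window boundaries; the snippet only shows the per-layer cost reduction that motivates the parameter- and compute-efficiency claims above.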