Advances in Image Restoration and Enhancement

Recent developments in image restoration and enhancement show a clear shift toward advanced neural network architectures and diffusion models. These efforts focus on complex degradation scenarios such as atmospheric turbulence, low-light conditions, and composite image degradations. Integrating probabilistic priors and attention mechanisms into diffusion models has proven effective at capturing diverse feature variations and reducing artifacts, thereby improving spatial coherence in restored images. Transformers and state space models are also enabling more efficient and scalable solutions for tasks such as super-resolution and dehazing, with notable gains in computational efficiency and generalization. The field is likewise advancing the handling of specific image modalities, such as infrared and polarized images, where preserving spectral distribution fidelity and exploiting polarization cues are critical for accurate restoration. Noteworthy papers include DiffFNO, which sets a new standard in super-resolution with superior accuracy and computational efficiency, and PPTRN, which significantly improves restoration quality on turbulence-degraded images. Together, these developments push the boundaries of image restoration, making it more robust and applicable to a wider range of real-world scenarios.
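
To make the "probabilistic prior plus attention inside a diffusion denoiser" idea concrete, below is a minimal PyTorch sketch (not any paper's reference code): a residual cross-attention block in which flattened latent tokens attend to tokens summarizing an estimated degradation prior (e.g., a turbulence-strength estimate). All names, shapes, and the existence of a separate prior encoder are illustrative assumptions.

```python
# Minimal sketch: degradation-prior-conditioned cross-attention for a diffusion
# denoiser stage. Shapes and module names are assumptions for illustration.
import torch
import torch.nn as nn


class PriorGuidedAttention(nn.Module):
    """Cross-attention from noisy-latent tokens to degradation-prior tokens."""

    def __init__(self, dim: int, prior_dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Project prior tokens into the feature space used for keys/values.
        self.prior_proj = nn.Linear(prior_dim, dim)

    def forward(self, feats: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) flattened spatial tokens of the noisy latent
        # prior: (B, M, prior_dim) tokens summarizing the estimated degradation
        q = self.norm(feats)
        kv = self.prior_proj(prior)
        attended, _ = self.attn(q, kv, kv)
        # Residual connection keeps the backbone's denoising path intact.
        return feats + attended


if __name__ == "__main__":
    block = PriorGuidedAttention(dim=64, prior_dim=32)
    x = torch.randn(2, 256, 64)   # e.g. 16x16 latent grid flattened to 256 tokens
    p = torch.randn(2, 8, 32)     # e.g. 8 tokens from a hypothetical prior estimator
    print(block(x, p).shape)      # torch.Size([2, 256, 64])
```

The residual formulation lets the prior modulate features without disrupting the pretrained denoising trajectory, which is one way such conditioning is typically kept stable during fine-tuning.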

Sources

DiffFNO: Diffusion Fourier Neural Operator

A Polarization Image Dehazing Method Based on the Principle of Physical Diffusion

NeISF++: Neural Incident Stokes Field for Polarized Inverse Rendering of Conductors and Dielectrics

A Low-Resolution Image is Worth 1x1 Words: Enabling Fine Image Super-Resolution with Transformers and TaylorShift

Probabilistic Prior Driven Attention Mechanism Based on Diffusion Model for Imaging Through Atmospheric Turbulence

DR-BFR: Degradation Representation with Diffusion Models for Blind Face Restoration

AllRestorer: All-in-One Transformer for Image Restoration under Composite Degradations

TSFormer: A Robust Framework for Efficient UHD Image Restoration

MSSIDD: A Benchmark for Multi-Sensor Denoising

Towards Degradation-Robust Reconstruction in Generalizable NeRF

RAWMamba: Unified sRGB-to-RAW De-rendering With State Space Model

$\text{S}^{3}$Mamba: Arbitrary-Scale Super-Resolution via Scaleable State Space Model

Zoomed In, Diffused Out: Towards Local Degradation-Aware Multi-Diffusion for Extreme Image Super-Resolution

Frequency-Aware Guidance for Blind Image Restoration via Diffusion Models

Contourlet Refinement Gate Framework for Thermal Spectrum Distribution Regularized Infrared Image Super-Resolution

Infrared-Assisted Single-Stage Framework for Joint Restoration and Fusion of Visible and Infrared Images under Hazy Conditions

RAW-Diffusion: RGB-Guided Diffusion Models for High-Fidelity RAW Image Generation

HF-Diff: High-Frequency Perceptual Loss and Distribution Matching for One-Step Diffusion-Based Image Super-Resolution

Robust SG-NeRF: Robust Scene Graph Aided Neural Surface Reconstruction

Zero-Shot Low-Light Image Enhancement via Joint Frequency Domain Priors Guided Diffusion
