Image Quality Assessment and Restoration

Report on Current Developments in Image Quality Assessment and Restoration

General Trends and Innovations

Recent advances in image quality assessment (IQA) and image restoration are marked by a shift toward more sophisticated, context-aware models. Researchers are increasingly focusing on methods that can handle complex and diverse real-world scenarios, such as adverse weather conditions, mixed distortions, and limited data availability. This trend is driven by the need for robust and scalable solutions in applications like aerial power line inspection, autonomous driving, and smart grids.

One of the key innovations is the integration of multi-modal data, combining visual and textual information to enhance the understanding and prediction of image quality. This approach leverages Vision-Language Models (VLMs) such as CLIP, whose broad pre-trained knowledge and generalizability can be used to identify and quantify distortions in images. Fusing modalities allows for more accurate and explainable quality assessments, particularly in blind IQA (BIQA) scenarios where no reference image is available.
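To make the idea concrete, the following is a minimal sketch of prompt-based blind IQA with an off-the-shelf CLIP model: the image is scored by how much probability mass CLIP assigns to a "positive" quality prompt versus its antonym. The prompt pairs and checkpoint are illustrative assumptions, not the exact setups of the cited papers.

```python
# Hedged sketch: prompt-pair quality scoring with CLIP (assumed prompts/checkpoint).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Antonym prompt pairs: quality is read off as the softmax weight of the
# positive prompt against its negative counterpart.
prompt_pairs = [
    ("a sharp photo", "a blurry photo"),
    ("a clean photo", "a noisy photo"),
    ("a well-exposed photo", "an overexposed or underexposed photo"),
]

@torch.no_grad()
def clip_quality_scores(image: Image.Image) -> dict:
    scores = {}
    for pos, neg in prompt_pairs:
        inputs = processor(text=[pos, neg], images=image,
                           return_tensors="pt", padding=True)
        logits = model(**inputs).logits_per_image       # shape: (1, 2)
        probs = logits.softmax(dim=-1)[0]
        scores[pos] = probs[0].item()                   # mass on the positive prompt
    scores["overall"] = sum(scores.values()) / len(prompt_pairs)
    return scores

# Usage: clip_quality_scores(Image.open("test.png").convert("RGB"))
```

Because the prompts are natural language, each per-attribute score doubles as an explanation of which distortion dominates, which is what makes these VLM-based approaches attractive for explainable BIQA.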

Another significant development is the adoption of transformer-based architectures for image restoration. Transformers, known for capturing long-range dependencies and global context, are being adapted to address practical constraints such as memory consumption and inference speed. These models are being restructured to reduce computational overhead while maintaining or even improving performance, making them suitable for real-time applications.
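A common way to achieve this efficiency is to restrict self-attention to local windows, as in Swin-style restoration transformers such as AgileIR, so that cost scales with the window size rather than the full image resolution. Below is a minimal sketch of window-restricted attention; the dimensions, head count, and window size are illustrative assumptions, not the cited papers' settings.

```python
# Hedged sketch: non-overlapping window self-attention (assumed dims/window size).
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    def __init__(self, dim: int = 96, heads: int = 3, window: int = 8):
        super().__init__()
        self.window = window
        # Attention is computed only inside each window, so memory grows with
        # window_size**2 tokens per group instead of (H*W)**2 for the full image.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map, with H and W divisible by the window size.
        B, H, W, C = x.shape
        w = self.window
        # Partition the feature map into (B * num_windows, w*w, C) token groups.
        xw = x.view(B, H // w, w, W // w, w, C)
        xw = xw.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)
        out, _ = self.attn(xw, xw, xw)               # local self-attention
        # Reverse the window partition back to the original spatial layout.
        out = out.view(B, H // w, W // w, w, w, C)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return out

# Usage: WindowAttention()(torch.randn(1, 64, 64, 96)).shape -> (1, 64, 64, 96)
```

Full architectures interleave such blocks with shifted windows and convolutional stems to restore cross-window information flow; this sketch shows only the memory-saving core.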

The field is also witnessing a rise in the use of generative models, particularly GANs, for image restoration. These models are being enhanced to mitigate training issues such as mode collapse and to remove visual defects in UAV-captured images, yielding more accurate and visually appealing restorations.
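For readers unfamiliar with this family of methods, the following is a minimal sketch of a Pix2Pix-style training step for restoration: the generator maps a degraded image to a clean one and is trained with an adversarial loss plus an L1 reconstruction term. The models, optimizers, and the lambda weight are placeholders, not the enhanced variant from the cited UAV paper.

```python
# Hedged sketch: conditional-GAN restoration step (placeholder models/weights).
import torch
import torch.nn.functional as F

def train_step(gen, disc, opt_g, opt_d, degraded, clean, lambda_l1: float = 100.0):
    # --- discriminator update: real (degraded, clean) pairs vs. generated pairs ---
    fake = gen(degraded)
    real_logits = disc(torch.cat([degraded, clean], dim=1))
    fake_logits = disc(torch.cat([degraded, fake.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: fool the discriminator while staying close to ground truth ---
    fake_logits = disc(torch.cat([degraded, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
              + lambda_l1 * F.l1_loss(fake, clean))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The L1 term keeps restorations faithful to the ground truth while the adversarial term pushes them toward the natural-image manifold; the enhancements reported in recent work target the stability of exactly this adversarial component.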

Noteworthy Papers

  1. Exploring Rich Subjective Quality Information for Image Quality Assessment in the Wild: This paper introduces a novel IQA method that leverages rich subjective quality information beyond the traditional mean opinion score (MOS), enhancing prediction performance and generalizability.

  2. Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization: A framework that rapidly adapts CLIP to IQA tasks with limited data, significantly improving accuracy and reducing reliance on extensive human annotations.

  3. AgileIR: Memory-Efficient Group Shifted Windows Attention for Agile Image Restoration: A transformer-based model that reduces memory consumption by over 50% while maintaining high performance, making it ideal for real-time image restoration tasks.

  4. ExIQA: Explainable Image Quality Assessment Using Distortion Attributes: An explainable approach to BIQA that uses distortion attributes and Vision-Language Models to achieve state-of-the-art performance with high generalizability.

These papers represent significant strides in the field, offering innovative solutions to long-standing challenges and paving the way for future advancements in image quality assessment and restoration.

Sources

Power Line Aerial Image Restoration under Adverse Weather: Datasets and Baselines

Exploring Rich Subjective Quality Information for Image Quality Assessment in the Wild

Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization

Enhanced Pix2Pix GAN for Visual Defect Removal in UAV-Captured Images

AgileIR: Memory-Efficient Group Shifted Windows Attention for Agile Image Restoration

Multi-Weather Image Restoration via Histogram-Based Transformer Feature Enhancement

ExIQA: Explainable Image Quality Assessment Using Distortion Attributes

Attention Down-Sampling Transformer, Relative Ranking and Self-Consistency for Blind Image Quality Assessment

Foundation Models Boost Low-Level Perceptual Similarity Metrics

Exploring Kolmogorov-Arnold networks for realistic image sharpness assessment