The field of AI-generated image quality assessment is evolving rapidly, with a clear trend towards methods that align more closely with human perception and real-world applications. Recent research has focused on creating comprehensive datasets and benchmarks that assess not only the visual quality of AI-generated images (AIGIs) but also their effectiveness in specific contexts such as advertising and product presentation. New approaches are being introduced to address the limitations of traditional image quality assessment (IQA) methods, including the integration of causal reasoning and multimodal features to improve the interpretability and relevance of quality scores. There is also a growing emphasis on automating the evaluation process to reduce reliance on manual annotation, improving efficiency and scalability. Together, these developments mark a shift towards more holistic, application-oriented quality assessment frameworks that better support the practical use of AIGIs across industries.
Noteworthy Papers
- AIGI-VC: Introduces a quality assessment database focusing on the communicability of AIGIs in advertising, offering insights into information clarity and emotional interaction.
- Image Quality Assessment: Investigating Causal Perceptual Effects with Abductive Counterfactual Inference: Proposes a full-reference IQA (FR-IQA) method that leverages abductive counterfactual inference to explore causal relationships between deep network features and perceptual distortions (see the illustrative sketch after this list).
- An Evaluation Framework for Product Images Background Inpainting based on Human Feedback and Product Consistency: Develops HFPC, a framework for assessing the quality of generated product images through human feedback and product consistency, significantly reducing manual annotation costs.
- ANID: How Far Are We?: Presents a benchmark for evaluating discrepancies between AI-synthesized and natural images, providing a comprehensive assessment across multiple dimensions.
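
To make the counterfactual idea referenced above more concrete, the following is a minimal sketch of how a full-reference metric might attribute perceptual distortion to individual feature channels via interventions. It is not the procedure from the paper: the feature extractor is a random stand-in (a real pipeline would use a pretrained backbone such as VGG), and the helper names (`perceptual_distance`, `channel_attributions`) and the channel-replacement intervention are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in feature extractor; a real FR-IQA pipeline would use a
# pretrained backbone. A small random conv net keeps the sketch self-contained.
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8),
).eval()

def perceptual_distance(ref_feats: torch.Tensor, dist_feats: torch.Tensor) -> torch.Tensor:
    """Squared L2 distance between feature maps, averaged over all elements."""
    return ((ref_feats - dist_feats) ** 2).mean()

@torch.no_grad()
def channel_attributions(reference: torch.Tensor, distorted: torch.Tensor) -> torch.Tensor:
    """Counterfactual-style attribution (illustrative): for each feature channel,
    ask how the perceptual distance would change if that channel carried no
    distortion, i.e. the distorted channel is replaced by the reference channel."""
    ref_feats = feature_net(reference)
    dist_feats = feature_net(distorted)
    factual = perceptual_distance(ref_feats, dist_feats)

    scores = []
    for c in range(ref_feats.shape[1]):
        # Intervene on channel c: set it to its reference value.
        cf_feats = dist_feats.clone()
        cf_feats[:, c] = ref_feats[:, c]
        counterfactual = perceptual_distance(ref_feats, cf_feats)
        # A large drop in distance means channel c contributes strongly
        # to the perceived distortion.
        scores.append(factual - counterfactual)
    return torch.stack(scores)

# Toy usage with a random reference/distorted pair standing in for real images.
reference = torch.rand(1, 3, 64, 64)
distorted = reference + 0.1 * torch.randn_like(reference)
attributions = channel_attributions(reference, distorted)
print("top contributing channels:", attributions.topk(3).indices.tolist())
```

The sketch only illustrates the general intuition: scoring each feature's causal contribution by comparing a factual distortion measurement against a counterfactual one in which that feature is "repaired", rather than reading contributions off feature differences directly.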