Towards Multimodal and Ethical AI: Recent Trends in Fairness and Bias Mitigation

Recent work in machine learning fairness and bias mitigation shows a marked shift toward more nuanced, multimodal approaches. Researchers are increasingly integrating multiple data modalities to improve fairness in image classification and other tasks, aiming to mitigate the harmful biases that single-modality systems can exacerbate, particularly for underrepresented populations.

There is also growing emphasis on the ethical deployment of AI systems. Interdisciplinary frameworks are being developed to critically evaluate text-to-image models and generative AI tools, combining art historical analysis with critical prompt engineering to surface embedded biases and inform the design of more equitable systems.

In parallel, the adoption of fairness toolkits in software development is being studied from a behavioral perspective, examining which factors influence whether developers actually use them. Practical recommendations include improving toolkit usability and integrating bias mitigation checks into routine development workflows (a minimal sketch of such an integration follows below).

Overall, the field is moving toward more inclusive and responsible AI practice, with a strong focus on ethical considerations and on combining diverse methodologies to address fairness and bias in machine learning systems.
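To make the workflow-integration recommendation concrete, here is a minimal Python sketch of a routine bias audit. The summary does not name a specific toolkit; Fairlearn is used here as one example, and the synthetic data, the sensitive-attribute encoding, and the 0.1 disparity budget are all hypothetical illustrative choices.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # synthetic features (placeholder data)
sensitive = rng.integers(0, 2, 1000)      # hypothetical binary group attribute
# Labels correlated with the sensitive attribute, so the audit has bias to find.
y = (X[:, 0] + 0.8 * sensitive + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Routine per-group audit: accuracy broken down by the sensitive attribute.
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                    sensitive_features=sensitive)
print("accuracy by group:\n", frame.by_group)

# Gate on a disparity budget; the 0.1 threshold is an illustrative choice.
dpd = demographic_parity_difference(y, pred, sensitive_features=sensitive)
print(f"demographic parity difference: {dpd:.3f}")
if dpd > 0.1:
    # One possible mitigation step: retrain under a demographic-parity constraint.
    mitigator = ExponentiatedGradient(LogisticRegression(),
                                      constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=sensitive)
    pred = mitigator.predict(X)
    dpd = demographic_parity_difference(y, pred, sensitive_features=sensitive)
    print(f"after mitigation: {dpd:.3f}")

In a real pipeline, a check of this kind could run in continuous integration, so that a model exceeding the disparity budget fails the build instead of shipping.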
Sources
Who's the (Multi-)Fairest of Them All: Rethinking Interpolation-Based Data Augmentation Through the Lens of Multicalibration
A Framework for Critical Evaluation of Text-to-Image Models: Integrating Art Historical Analysis, Artistic Exploration, and Critical Prompt Engineering
Dialogue with the Machine and Dialogue with the Art World: Evaluating Generative AI for Culturally-Situated Creativity