Trust, Transparency, and Ethical AI: Emerging Trends

Recent developments in this research area show a strong focus on the trustworthiness, explainability, and ethical grounding of AI systems. A notable shift toward human-centric design emphasizes human-AI collaboration: AI systems should complement human capabilities rather than replace them. This trend is evident in work on human-AI interaction models, where trust, communication, and mutual adaptability are treated as critical elements.

Research on multimodal explainable AI (MXAI) is also surging, aiming to make AI decision-making more transparent and interpretable. The integration of large language models (LLMs) into a range of AI applications is a significant related advance, opening new possibilities for explainability and for fault diagnosis in complex systems.

Finally, there is growing emphasis on responsible AI governance and on frameworks that align AI systems with societal values and ethical standards. This includes attention to cultural context: AI systems should be culturally sensitive and adaptable to the settings in which they are deployed. Overall, the field is moving toward more inclusive, transparent, and ethically sound AI solutions that prioritize human well-being and societal impact.
Sources
AI Perceptions Across Cultures: Similarities and Differences in Expectations, Risks, Benefits, Tradeoffs, and Value in Germany and China
Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems
Integrating Evidence into the Design of XAI and AI-based Decision Support Systems: A Means-End Framework for End-users in Construction