Trust, Transparency, and Ethical AI: Emerging Trends

Recent developments in this research area show a strong focus on the trustworthiness, explainability, and ethics of AI systems. There is a notable shift toward human-centric AI design, emphasizing human-AI collaboration and systems that complement human capabilities rather than replace them. This trend is evident in work on human-AI interaction models, where trust, communication, and mutual adaptability are highlighted as critical elements.

The field is also seeing a surge of research on multimodal explainable AI (MXAI), which aims to make AI decision-making more transparent and interpretable. The integration of large language models (LLMs) into a range of AI applications is another significant advance, opening new possibilities for explainability and for fault diagnosis in complex systems.

In parallel, there is growing emphasis on responsible AI governance and on frameworks that align AI systems with societal values and ethical standards. The research further underscores the importance of cultural context in AI development, calling for systems that are culturally sensitive and adaptable. Overall, the field is moving toward more inclusive, transparent, and ethically sound AI solutions that prioritize human well-being and societal impact.

Sources

Trustworthy and Explainable Decision-Making for Workforce Allocation

Human-Centric NLP or AI-Centric Illusion?: A Critical Investigation

Responsible AI Governance: A Response to UN Interim Report on Governing AI for Humanity

Bots against Bias: Critical Next Steps for Human-Robot Interaction

Agnosticism About Artificial Consciousness

What Human-Horse Interactions may Teach us About Effective Human-AI Interactions

Detecting Machine-Generated Music with Explainability -- A Challenge and Early Benchmarks

AI Perceptions Across Cultures: Similarities and Differences in Expectations, Risks, Benefits, Tradeoffs, and Value in Germany and China

Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems

A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future

Towards AI-$45^{\circ}$ Law: A Roadmap to Trustworthy AGI

Integrating Evidence into the Design of XAI and AI-based Decision Support Systems: A Means-End Framework for End-users in Construction

Human-in-the-loop or AI-in-the-loop? Automate or Collaborate?

FaultExplainer: Leveraging Large Language Models for Interpretable Fault Detection and Diagnosis

AI and Cultural Context: An Empirical Investigation of Large Language Models' Performance on Chinese Social Work Professional Standards

Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment