Advances in Explainable AI and Human-Centered Design

The field of artificial intelligence is shifting toward a more human-centered approach, with a growing emphasis on explainability and transparency. Recent work highlights the importance of designing AI systems that are intuitive and trustworthy, particularly in high-stakes domains such as healthcare and weather forecasting. Researchers are exploring novel methods to visualize and explain complex AI decisions, making these systems easier for non-experts to understand and interact with. Noteworthy papers in this area include:

- Briteller, which introduces a tangible, embodied learning experience to help children understand AI recommendations.
- Immersive Explainability, which presents a virtual reality interface for visualizing robot navigation decisions and enhancing human-robot interaction.
- Example-Based Concept Analysis Framework, which provides a user-centric approach to explainable AI in weather forecasting.
- Explainable AI-Based Interface System, which defines requirements for explanations in meteorology and designs an XAI interface system informed by user feedback.
- Human-Centered Development of an Explainable AI Framework, which co-designs an AI clinical decision support tool with perioperative physicians to predict surgical risk.

Sources

Briteller: Shining a Light on AI Recommendations for Children

Digital Twins in Biopharmaceutical Manufacturing: Review and Perspective on Human-Machine Collaborative Intelligence

Immersive Explainability: Visualizing Robot Navigation Decisions through XAI Semantic Scene Projections in Virtual Reality

Example-Based Concept Analysis Framework for Deep Weather Forecast Models

Explainable AI-Based Interface System for Weather Forecasting Model

Human-Centered Development of an Explainable AI Framework for Real-Time Surgical Risk Surveillance
