Explainable AI: Improving Transparency and Trust

The field of Explainable AI (XAI) is evolving rapidly, with a focus on techniques that make artificial intelligence decision-making transparent and trustworthy. Recent research highlights the importance of adapting explanations to individual user preferences and needs rather than relying on a one-size-fits-all approach: studies show that personalized explanations can improve both task performance and user trust. There is also growing recognition of the need to evaluate explanation quality, with metrics that accurately capture user satisfaction and understanding. Noteworthy papers in this area include:

  • Towards Balancing Preference and Performance through Adaptive Personalized Explainability, which presents an adaptive personalization strategy that balances user preference against task performance (a minimal illustrative sketch of such an adaptive loop follows this list).
  • Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities, which analyzes the explanatory qualities that contribute to user satisfaction with counterfactual explanations.
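
To make the adaptive idea concrete, the sketch below shows one simple way a system could personalize which kind of explanation it shows: an epsilon-greedy loop that tracks per-user feedback for each explanation style and gradually favors the style that user rates highest. The style names, the feedback scale, and the AdaptiveExplainer class are illustrative assumptions for this sketch, not the method of the cited paper.

```python
import random
from collections import defaultdict

# Hypothetical explanation styles a system might choose between
# (names are illustrative, not taken from the cited work).
STYLES = ["feature_importance", "counterfactual", "example_based"]

class AdaptiveExplainer:
    """Epsilon-greedy selection of an explanation style per user.

    A minimal sketch of one way to trade off exploiting the style a
    user prefers against exploring alternatives.
    """

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # observations per (user, style)
        self.means = defaultdict(float)   # running mean feedback per (user, style)

    def choose_style(self, user_id):
        if random.random() < self.epsilon:
            return random.choice(STYLES)  # explore a random style
        # Exploit: pick the style with the best observed feedback so far.
        return max(STYLES, key=lambda s: self.means[(user_id, s)])

    def record_feedback(self, user_id, style, score):
        key = (user_id, style)
        self.counts[key] += 1
        # Incremental update of the mean feedback score.
        self.means[key] += (score - self.means[key]) / self.counts[key]

# Usage: choose a style, show the explanation, then log the user's rating.
explainer = AdaptiveExplainer()
style = explainer.choose_style(user_id="u42")
explainer.record_feedback(user_id="u42", style=style, score=0.8)
```

In practice the feedback signal could be an explicit satisfaction rating or an implicit measure such as downstream decision accuracy, which is where the preference-versus-performance trade-off discussed above comes in.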

Sources

Towards Balancing Preference and Performance through Adaptive Personalized Explainability

The Effect of Explainable AI-based Decision Support on Human Task Performance: A Meta-Analysis

Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities

The Balancing Act of Policies in Developing Machine Learning Explanations

What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
