The field of Explainable AI (XAI) is evolving rapidly, with a focus on techniques that provide transparent and trustworthy explanations for AI decision-making. Recent research highlights the importance of adapting explanations to individual user preferences and needs rather than relying on a one-size-fits-all approach: studies show that personalized explanations can improve both task performance and user trust. There is also growing recognition of the need to evaluate explanation quality, in particular through metrics that accurately capture user satisfaction and understanding. Noteworthy papers in this area include:
- Towards Balancing Preference and Performance through Adaptive Personalized Explainability, which presents a strategy for adapting explanations to individual users while preserving task performance.
- Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities, which analyzes the explanatory qualities that contribute to user satisfaction with counterfactual explanations.
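To make the counterfactual-explanation concept above concrete, here is a minimal sketch: a counterfactual explanation answers "what is the smallest change to the input that would flip the model's decision?" The toy loan-approval model, the single mutable feature, and the greedy search below are illustrative assumptions for exposition, not methods from the papers listed.

```python
def predict(x):
    """Toy loan-approval model (assumed for illustration):
    approve if income minus debt exceeds 20."""
    return "approved" if x["income"] - x["debt"] > 20 else "denied"

def counterfactual(x, step=1.0, max_iter=1000):
    """Greedily raise income until the prediction flips.
    Returns the modified input and the size of the change,
    or (None, None) if no flip is found within max_iter steps."""
    cf = dict(x)
    original = predict(x)
    for _ in range(max_iter):
        if predict(cf) != original:
            return cf, cf["income"] - x["income"]
        cf["income"] += step
    return None, None

applicant = {"income": 50.0, "debt": 40.0}
cf, delta = counterfactual(applicant)
print(predict(applicant))  # denied
print(delta)               # 11.0 (income increase needed to flip the decision)
```

The returned delta is the human-readable explanation: "your application would have been approved had your income been 11 units higher." Real counterfactual methods search over many features at once and optimize for sparsity and plausibility, but the contrastive structure is the same.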