Advances in Explainable AI

The field of Explainable AI (XAI) is evolving rapidly, with a focus on methods for interpreting and understanding complex machine learning models. Recent work has emphasized feature attribution, with several studies proposing new techniques for assigning importance scores to input features. These methods aim to provide more accurate and reliable explanations of model decisions, which is crucial for building trust in AI systems. Researchers are also improving the efficiency and effectiveness of attribution methods, for example through submodular optimization and antithetic sampling, and there is growing interest in explainable AI frameworks and tools that can be applied across domains such as biomedical research. Overall, the field is moving toward more robust, scalable, and user-friendly XAI methods. Noteworthy papers include "Less is More: Efficient Black-box Attribution via Minimal Interpretable Subset Selection", which proposes an attribution mechanism based on submodular subset selection, and "CFIRE: A General Method for Combining Local Explanations", which combines local explanations into faithful and complete global decision rules.
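
To illustrate how antithetic sampling can make sampling-based feature attribution more efficient, the sketch below estimates Shapley values from sampled feature permutations and pairs each permutation with its reverse. This is a generic variance-reduction sketch under assumed inputs (a `value_fn` defined over feature coalitions and a toy linear model), not the algorithm of any particular paper listed below.

```python
import numpy as np

def shapley_antithetic(value_fn, n_features, n_pairs=50, rng=None):
    """Monte Carlo Shapley estimates using antithetic permutation pairs.

    value_fn(mask) -> float, where mask is a boolean coalition over features.
    Each sampled permutation is paired with its reverse; averaging the two
    marginal-contribution estimates reduces the variance of the estimator.
    """
    rng = np.random.default_rng(rng)
    phi = np.zeros(n_features)
    for _ in range(n_pairs):
        perm = rng.permutation(n_features)
        for order in (perm, perm[::-1]):          # antithetic pair
            coalition = np.zeros(n_features, dtype=bool)
            prev = value_fn(coalition)
            for j in order:
                coalition[j] = True
                curr = value_fn(coalition)
                phi[j] += curr - prev             # marginal contribution of j
                prev = curr
    return phi / (2 * n_pairs)


# Toy usage: a linear "model" whose Shapley values are known in closed form.
if __name__ == "__main__":
    weights = np.array([2.0, -1.0, 0.5])
    x = np.array([1.0, 3.0, -2.0])
    baseline = np.zeros(3)

    def value_fn(mask):
        # Features outside the coalition are replaced by the baseline.
        return float(weights @ np.where(mask, x, baseline))

    print(shapley_antithetic(value_fn, n_features=3, n_pairs=200, rng=0))
    # For a linear model, phi_i ~= weights[i] * (x[i] - baseline[i]).
```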

Sources

How to safely discard features based on aggregate SHAP values

Which LIME should I trust? Concepts, Challenges, and Solutions

Less is More: Efficient Black-box Attribution via Minimal Interpretable Subset Selection

CFIRE: A General Method for Combining Local Explanations

xML-workFlow: an end-to-end explainable scikit-learn workflow for rapid biomedical experimentation

shapr: Explaining Machine Learning Models with Conditional Shapley Values in R and Python

Antithetic Sampling for Top-k Shapley Identification
