Advances in Explainable AI and Interpretable Machine Learning

The field of Explainable AI (XAI) and Interpretable Machine Learning (IML) is advancing rapidly, with a focus on techniques that expose the decision-making processes of complex models. Recent research spans feature attribution methods, concept-based explanations, and model-agnostic interpretability techniques. A key direction is the development of methods that produce human-understandable explanations, such as visualizations and natural language interpretations. Noteworthy papers in this area include FINCH, a visual analytics tool for explaining higher-order feature interactions in black box models, and TraNCE, a Transformative Non-linear Concept Explainer for CNNs. These advances have the potential to increase trust and transparency in AI systems, enabling their deployment in high-stakes domains such as healthcare and cybersecurity.
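To make the idea of model-agnostic feature attribution concrete, here is a minimal sketch of permutation importance, one of the simplest such techniques: shuffle a single feature column and measure how much the model's test accuracy drops. This is an illustrative example only, not the method of any paper listed below; the model, dataset, and seeds are arbitrary placeholders.

```python
# Minimal sketch: model-agnostic feature attribution via permutation importance.
# Illustrative only; model and data are placeholders, not from any cited paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
    drop = baseline - accuracy_score(y_test, model.predict(X_perm))
    print(f"feature {j}: importance = {drop:.3f}")  # larger drop = more important
```

Because it only requires predictions, this procedure applies to any black box model; more sophisticated attribution methods, such as those surveyed in the papers below, refine this idea to account for feature interactions and correlated inputs.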

Sources

FINCH: Locally Visualizing Higher-Order Feature Interactions in Black Box Models

Towards Biomarker Discovery for Early Cerebral Palsy Detection: Evaluating Explanations Through Kinematic Perturbations

Interpretable Machine Learning for Oral Lesion Diagnosis through Prototypical Instances Identification

Exploring a Principled Framework for Deep Subspace Clustering

What's Producible May Not Be Reachable: Measuring the Steerability of Generative Models

Unraveling Pedestrian Fatality Patterns: A Comparative Study with Explainable AI

Z-REx: Human-Interpretable GNN Explanations for Real Estate Recommendations

Interpretable Feature Interaction via Statistical Self-supervised Learning on Tabular Data

Self-Explaining Neural Networks for Business Process Monitoring

Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry

Exploring Energy Landscapes for Minimal Counterfactual Explanations: Applications in Cybersecurity and Beyond

Towards Human-Understandable Multi-Dimensional Concept Discovery

Interpretable Generative Models through Post-hoc Concept Bottlenecks

Extracting Interpretable Logic Rules from Graph Neural Networks

SMT-EX: An Explainable Surrogate Modeling Toolbox for Mixed-Variables Design Exploration

Guidelines For The Choice Of The Baseline in XAI Attribution Methods

Geometric Meta-Learning via Coupled Ricci Flow: Unifying Knowledge Representation and Quantum Entanglement

Random feature-based double Vovk-Azoury-Warmuth algorithm for online multi-kernel learning

TraNCE: Transformative Non-linear Concept Explainer for CNNs

Diffusion Counterfactuals for Image Regressors

MindfulLIME: A Stable Solution for Explanations of Machine Learning Models with Enhanced Localization Precision -- A Medical Image Case Study

EXPLICATE: Enhancing Phishing Detection through Explainable AI and LLM-Powered Interpretability

BioX-CPath: Biologically-driven Explainable Diagnostics for Multistain IHC Computational Pathology

Investigating the Duality of Interpretability and Explainability in Machine Learning

Consistent Multigroup Low-Rank Approximation

MASCOTS: Model-Agnostic Symbolic COunterfactual explanations for Time Series

VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow

Learnable cut flow
