Explainability and Robustness in Deep Learning for Healthcare

The field of deep learning is moving toward greater emphasis on explainability and robustness, particularly in healthcare applications. Researchers are exploring methods to make models more interpretable, such as Jacobian maps that capture localized brain volume changes for Alzheimer's disease (AD) detection, and uncertainty quantification that provides a heuristic measure of how trustworthy a feature embedding model's outputs are. Noteworthy papers include "Unlocking Neural Transparency: Jacobian Maps for Explainable AI in Alzheimer's Detection", which introduces a novel pre-model approach to enhancing explainability and trustworthiness in AD detection, and "Secure Diagnostics: Adversarial Robustness Meets Clinical Interpretability", which examines interpretability in deep neural networks fine-tuned for fracture detection and evaluates model performance under adversarial attacks.
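The summary does not reproduce the paper's pipeline, but the standard computation behind Jacobian maps in tensor-based morphometry is easy to sketch: after nonlinearly registering a subject scan to a template, the voxel-wise determinant of the deformation's Jacobian indicates where tissue locally shrank (determinant below 1, candidate atrophy) or expanded (above 1). The NumPy sketch below is a minimal illustration under that assumption; the displacement-field shape convention, function name, and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def jacobian_determinant_map(displacement):
    """Voxel-wise Jacobian determinant of phi(x) = x + u(x).

    displacement: array of shape (3, X, Y, Z) holding the u field
    from a nonlinear registration (e.g. subject scan -> template).
    Returns an (X, Y, Z) map: values < 1 mark local volume loss
    (candidate atrophy), values > 1 local expansion.
    """
    grads = [np.gradient(displacement[i]) for i in range(3)]  # du_i/dx_j
    shape = displacement.shape[1:]
    J = np.empty(shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Toy usage: a small random field just to exercise the function.
u = 0.01 * np.random.randn(3, 32, 32, 32)
jmap = jacobian_determinant_map(u)
print(jmap.mean())  # ~1.0 for a near-identity deformation
```

One hedged reading of the paper's "pre-model" framing is that such maps are computed before training and fed to the classifier as input, so downstream attributions refer to anatomically meaningful volume changes rather than raw intensities.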
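The sources do not specify which uncertainty estimator is used, so as one common stand-in, here is a Monte Carlo dropout sketch in PyTorch: keeping only the dropout layers stochastic at inference and measuring the spread of repeated embeddings yields a cheap, heuristic trust signal. The `embedding_model` argument and sample count are assumed placeholders, not the papers' actual setup.

```python
import torch
import torch.nn as nn

def mc_dropout_uncertainty(embedding_model, x, n_samples=20):
    """Heuristic embedding trust score via Monte Carlo dropout.

    Embeds the input n_samples times with dropout active and returns
    the mean per-dimension variance: a higher spread suggests a less
    trustworthy embedding for this input.
    """
    embedding_model.eval()
    for m in embedding_model.modules():  # re-enable dropout only,
        if isinstance(m, nn.Dropout):    # leaving batch norm frozen
            m.train()
    with torch.no_grad():
        samples = torch.stack([embedding_model(x) for _ in range(n_samples)])
    embedding_model.eval()
    return samples.var(dim=0).mean(dim=-1)  # one score per batch item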
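The fracture-detection paper's exact attack setup is likewise not given in the summary; a standard first check is the fast gradient sign method (FGSM), which compares accuracy on clean images against accuracy on inputs perturbed one step along the sign of the loss gradient. A PyTorch sketch, with `model`, `images`, `labels`, and the budget `eps` as assumed placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, eps=2 / 255):
    """One-step FGSM attack: x_adv = clamp(x + eps * sign(grad_x loss))."""
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def accuracy(model, images, labels):
    return (model(images).argmax(dim=1) == labels).float().mean().item()

# Robustness check: the gap between these two numbers is the accuracy
# the attack removes at this perturbation budget.
# clean_acc = accuracy(model, images, labels)
# adv_acc   = accuracy(model, fgsm(model, images, labels), labels)
```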

Sources

Unlocking Neural Transparency: Jacobian Maps for Explainable AI in Alzheimer's Detection

Quantifying the uncertainty of model-based synthetic image quality metrics

A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches

Opening the black box of deep learning: Validating the statistical association between explainable artificial intelligence (XAI) and clinical domain knowledge in fundus image-based glaucoma diagnosis

Secure Diagnostics: Adversarial Robustness Meets Clinical Interpretability

A Meaningful Perturbation Metric for Evaluating Explainability Methods
