Advances in Out-of-Distribution Detection and Interpretable Reinforcement Learning

The field of machine learning is moving toward more robust and transparent models, with particular focus on out-of-distribution (OOD) detection and interpretable reinforcement learning. Researchers are exploring new ways to improve the reliability of deep learning models in open-world settings, including the use of pre-trained vision-language models and multimodal representations. Notable papers include SILVA, which introduces an automated framework for semantic interpretability in reinforcement learning, and CQ-DINO, which proposes a category query-based framework for vast vocabulary object detection. Other work, such as PRO and Enhanced OoD Detection, advances OOD detection directly, reporting state-of-the-art results on standard benchmarks.
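To make the vision-language theme concrete, below is a minimal sketch of zero-shot OOD scoring with a pre-trained CLIP model, in the spirit of maximum-concept-matching style approaches: an image that matches no in-distribution class prompt confidently receives a high OOD score. The class names, prompt template, temperature, and threshold here are illustrative assumptions, not details taken from any of the listed papers.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# In-distribution class names (illustrative; real benchmarks use e.g. ImageNet classes).
id_classes = ["cat", "dog", "car", "airplane"]
prompts = clip.tokenize([f"a photo of a {c}" for c in id_classes]).to(device)

@torch.no_grad()
def ood_score(image_path: str, temperature: float = 0.01) -> float:
    """Return an OOD score in [0, 1]; higher means more likely out-of-distribution."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    # Cosine similarity between the image and each in-distribution class prompt.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = (img_feat @ txt_feat.T).squeeze(0)
    # Maximum softmax probability over class prompts: a confident match to
    # some in-distribution class yields a low OOD score.
    msp = torch.softmax(sims / temperature, dim=-1).max().item()
    return 1.0 - msp

# Flag inputs whose best class match is weak (the 0.5 threshold is an assumption).
if ood_score("input.jpg") > 0.5:
    print("likely out-of-distribution")
```

In practice the text embeddings would be computed once and cached, and the threshold tuned on held-out data; this sketch only illustrates why a pre-trained vision-language model needs no OOD training data to produce a usable detection score.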

Sources

Towards Automated Semantic Interpretability in Reinforcement Learning via Vision-Language Models

CQ-DINO: Mitigating Gradient Dilution via Category Queries for Vast Vocabulary Object Detection

Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection

Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal Representations

Post-Hoc Calibrated Anomaly Detection

ProHOC: Probabilistic Hierarchical Out-of-Distribution Classification via Multi-Depth Networks

Extremely Simple Out-of-distribution Detection for Audio-visual Generalized Zero-shot Learning

RUNA: Object-level Out-of-Distribution Detection via Regional Uncertainty Alignment of Multimodal Representations

VisTa: Visual-contextual and Text-augmented Zero-shot Object-level OOD Detection
