Advances in Uncertainty Quantification and Conformal Prediction

The field of uncertainty quantification and conformal prediction is moving toward more robust and reliable methods for evaluating and mitigating uncertainty in machine learning models. Researchers are addressing open challenges such as bias in evaluation metrics (e.g., spurious interactions with response length when evaluating language models) and the need for more efficient prediction sets. Conformal prediction is emerging as a key technique for attaching statistical coverage guarantees to model predictions, with applications spanning language models, vision-language models, aerial image classification, and industrial surface defect detection. Notable papers in this area include:

  • SConU, which proposes a novel approach to selective conformal uncertainty that enables rigorous control of miscoverage rates and improves prediction efficiency.
  • Data-Driven Calibration of Prediction Sets in Large Vision-Language Models, which addresses the critical challenge of hallucination mitigation in visual question answering through a split conformal prediction framework (a minimal sketch of the general split conformal recipe follows this list).
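To make the coverage guarantee concrete, here is a minimal, self-contained sketch of split (inductive) conformal prediction for classification, the general recipe these papers build on. It is illustrative only: the nonconformity score, the toy data, and all names (`conformal_quantile`, `prediction_set`) are assumptions of this sketch, not code from the papers above.

```python
# Minimal sketch of split (inductive) conformal prediction for classification.
# Illustrative only; names and data are hypothetical, not from the cited papers.
import numpy as np

def conformal_quantile(cal_scores: np.ndarray, alpha: float) -> float:
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q_level, 1.0), method="higher")

def prediction_set(probs: np.ndarray, qhat: float) -> np.ndarray:
    """Labels whose nonconformity score (1 - predicted prob) falls below qhat."""
    return np.where(1.0 - probs <= qhat)[0]

# Toy example: a 3-class model with 500 held-out calibration points.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)           # stand-in softmax outputs
cal_labels = rng.integers(0, 3, size=500)                 # stand-in true labels
cal_scores = 1.0 - cal_probs[np.arange(500), cal_labels]  # nonconformity scores

qhat = conformal_quantile(cal_scores, alpha=0.1)          # target 90% coverage
test_probs = rng.dirichlet(np.ones(3))
print(prediction_set(test_probs, qhat))                   # prediction set for one test point
```

Under exchangeability of calibration and test points, the returned set contains the true label with probability at least 1 − α. Methods like those above refine this recipe, e.g., with selective filtering of uncertain samples (SConU) or task-specific nonconformity scores for vision-language models.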

Sources

Revisiting Uncertainty Quantification Evaluation in Language Models: Spurious Interactions with Response Length Bias Results

SConU: Selective Conformal Uncertainty in Large Language Models

Automatically Detecting Numerical Instability in Machine Learning Applications via Soft Assertions

Aerial Image Classification in Scarce and Unconstrained Environments via Conformal Prediction

Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction

Conformal Segmentation in Industrial Surface Defect Detection with Statistical Guarantees
