Research in uncertainty quantification and conformal prediction is moving toward more robust and reliable methods for evaluating and mitigating uncertainty in machine learning models. Current work targets challenges such as bias in evaluation metrics and the need for more efficient calibration procedures. Conformal prediction, in particular, is emerging as a key technique for attaching statistical coverage guarantees to model predictions, with applications spanning language models, computer vision, and industrial surface defect detection. Notable papers in this area include:
- SConU, which proposes a selective conformal uncertainty approach that enables rigorous control of miscoverage rates while improving prediction efficiency.
- Data-Driven Calibration of Prediction Sets in Large Vision-Language Models, which addresses the critical challenge of hallucination mitigation in visual question answering through a split conformal prediction framework (a generic sketch of the split conformal recipe follows this list).
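
For orientation, below is a minimal sketch of the split conformal prediction recipe that work like this builds on, assuming softmax outputs from an arbitrary pretrained classifier. The function names, the nonconformity score (one minus the true-class probability), and the variable names are illustrative assumptions, not taken from either cited paper.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Calibrate a score threshold on a held-out calibration split.

    cal_probs  : (n, K) softmax probabilities for n calibration examples.
    cal_labels : (n,) integer true labels.
    alpha      : target miscoverage rate (e.g. 0.1 for ~90% coverage).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_sets(test_probs, qhat):
    """Include every class whose nonconformity score is within the threshold."""
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Usage (illustrative):
#   qhat = conformal_quantile(cal_probs, cal_labels, alpha=0.1)
#   sets = prediction_sets(test_probs, qhat)
# Under exchangeability, the resulting sets satisfy the marginal guarantee
# P(true label is in the set) >= 1 - alpha.
```

The same calibrate-then-threshold pattern underlies the selective and vision-language variants above; they differ mainly in how the nonconformity score is defined and in which samples are admitted to calibration.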