Enhancing Robustness and Fairness in Computer Vision

Recent work in computer vision has focused on improving the robustness, fairness, and interpretability of foundation models. A notable trend is the use of Conformal Prediction (CP) to support the safe deployment of vision models in high-stakes applications, with studies indicating that Vision Transformers are well suited to conformalization procedures. Another key direction is bias mitigation in visual recognition, where approaches such as MAVias and localized counterfactual generation show promise for detecting and reducing societal biases. There is also growing emphasis on open-world modeling and continual learning to address fairness and robustness in diverse, dynamic environments. Together, these developments aim to make computer vision models more trustworthy and applicable in real-world scenarios.
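For readers unfamiliar with CP, the core procedure is simple: hold out a calibration set, score how "nonconforming" the model's predictions are on it, and use a quantile of those scores to turn point predictions into sets with a coverage guarantee. The sketch below shows standard split conformal prediction applied to softmax outputs; it is a generic illustration, not the specific method of any paper cited here, and the function name and synthetic Dirichlet "model outputs" are stand-ins for a real vision model.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) softmax outputs on new inputs
    alpha:      target miscoverage rate (0.1 -> ~90% coverage)
    Returns a list of prediction sets (arrays of class indices).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Keep every class whose nonconformity score falls within the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Toy usage: synthetic softmax outputs stand in for a vision model's predictions.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)
cal_labels = rng.integers(0, 10, size=500)
test_probs = rng.dirichlet(np.ones(10), size=3)
for prediction_set in split_conformal_sets(cal_probs, cal_labels, test_probs):
    print(prediction_set)
```

The resulting sets grow when the model is uncertain and shrink when it is confident, which is what makes the approach attractive for high-stakes deployment of the foundation models discussed above.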

Sources

Are foundation models for computer vision good conformal predictors?

Is Self-Supervision Enough? Benchmarking Foundation Models Against End-to-End Training for Mitotic Figure Classification

MAVias: Mitigate any Visual Bias

Pinpoint Counterfactuals: Reducing social bias in foundation models via localized counterfactual generation

Towards Robust and Fair Vision Learning in Open-World Environments
