Enhancing Model Robustness Against Noisy Labels

Recent research in machine learning has focused heavily on the challenges posed by noisy labels. A common theme across the studies below is the development of methods to identify, filter, and correct noisy labels in order to improve model performance. Techniques such as distribution-consistency-guided multi-modal hashing, collaborative cross learning, and neighbor-guided universal label calibration have been proposed to mitigate label noise in multi-modal retrieval, learning under semantic contamination, and unsupervised visible-infrared person re-identification, respectively. Other approaches, such as self-taught on-the-fly meta loss rescaling and learning causal transition matrices for instance-dependent label noise, dynamically adjust the training process to better handle noisy data. These advances not only make models more robust to label noise but also pave the way for more reliable machine learning in real-world settings. Notably, the works on distribution-consistency-guided multi-modal hashing and collaborative cross learning stand out for their novel approaches to handling noisy labels.
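The transition-matrix idea mentioned above can be illustrated with a generic "forward" loss correction: push the model's predictions over clean classes through a noise transition matrix before computing cross-entropy against the observed (possibly corrupted) labels. This is a minimal sketch of the standard technique, not the specific algorithm of any paper listed below; in particular, the matrix `T` is assumed known here, whereas the cited work on causal transition matrices learns instance-dependent matrices from data.

```python
import numpy as np

def forward_corrected_loss(probs, noisy_labels, T):
    """Cross-entropy against noisy labels after pushing predictions through T.

    probs:        (n, c) softmax outputs over the *clean* classes
    noisy_labels: (n,) observed, possibly corrupted labels
    T:            (c, c) transition matrix, T[i, j] = P(noisy = j | clean = i)
    """
    noisy_probs = probs @ T  # predicted distribution over the noisy labels
    n = probs.shape[0]
    return -np.mean(np.log(noisy_probs[np.arange(n), noisy_labels] + 1e-12))

# Toy check: 2 classes with 30% symmetric label-flip noise (assumed known
# here for illustration; real methods must estimate it).
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
noisy_labels = np.array([0, 1])
print(round(forward_corrected_loss(probs, noisy_labels, T), 4))  # prints 0.4468
```

With `T` set to the identity matrix the correction vanishes and the function reduces to ordinary cross-entropy, which is a convenient sanity check when experimenting with estimated matrices.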

Sources

Distribution-Consistency-Guided Multi-modal Hashing

Combating Semantic Contamination in Learning with Label Noise

Relieving Universal Label Noise for Unsupervised Visible-Infrared Person Re-Identification by Inferring from Neighbors

Label Errors in the Tobacco3482 Dataset

Learning from Noisy Labels via Self-Taught On-the-Fly Meta Loss Rescaling

Learning Causal Transition Matrix for Instance-dependent Label Noise

Denoising Nearest Neighbor Graph via Continuous CRF for Visual Re-ranking without Fine-tuning
