Strategic Overfitting and Pre-trained Model Utilization in Noisy Label Learning

Recent work on learning with noisy labels has advanced along several fronts. A notable trend is the treatment of overfitting as a controllable mechanism rather than a purely detrimental failure mode: harnessed deliberately, overfitting dynamics can improve model discrimination, particularly in anomaly detection. A second thread leverages pre-trained vision foundation models for medical image classification under label noise, where curriculum fine-tuning paradigms yield improved robustness and performance. Evaluation practice is also shifting: testing frameworks that inject human-like rather than uniformly random label noise provide more realistic scenarios for assessing the robustness of noisy-label methods. Novel loss functions and regularization techniques remain a key line of innovation, aimed at improving convergence and performance in the presence of noisy labels; a sketch of one such robust loss follows this paragraph. Finally, specialized datasets with diverse real-world noise characteristics, particularly for fine-grained classification, are fostering advances in robust machine learning and label correction.
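As a concrete illustration of the robust-loss trend, here is a minimal PyTorch sketch in the well-known active-passive style, pairing normalized cross entropy with mean absolute error. This is not the exact method of any paper listed below; the `alpha`/`beta` weights and function names are illustrative assumptions.

```python
# A minimal sketch (not the method of any cited paper) of a noise-robust
# classification loss: a normalized cross entropy ("active") term paired
# with mean absolute error ("passive"), both known to tolerate label
# noise better than plain cross entropy.
import torch
import torch.nn.functional as F

def normalized_ce(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Normalized cross entropy: CE(y, p) / sum over all classes k of CE(k, p)."""
    log_probs = F.log_softmax(logits, dim=1)
    ce = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # per-sample CE
    denom = -log_probs.sum(dim=1)  # CE summed over every candidate label
    return (ce / denom).mean()

def mae_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Mean absolute error between the softmax output and the one-hot target."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    return (probs - one_hot).abs().sum(dim=1).mean()

def robust_loss(logits, targets, alpha: float = 1.0, beta: float = 1.0):
    # alpha/beta are illustrative hyperparameters, not tuned values.
    return alpha * normalized_ce(logits, targets) + beta * mae_loss(logits, targets)

# Usage: loss = robust_loss(model(x), y)
```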

Noteworthy papers include one that introduces a controllable overfitting framework for anomaly detection, turning overfitting into a tool for model optimization. Another proposes curriculum fine-tuning of a vision foundation model for medical image classification under label noise, substantially outperforming prior baselines. A third generates human-like label noise to create more realistic testing scenarios, underscoring the need for more robust evaluation of learning-with-noisy-labels methods; a sketch of such noise injection follows.
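To make the evaluation point concrete, here is a minimal NumPy sketch of injecting synthetic label noise for robustness testing: uniform symmetric flipping versus a confusion-matrix-based variant that stands in for human-like noise, which concentrates errors on classes annotators tend to confuse. The confusion-matrix values and helper names are illustrative assumptions, not taken from the cited study.

```python
# A minimal sketch of two ways to corrupt clean labels for robustness
# testing. Symmetric noise flips to any other class uniformly; the
# confusion-based variant resamples each label from a (hypothetical)
# annotator confusion matrix, mimicking human-like mistakes.
import numpy as np

rng = np.random.default_rng(0)

def symmetric_noise(labels: np.ndarray, num_classes: int, rate: float) -> np.ndarray:
    """Flip a `rate` fraction of labels to a uniformly random other class."""
    noisy = labels.copy()
    flip = rng.random(len(labels)) < rate
    offsets = rng.integers(1, num_classes, size=flip.sum())
    noisy[flip] = (noisy[flip] + offsets) % num_classes  # never the true class
    return noisy

def confusion_noise(labels: np.ndarray, confusion: np.ndarray) -> np.ndarray:
    """Resample each label from its row of a class-confusion matrix
    (rows sum to 1; the diagonal holds the keep-probability)."""
    return np.array([rng.choice(len(confusion), p=confusion[y]) for y in labels])

# Illustrative example: 3 classes where classes 0 and 1 are often
# confused with each other, while class 2 is rarely mislabeled.
confusion = np.array([[0.80, 0.15, 0.05],
                      [0.15, 0.80, 0.05],
                      [0.02, 0.02, 0.96]])
labels = rng.integers(0, 3, size=1000)
noisy = confusion_noise(labels, confusion)
```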

Sources

Learned Random Label Predictions as a Neural Network Complexity Metric

Selfish Evolution: Making Discoveries in Extreme Label Noise with the Help of Overfitting Dynamics

Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise

Robust Testing for Deep Learning using Human Label Noise

Friend or Foe? Harnessing Controllable Overfitting for Anomaly Detection

NLPrompt: Noise-Label Prompt Learning for Vision-Language Models

Leverage Domain-invariant assumption for regularization

Noisy Ostracods: A Fine-Grained, Imbalanced Real-World Dataset for Benchmarking Robust Machine Learning and Label Correction Methods

Active Negative Loss: A Robust Framework for Learning with Noisy Labels
