Enhancing Model Robustness and Adaptability in Imbalanced and Noisy Data Environments

Recent developments in machine learning and deep learning show a marked shift toward addressing data imbalance, label noise, and out-of-distribution (OOD) data. A common theme across several papers is the design of algorithms that preserve model robustness and performance under these conditions. In particular, active learning strategies reduce reliance on large labeled datasets by selecting the most informative samples for annotation, typically by scoring unlabeled examples with an acquisition function (as sketched below). Architectural advances for neural networks, especially in long-tailed recognition, target heavily imbalanced class distributions. A further trend is the integration of multi-modal data and adaptive learning techniques to improve OOD detection and domain generalization. Together, these innovations aim to make models more adaptable and reliable in real-world settings, where data is rarely clean or balanced.
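
To ground the active-learning theme, the following is a minimal sketch of entropy-based uncertainty sampling, one common acquisition strategy rather than any surveyed paper's exact method; the array shapes and the budget parameter are illustrative assumptions.

import numpy as np

def entropy_acquisition(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` pool samples whose predictive distributions
    have the highest entropy, i.e. where the model is least certain.

    probs: (n_samples, n_classes) softmax outputs on the unlabeled pool.
    Returns pool indices, most informative first.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)[:budget]

# Illustrative pool of 5 samples over 3 classes.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction, low entropy
    [0.34, 0.33, 0.33],  # near-uniform, high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.49, 0.01],
    [0.90, 0.05, 0.05],
])
print(entropy_acquisition(pool_probs, budget=2))  # -> [1 2]

The selected indices would then be sent for annotation, added to the labeled set, and the model retrained; other acquisition functions (margin, mutual information, Fisher-based scores) slot into the same loop.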

Noteworthy papers include 'FisherMask: Enhancing Neural Network Labeling Efficiency in Image Classification Using Fisher Information,' which introduces an active learning approach that uses Fisher information to identify the most informative network parameters (see the sketch below); 'LT-DARTS: An Architectural Approach to Enhance Deep Long-Tailed Learning,' which proposes a new search space and classifier tailored to long-tailed data; and 'Deep Active Learning in the Open World,' which stands out for its handling of OOD data in open-world scenarios.
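
As a hedged illustration of the Fisher-information idea behind FisherMask, the sketch below estimates a diagonal Fisher approximation from squared log-likelihood gradients and keeps the top-k parameters; this is a generic construction under stated assumptions, not the paper's published algorithm, and `model`, `loader`, and `top_k_mask` are illustrative names.

import torch
import torch.nn as nn
import torch.nn.functional as F

def diagonal_fisher(model: nn.Module, loader, device="cpu"):
    """Estimate the diagonal of the Fisher information matrix as the
    mean squared log-likelihood gradient over a data loader."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, _ in loader:
        x = x.to(device)
        model.zero_grad()
        log_probs = F.log_softmax(model(x), dim=1)
        # Sampling labels from the model's own predictions gives the
        # "true" Fisher; dataset labels would give the empirical Fisher.
        y = torch.multinomial(log_probs.exp(), 1).squeeze(1)
        F.nll_loss(log_probs, y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def top_k_mask(fisher, k):
    """Binary mask that keeps the k parameters with the largest
    estimated Fisher information."""
    flat = torch.cat([f.flatten() for f in fisher.values()])
    threshold = torch.topk(flat, k).values.min()
    return {n: (f >= threshold).float() for n, f in fisher.items()}

In an active-learning setting, restricting subsequent informativeness computations to such a small, high-Fisher parameter subset is one plausible route to the labeling efficiency the paper's title refers to.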

Sources

FisherMask: Enhancing Neural Network Labeling Efficiency in Image Classification Using Fisher Information

GCI-ViTAL: Gradual Confidence Improvement with Vision Transformers for Active Learning on Label Noise

LT-DARTS: An Architectural Approach to Enhance Deep Long-Tailed Learning

Deep Active Learning in the Open World

Weak to Strong Learning from Aggregate Labels

Multi-View Majority Vote Learning Algorithms: Direct Minimization of PAC-Bayesian Bounds

Adaptive Conditional Expert Selection Network for Multi-domain Recommendation

Robust Fine-tuning of Zero-shot Models via Variance Reduction

Learning from Limited and Imperfect Data

Feature-Space Semantic Invariance: Enhanced OOD Detection for Open-Set Domain Generalization

Mix from Failure: Confusion-Pairing Mixup for Long-Tailed Recognition

DPU: Dynamic Prototype Updating for Multimodal Out-of-Distribution Detection

Surprisingly Popular Voting for Concentric Rank-Order Models

Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness

A Centralized-Distributed Transfer Model for Cross-Domain Recommendation Based on Multi-Source Heterogeneous Transfer Learning

Long-Tailed Object Detection Pre-training: Dynamic Rebalancing Contrastive Learning with Dual Reconstruction

OOD-SEG: Out-Of-Distribution detection for image SEGmentation with sparse multi-class positive-only annotations
