Advances in Class-Incremental Learning

The field of class-incremental learning is moving toward more effective methods for mitigating catastrophic forgetting. Recent research has focused on balancing stability and plasticity during training and on new directions such as test-time semantic evolution and disentangled manifold learning. Noteworthy papers include RoSE, which proposes a test-time semantic drift compensation framework, and CREATE, which employs a lightweight auto-encoder module to learn a compact manifold for each class. Other approaches leverage knowledge graphs, feature calibration, and adaptive weighted parameter fusion. These advances have delivered measurable performance gains and point toward more accurate and efficient class-incremental learning models.
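To make the per-class manifold idea concrete, here is a minimal sketch of the general technique (not the CREATE implementation): each class gets its own compact linear "auto-encoder" fit by PCA, and a test sample is assigned to the class whose manifold reconstructs it with the lowest error. All names and the choice of a linear encoder are illustrative assumptions.

```python
# Hypothetical sketch of per-class compact manifolds (not CREATE's actual model):
# a linear encode/decode per class, with reconstruction error as the class score.
import numpy as np

class ClassManifold:
    """Compact linear manifold for one class, fit via PCA (top-k components)."""

    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        # SVD of the centered data yields the principal directions.
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        return self

    def reconstruction_error(self, x):
        z = (x - self.mean_) @ self.components_.T   # encode into the manifold
        x_hat = self.mean_ + z @ self.components_   # decode back to feature space
        return float(np.linalg.norm(x - x_hat))

def predict(manifolds, x):
    """Assign x to the class whose manifold reconstructs it best."""
    errors = {c: m.reconstruction_error(x) for c, m in manifolds.items()}
    return min(errors, key=errors.get)

# Toy usage: two classes with well-separated feature distributions.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (50, 5))   # class 0 features near the origin
X1 = rng.normal(5.0, 1.0, (50, 5))   # class 1 features shifted away
manifolds = {0: ClassManifold().fit(X0), 1: ClassManifold().fit(X1)}
```

Because each class keeps only its own small manifold, new classes can be added incrementally without retraining or storing exemplars for the old ones, which is the appeal of this family of methods in the non-exemplar setting.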

Sources

Restoring Forgotten Knowledge in Non-Exemplar Class Incremental Learning through Test-Time Semantic Evolution

Specifying What You Know or Not for Multi-Label Class-Incremental Learning

Reducing Class-wise Confusion for Incremental Learning with Disentangled Manifolds

DualCP: Rehearsal-Free Domain-Incremental Learning via Dual-Level Concept Prototype

Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning

Feature Calibration enhanced Parameter Synthesis for CLIP-based Class-incremental Learning

Adaptive Weighted Parameter Fusion with CLIP for Class-Incremental Learning

Beyond Background Shift: Rethinking Instance Replay in Continual Semantic Segmentation

T-CIL: Temperature Scaling using Adversarial Perturbation for Calibration in Class-Incremental Learning
