Advancements in Continual Learning and Parameter-Efficient Fine-Tuning

The field of continual learning and parameter-efficient fine-tuning is advancing rapidly, with a focus on methods that let models adapt to new tasks and data while minimizing forgetting of previous knowledge. Recent works propose a range of approaches to catastrophic forgetting and to interference between parameters fine-tuned on different tasks. Notable papers include LIFT+, which introduces a lightweight fine-tuning framework that optimizes consistent class conditions, and LoRA-Based Continual Learning with Constraints on Critical Parameter Changes, which proposes freezing critical parameter matrices to mitigate forgetting. Surveys such as Parameter-Efficient Continual Fine-Tuning: A Survey and PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models give a comprehensive overview of the current state of the field. Other noteworthy papers include Bayesian continual learning and forgetting in neural networks, which introduces a Bayesian framework for updating network parameters according to their uncertainty, and MEGA: Second-Order Gradient Alignment for Catastrophic Forgetting Mitigation in GFSCIL, which proposes a model-agnostic paradigm for graph few-shot class-incremental learning.
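As a concrete illustration of the parameter-efficient pattern these works share, the sketch below shows a generic LoRA-style adapter in PyTorch: the pretrained weight is frozen and only two low-rank factors are trained. This is a minimal sketch of the common LoRA mechanism under stated assumptions, not the specific constrained-update scheme of any paper cited above; the class name LoRALinear, the rank, and the scaling hyperparameter are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA-style adapter: frozen base layer plus a trainable low-rank update."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        # Low-rank update delta_W = B @ A; only A and B are trained,
        # giving far fewer trainable parameters than full fine-tuning.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Continual-learning variants such as the constrained approach mentioned above additionally restrict how these factors may change across tasks, for example by freezing factor matrices judged critical to earlier tasks.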