Advancements in Continual Learning and Parameter-Efficient Fine-Tuning

The field of continual learning and parameter-efficient fine-tuning is advancing rapidly, with a focus on methods that let models adapt to new tasks and data while minimizing forgetting of previously acquired knowledge. Recent works propose a range of approaches to the twin challenges of catastrophic forgetting and interference between parameters fine-tuned on different tasks. Notable papers include LIFT+, which introduces a lightweight fine-tuning framework for long-tail learning that optimizes for consistent class conditions, and LoRA-Based Continual Learning with Constraints on Critical Parameter Changes, which mitigates forgetting by freezing parameter matrices identified as critical for earlier tasks. Surveys such as Parameter-Efficient Continual Fine-Tuning: A Survey and PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models give a comprehensive overview of the current state of the field. Other noteworthy contributions include Bayesian continual learning and forgetting in neural networks, which updates network parameters in proportion to their uncertainty within a Bayesian framework, and MEGA: Second-Order Gradient Alignment for Catastrophic Forgetting Mitigation in GFSCIL, a model-agnostic paradigm for graph few-shot class-incremental learning.
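To make the "freeze critical parameters" idea concrete, the sketch below shows a minimal LoRA-style layer and a routine that freezes the low-rank matrices of adapters judged critical before fine-tuning on the next task. The sensitivity scores, the threshold, and the helper name freeze_critical_lora are illustrative assumptions for this digest, not the procedure from the cited paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight and a trainable low-rank adapter."""
    def __init__(self, in_features, out_features, rank=8, scale=1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays fixed
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = scale

    def forward(self, x):
        # y = x W^T + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

def freeze_critical_lora(model, sensitivity, threshold):
    """Freeze LoRA matrices whose sensitivity score (hypothetical criterion,
    e.g. accumulated squared gradients on previous tasks) exceeds `threshold`."""
    for name, module in model.named_modules():
        if isinstance(module, LoRALinear) and sensitivity.get(name, 0.0) > threshold:
            module.lora_A.requires_grad_(False)
            module.lora_B.requires_grad_(False)

# Usage: score adapters on the previous task, then freeze the critical ones
# before training the remaining adapters on the new task.
model = nn.Sequential(LoRALinear(128, 128), nn.ReLU(), LoRALinear(128, 10))
sensitivity = {"0": 0.9, "2": 0.1}  # assumed per-module scores for illustration
freeze_critical_lora(model, sensitivity, threshold=0.5)
```

The same pattern extends naturally to the Bayesian variant mentioned above, where the amount each parameter is allowed to change would instead be scaled by its estimated uncertainty rather than gated by a hard freeze.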

Sources

CardioFit: A WebGL-Based Tool for Fast and Efficient Parameterization of Cardiac Action Potential Models to Fit User-Provided Data

LIFT+: Lightweight Fine-Tuning for Long-Tail Learning

LoRA-Based Continual Learning with Constraints on Critical Parameter Changes

Bayesian continual learning and forgetting in neural networks

MEGA: Second-Order Gradient Alignment for Catastrophic Forgetting Mitigation in GFSCIL

Parameter-Efficient Continual Fine-Tuning: A Survey

Lightweight Road Environment Segmentation using Vector Quantization

PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models

Large Language Model Enhanced Particle Swarm Optimization for Hyperparameter Tuning for Deep Learning Models

A computational framework for longitudinal medication adherence prediction in breast cancer survivors: A social cognitive theory based approach

Vision-Centric Representation-Efficient Fine-Tuning for Robust Universal Foreground Segmentation

Mitigating Parameter Interference in Model Merging via Sharpness-Aware Fine-Tuning

Evaluating Temporal Plasticity in Foundation Time Series Models for Incremental Fine-tuning

Semi-parametric Memory Consolidation: Towards Brain-like Deep Continual Learning

Distribution-aware Forgetting Compensation for Exemplar-Free Lifelong Person Re-identification

HyperFlow: Gradient-Free Emulation of Few-Shot Fine-Tuning

PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning

Dynamic Time-aware Continual User Representation Learning

Noise-Tolerant Coreset-Based Class Incremental Continual Learning

Fine-tune Smarter, Not Harder: Parameter-Efficient Fine-Tuning for Geospatial Foundation Models

Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning
