Continual Learning and Machine Unlearning

Current Developments in Continual Learning and Machine Unlearning

Continual learning and machine unlearning are advancing quickly, driven by the need to adapt to dynamic data streams and to comply with evolving privacy regulations. Recent work is characterized by methods that tackle catastrophic forgetting, data imbalance, and the efficient handling of large-scale graph data. Below, we summarize the general directions of this work and highlight particularly noteworthy contributions.

General Directions

  1. Efficient Continual Graph Learning: There is a growing focus on methods that learn efficiently from sequential graph data while preserving previously acquired knowledge. Techniques such as replay strategies and combined sampling are being explored to manage the interdependencies between incoming graphs and to keep training efficient (see the replay sketch after this list).

  2. Probabilistic Frameworks for Concept Drift: Researchers are proposing probabilistic methods to handle concept drift in data streams. These methods identify and adapt to changes in data relevance over time, so that models can learn effectively from both changing and recurring concepts (a minimal drift-detection sketch follows this list).

  3. Stability-Plasticity Dilemma: The balance between preserving previous knowledge and adapting to new environments is a central challenge. Curriculum learning and adaptive methods are being developed to address this dilemma, particularly in complex multi-agent domains like train scheduling.

  4. Graph Unlearning: With the rise of privacy concerns, graph unlearning techniques are gaining importance. Novel paradigms like community-centric graph unlearning are being introduced to efficiently eliminate the effects of specific data on graph neural networks.

  5. Unsupervised and Data-Free Learning: There is a shift towards unsupervised and data-free learning scenarios, especially in class incremental learning. Methods that can capture comprehensive feature representations and discover unknown classes without labeled data are being explored.

  6. Imbalance Rectification: Addressing data imbalance in continual learning is becoming crucial. Analytic imbalance rectifier algorithms and re-weighting modules are being developed to balance the contribution of each category to the overall loss (see the re-weighting sketch after this list).

  7. Model Growth and Continual Learning: The issue of growth-induced forgetting is being addressed through data-driven sparse layer expansion and on-data initialization, which let models grow while remaining adaptable and retaining prior knowledge (a layer-expansion sketch follows this list).

  8. Multi-Label and Multi-View Learning: Techniques for rebalancing multi-label class-incremental learning and handling incomplete multi-view data are being developed to improve performance in real-world scenarios.
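
To make the replay idea in (1) concrete, here is a minimal sketch of a reservoir-sampled replay buffer interleaved with training on the current task. It is a generic PyTorch illustration, not the E-CGL method; the ReplayBuffer class, its capacity, and the replay_weight parameter are assumptions made for the sketch.

```python
import random

import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Reservoir-sampled store of past (input, label) pairs."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps a uniform sample of the whole stream.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_step(model, optimizer, x_new, y_new, buffer, replay_weight=1.0):
    """One step on the current task, regularized by replayed old examples."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:
        x_old, y_old = buffer.sample(len(x_new))
        # Rehearsing past data is what counteracts catastrophic forgetting.
        loss = loss + replay_weight * F.cross_entropy(model(x_old), y_old)
    loss.backward()
    optimizer.step()
    for x, y in zip(x_new, y_new):
        buffer.add(x.detach(), y.detach())
    return loss.item()
```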
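
The drift-adaptation idea in (2) can be grounded with a Page-Hinkley test over the model's running error. This is a classic drift-detection baseline rather than the probabilistic framework of the cited paper, and the delta and threshold values below are illustrative.

```python
class PageHinkley:
    """Classic Page-Hinkley change detector over a stream of error values."""

    def __init__(self, delta=0.005, threshold=50.0):
        self.delta = delta          # tolerated magnitude of change
        self.threshold = threshold  # alarm level for the cumulative statistic
        self.mean = 0.0
        self.count = 0
        self.cum = 0.0
        self.min_cum = 0.0

    def update(self, error):
        """Feed one per-example error; return True when drift is detected."""
        self.count += 1
        self.mean += (error - self.mean) / self.count
        self.cum += error - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.threshold
```

When the detector fires, a stream learner might retrain on recent data or, for recurring concepts, switch back to a stored model that matched a previous distribution.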
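
The re-weighting idea in (6) can be illustrated with inverse-frequency class weights passed to a standard cross-entropy loss. This is a generic stand-in rather than AIR's analytic rectifier; the class_weights helper and its smoothing term are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F


def class_weights(labels, num_classes, smoothing=1.0):
    """Inverse-frequency weights, normalized so they average to 1."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    weights = 1.0 / (counts + smoothing)  # rarer classes weigh more
    return weights * num_classes / weights.sum()


# Usage on an imbalanced toy batch: class 0 dominates, so it is down-weighted.
labels = torch.tensor([0, 0, 0, 0, 1, 2])
logits = torch.randn(6, 3)
loss = F.cross_entropy(logits, labels, weight=class_weights(labels, 3))
```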
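
Finally, the growth-without-forgetting idea in (7) reduces, in its simplest form, to widening a layer while copying the existing weights and initializing the new units so they start inert. The sketch below is a generic illustration, not SparseGrow's data-driven sparse expansion or its on-data initialization; expand_linear is a hypothetical helper.

```python
import torch
import torch.nn as nn


def expand_linear(old: nn.Linear, extra_out: int) -> nn.Linear:
    """Widen a linear layer by extra_out units without disturbing old ones."""
    new = nn.Linear(old.in_features, old.out_features + extra_out)
    with torch.no_grad():
        new.weight[: old.out_features] = old.weight  # keep learned knowledge
        new.bias[: old.out_features] = old.bias
        new.weight[old.out_features :].zero_()       # new units start inert
        new.bias[old.out_features :].zero_()
    return new
```

Zero-initializing the new rows means the widened layer computes exactly what the old one did until training updates the new units, which is one simple way to grow without immediately forgetting.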

Noteworthy Contributions

  • E-CGL: An Efficient Continual Graph Learner: This method manages the correlations among graphs encountered during continual training and improves efficiency on large graphs.
  • AIR: Analytic Imbalance Rectifier for Continual Learning: AIR introduces an analytic re-weighting module to balance the contribution of each category in data-imbalanced scenarios.
  • SparseGrow: Addressing Growth-Induced Forgetting: SparseGrow employs data-driven sparse layer expansion to control efficient parameter usage and enhance adaptability.
  • A Unified Framework for Continual Learning and Machine Unlearning: This framework jointly tackles both tasks through controlled knowledge distillation, enabling efficient learning and effective unlearning (a distillation sketch follows this list).
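
A common recipe for combining retention and removal through distillation, in the spirit of the last item above, is to pull the student toward a frozen teacher on the data to keep while pushing its predictions toward an uninformative uniform distribution on the data to forget. The sketch below illustrates that generic recipe, not the cited framework's exact losses; unlearning_distill_loss and its weights are assumptions.

```python
import torch
import torch.nn.functional as F


def unlearning_distill_loss(student_retain, teacher_retain, student_forget,
                            temperature=2.0, forget_weight=1.0):
    """Distill on retained data; erase class signal on forgotten data."""
    T = temperature
    # Retain: match the frozen teacher's softened predictions (standard KD).
    retain = F.kl_div(
        F.log_softmax(student_retain / T, dim=1),
        F.softmax(teacher_retain / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Forget: drive predictions toward the uniform distribution, removing
    # class-specific information about the forgotten examples.
    num_classes = student_forget.size(1)
    uniform = torch.full_like(student_forget, 1.0 / num_classes)
    forget = F.kl_div(
        F.log_softmax(student_forget, dim=1),
        uniform,
        reduction="batchmean",
    )
    return retain + forget_weight * forget
```

Here the retain set would be the remaining training data, the forget set the data subject to a deletion request, and the teacher a frozen copy of the model from before unlearning.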

These advances highlight the field's progress toward more adaptive, efficient, and privacy-compliant learning systems, as researchers continue to tackle the challenges outlined above.

Sources

E-CGL: An Efficient Continual Graph Learner

A Probabilistic Framework for Adapting to Changing and Recurring Concepts in Data Streams

Mitigating the Stability-Plasticity Dilemma in Adaptive Train Scheduling with Curriculum-Driven Continual DQN Expansion

Community-Centric Graph Unlearning

Exploiting Fine-Grained Prototype Distribution for Boosting Unsupervised Class Incremental Learning

AIR: Analytic Imbalance Rectifier for Continual Learning

SparseGrow: Addressing Growth-Induced Forgetting in Task-Agnostic Continual Learning

Data-Free Class Incremental Gesture Recognition via Synthetic Feature Sampling

Towards Aligned Data Removal via Twin Machine Unlearning

On Missing Scores in Evolving Multibiometric Systems

A Unified Framework for Continual Learning and Machine Unlearning

Rebalancing Multi-Label Class-Incremental Learning

AEMLO: AutoEncoder-Guided Multi-Label Oversampling

Evidential Deep Partial Multi-View Classification With Discount Fusion

Learning Unknowns from Unknowns: Diversified Negative Prototypes Generator for Few-Shot Open-Set Recognition

Online Continuous Generalized Category Discovery

Data Augmentation for Continual RL via Adversarial Gradient Episodic Memory