Advances in Knowledge Distillation and Dataset Distillation

The field of machine learning is moving toward more efficient methods for knowledge transfer and data compression. Recent work has improved knowledge distillation, which transfers knowledge from large, complex models to lightweight counterparts. In parallel, dataset distillation methods compress large datasets into small synthetic sets that preserve the information critical for model training. Together, these advances can improve model performance, shorten training times, and strengthen generalization.

Notable papers in this area include CustomKD, which customizes large vision foundation models to improve edge models via knowledge distillation, and Enhancing Dataset Distillation via Non-Critical Region Refinement, which preserves instance-specific details and fine-grained regions in synthetic data while enriching non-critical regions with class-general information. Also noteworthy are Delving Deep into Semantic Relation Distillation, which introduces a semantics-based methodology for relation knowledge distillation, and Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation, which employs a curriculum selection framework for efficient dataset distillation at high images-per-class (IPC) settings.
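To make the knowledge-distillation idea above concrete, the following is a minimal sketch of the classic temperature-scaled distillation loss (KL divergence between a teacher's softened outputs and a student's). It uses only the Python standard library; the function names and the choice of temperature are illustrative, not drawn from any of the papers listed here.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T produces a softer
    # distribution, exposing the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels; the loss is zero when the student's logits match the teacher's and grows as they diverge.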

Sources

City2Scene: Improving Acoustic Scene Classification with City Features

Finding Stable Subnetworks at Initialization with Dataset Distillation

Dataset Distillation for Quantum Neural Networks

CustomKD: Customizing Large Vision Foundation for Edge Model Improvement via Knowledge Distillation

Enhancing Dataset Distillation via Non-Critical Region Refinement

Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation

Delving Deep into Semantic Relation Distillation
