Recent advances in knowledge distillation have concentrated on transferring category-level information more effectively and on coping with increasingly complex datasets. Approaches such as preview-based category contrastive learning and correlation-aware knowledge distillation frameworks explicitly optimize category representations and model the correlations between instance and category features, yielding more discriminative category centers and better classification accuracy. In parallel, self-supervised keypoint detection with distilled depth keypoint representation has reduced error rates and improved keypoint accuracy across several datasets. Dataset distillation is also moving away from its reliance on large-scale soft labels by increasing within-class diversity and applying class-wise supervision during image synthesis, while the emphasis on discriminative features for complex scenarios and the use of diverse diffusion augmentation in data-free knowledge distillation point toward more practical and effective solutions. Together, these developments mark a steady progression toward more efficient and accurate knowledge distillation techniques.
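
The specific methods surveyed above introduce their own category-level and contrastive objectives, which are not reproduced here. As background, the sketch below is a minimal, hedged illustration of the standard soft-label distillation objective (temperature-scaled KL divergence between teacher and student predictions) that such category-level transfer schemes typically build on; the function names, temperature, and weighting are illustrative assumptions rather than any particular paper's formulation.

```python
import torch
import torch.nn.functional as F

def soft_label_kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard temperature-scaled distillation loss (Hinton-style).

    Both teacher and student logits are softened with the same temperature,
    and the KL divergence between the resulting distributions is minimized.
    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

def total_loss(student_logits, teacher_logits, targets, alpha=0.5, temperature=4.0):
    # Illustrative combination: hard-label cross-entropy plus the soft-label
    # distillation term, mixed by a hypothetical weight alpha.
    ce = F.cross_entropy(student_logits, targets)
    kd = soft_label_kd_loss(student_logits, teacher_logits, temperature)
    return (1 - alpha) * ce + alpha * kd
```

Category-level and contrastive variants typically replace or augment the per-instance KL term with losses computed against class centers or between instance and category features, but the soft-label mechanism above remains the common starting point.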