Research in class-incremental learning continues to center on mitigating catastrophic forgetting. Recent work focuses on balancing stability and plasticity during training and on newer directions such as test-time semantic evolution and disentangled manifold learning. Noteworthy papers include RoSE, which proposes a framework for compensating semantic drift at test time, and CREATE, which uses a lightweight auto-encoder module to learn a compact manifold for each class. Other approaches draw on knowledge graphs, feature calibration, and adaptive weighted parameter fusion. Together, these methods report notable performance gains and point toward more accurate and efficient class-incremental models.
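To make the per-class compact-manifold idea concrete, here is a minimal sketch (not CREATE's actual architecture, which uses a trained auto-encoder): each class is modeled by a low-rank linear subspace, the simplest form of auto-encoder, and a sample is assigned to the class whose subspace reconstructs it with the smallest error. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def fit_class_subspaces(features, labels, k=2):
    """Fit a rank-k linear subspace (a linear 'auto-encoder') per class.

    This is a hypothetical stand-in for a learned per-class manifold:
    the top-k principal directions of each class's features.
    """
    subspaces = {}
    for c in np.unique(labels):
        X = features[labels == c]
        mean = X.mean(axis=0)
        # Top-k right singular vectors span the class's compact manifold.
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        subspaces[c] = (mean, Vt[:k])
    return subspaces

def classify(x, subspaces):
    """Assign x to the class whose subspace reconstructs it best."""
    best, best_err = None, np.inf
    for c, (mean, V) in subspaces.items():
        # Project onto the class subspace, then measure reconstruction error.
        recon = mean + (x - mean) @ V.T @ V
        err = np.linalg.norm(x - recon)
        if err < best_err:
            best, best_err = c, err
    return best

# Toy demo: two classes lying near different 2-D planes in 5-D space.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5))
B = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5)) + 5.0
X = np.vstack([A, B])
y = np.array([0] * 100 + [1] * 100)

subs = fit_class_subspaces(X, y, k=2)
preds = np.array([classify(x, subs) for x in X])
acc = (preds == y).mean()
print(acc)
```

The appeal of this family of methods in the incremental setting is that each class's model is fit independently, so adding a new class never overwrites old ones.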