Recent advances in this area have focused primarily on making deep learning models more interpretable and controllable, particularly in settings where data imbalance and the black-box nature of models pose significant challenges. A notable trend is the use of generative models to mitigate data scarcity in anomaly detection by synthesizing anomaly datasets, with particular emphasis on disentangling the background from defects so that the synthetic data are both realistic and effective for training.

There is also growing interest in unraveling the inner workings of neural networks through information-theoretic approaches such as Sparse Rate Reduction, which has shown promise for improving generalization; the underlying objective is sketched below. In parallel, the study of neural collapse under imbalanced data has yielded theoretical insights into the convergence properties of models, contributing to a better understanding of how neural networks behave under varying data distributions.

Furthermore, network inversion techniques are being developed to make neural networks more transparent and interpretable, with applications in out-of-distribution detection and training-data reconstruction. Finally, advances in subspace separability within neural networks, such as the introduction of ESS-ReduNet, have demonstrated significant improvements in convergence speed and classification accuracy, particularly on datasets with complex feature spaces; a minimal numerical illustration of the rate-reduction quantities involved closes this section.
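As a point of reference for the Sparse Rate Reduction trend mentioned above, one common formulation in the maximal-coding-rate-reduction line of work (the exact objective and notation vary by paper) asks a feature map \(f\) to produce representations \(Z = f(X) \in \mathbb{R}^{d \times n}\) that maximize a rate-reduction term minus a sparsity penalty:

\[
\max_{f}\; \Delta R(Z) - \lambda \lVert Z \rVert_0,
\qquad
\Delta R(Z) \;=\; \underbrace{\tfrac{1}{2}\log\det\!\Bigl(I + \tfrac{d}{n\epsilon^{2}}\, Z Z^{\top}\Bigr)}_{R(Z)} \;-\; R_c(Z),
\]

where \(R(Z)\) is the coding rate of the whole representation at distortion level \(\epsilon\), \(R_c(Z)\) sums the coding rates of the individual classes or subspaces, and the \(\ell_0\) penalty is typically relaxed to \(\ell_1\) in practice. Maximizing \(\Delta R\) expands the representation as a whole while compressing each class, the mechanism generally credited for the generalization benefits noted above.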
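For the neural-collapse discussion, the standard balanced-data characterization, due to Papyan, Han, and Donoho, states that in the terminal phase of training the within-class covariance \(\Sigma_W\) of last-layer features vanishes and the globally centered, normalized class means \(\tilde{\mu}_1, \dots, \tilde{\mu}_K\) converge to a simplex equiangular tight frame:

\[
\Sigma_W \to 0,
\qquad
\langle \tilde{\mu}_k, \tilde{\mu}_{k'} \rangle \to -\frac{1}{K-1} \quad \text{for } k \neq k'.
\]

Theoretical analyses of imbalanced training, of the kind surveyed above, ask when and how this symmetric geometry breaks, for instance whether the means of underrepresented classes drift toward one another rather than remaining maximally separated.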
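Finally, since ESS-ReduNet builds on the rate-reduction framework, a minimal NumPy sketch of the coding-rate quantities may help make "subspace separability" concrete. The function names and the choice of \(\epsilon\) below are illustrative, not taken from any specific paper:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T): bits needed to code
    the columns of Z (features in R^d) up to distortion eps."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(
        np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)[1]

def class_coding_rate(Z, labels, eps=0.5):
    """R_c(Z): size-weighted sum of per-class coding rates."""
    d, n = Z.shape
    total = 0.0
    for k in np.unique(labels):
        Zk = Z[:, labels == k]
        nk = Zk.shape[1]
        total += (nk / (2 * n)) * np.linalg.slogdet(
            np.eye(d) + (d / (nk * eps ** 2)) * Zk @ Zk.T)[1]
    return total

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R - R_c: large when classes occupy distinct,
    well-separated subspaces; small when they overlap."""
    return coding_rate(Z, eps) - class_coding_rate(Z, labels, eps)

# Toy check: two classes on orthogonal subspaces separate cleanly.
rng = np.random.default_rng(0)
Z = np.zeros((4, 200))
Z[:2, :100] = rng.standard_normal((2, 100))  # class 0 lives in dims 0-1
Z[2:, 100:] = rng.standard_normal((2, 100))  # class 1 lives in dims 2-3
labels = np.repeat([0, 1], 100)
print(rate_reduction(Z, labels))             # clearly positive
```

In this toy setup the two classes occupy orthogonal two-dimensional subspaces, so the whole-set rate is large while each per-class rate is small, yielding a large \(\Delta R\); methods in this family, ESS-ReduNet included, can be read as driving features toward such configurations faster or more reliably.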