Recent machine learning research has focused heavily on the challenges posed by noisy labels across a range of applications. A common theme is the development of methods to identify, filter, and correct noisy labels in order to improve model performance. Techniques such as distribution-consistency guided multi-modal hashing, collaborative cross learning, and neighbor-guided universal label calibration have been proposed to mitigate label noise in multi-modal retrieval, semantic contamination, and unsupervised visible-infrared person re-identification, respectively. Other approaches, such as self-taught on-the-fly meta loss rescaling and learning causal transition matrices for instance-dependent label noise, dynamically adjust the training process to better handle noisy data. These advances strengthen model robustness to label noise and support more reliable, accurate machine learning in real-world settings. The works on distribution-consistency guided multi-modal hashing and collaborative cross learning stand out for their novel approaches to handling noisy labels.
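To make the transition-matrix family of methods concrete, the sketch below implements forward loss correction with a known class-conditional noise matrix. This is the classical, class-level form of the idea, not the instance-dependent causal variant from the work cited above; the function and variable names are illustrative assumptions, not from any of the cited papers.

```python
import numpy as np

def forward_corrected_nll(probs, noisy_label, T):
    """Forward loss correction with a noise transition matrix.

    probs:       model's predicted distribution over CLEAN classes, shape (C,)
    noisy_label: the observed (possibly corrupted) label index
    T:           transition matrix, T[i, j] = P(noisy label = j | clean label = i)

    The model's clean-class distribution is pushed through T to get the
    implied distribution over NOISY labels, and the negative log-likelihood
    is taken against the observed noisy label. With T = identity this
    reduces to the standard cross-entropy loss.
    """
    noisy_probs = probs @ T  # implied P(noisy label | x)
    return -np.log(noisy_probs[noisy_label])

# Example: 20% of class-0 labels flip to class 1, 10% of class-1 labels flip to class 0.
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.9, 0.0],
              [0.0, 0.0, 1.0]])
probs = np.array([0.5, 0.5, 0.0])
loss = forward_corrected_nll(probs, noisy_label=1, T=T)
```

Training against the corrected loss lets the classifier's softmax output estimate the clean-label posterior even though only noisy labels are observed; the instance-dependent methods in the literature generalize this by letting T vary per example.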