Deep learning research is increasingly focused on the security and privacy risks of model training and deployment. One active thread is the detection and prevention of backdoor attacks, in which an adversary poisons training data so that inputs carrying a hidden trigger are misclassified while accuracy on clean inputs is preserved. Recent studies have proposed unified detection frameworks as well as novel attack methods that bypass state-of-the-art defenses.

Machine unlearning, which aims to remove the influence of specific training data from a trained model (for example, to honor deletion requests), is also gaining attention. However, the intersection of machine unlearning and traditional machine learning attacks remains largely unexplored.

Noteworthy papers in this area include a unified backdoor detection framework that reports superior detection performance across different learning paradigms, and an invisible backdoor attack that effectively evades existing detection mechanisms. Additionally, a survey of membership inference attacks on large-scale models highlights the need for systematic study of privacy risks in modern deep learning architectures.
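To make the backdoor threat model concrete, the sketch below shows the classic patch-trigger poisoning setup (in the style of BadNets): a small fraction of the training set is stamped with a fixed corner patch and relabeled to an attacker-chosen target class. All names (`add_trigger`, `poison_dataset`) and parameters are illustrative, not from any specific paper discussed above.

```python
import numpy as np

def add_trigger(images, trigger_value=1.0, size=3):
    """Stamp a small square trigger patch into the bottom-right corner."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, rng=None):
    """Poison a fraction of the training set: add the trigger to those
    samples and relabel them to the attacker's target class."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_label
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" with labels in 0..9.
X = np.random.default_rng(1).random((100, 8, 8))
y = np.random.default_rng(2).integers(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_label=7, rate=0.1)
print(len(idx), yp[idx[0]])  # 10 poisoned samples, each relabeled to 7
```

A model trained on `(Xp, yp)` learns to associate the patch with class 7, which is exactly the behavior that the detection frameworks above try to expose; "invisible" attacks replace the conspicuous patch with imperceptible perturbations.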
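The simplest membership inference attack covered by such surveys is a loss threshold: because models tend to fit training members more tightly than unseen points, a low per-example loss is evidence of membership. A minimal sketch under that assumption (all function names and the simulated probabilities are hypothetical):

```python
import numpy as np

def nll(probs, labels):
    """Per-example negative log-likelihood under the model's predicted probs."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def loss_threshold_attack(probs, labels, threshold):
    """Predict 'member' (True) when the example's loss falls below threshold."""
    return nll(probs, labels) < threshold

# Simulated softmax outputs over 3 classes, true label 0 for every example:
# members get confident (low-loss) predictions, non-members near-uniform ones.
member_probs = np.full((5, 3), 0.05)
member_probs[:, 0] = 0.9
nonmember_probs = np.full((5, 3), 1 / 3)
labels = np.zeros(5, dtype=int)
print(loss_threshold_attack(member_probs, labels, threshold=0.5))     # all True
print(loss_threshold_attack(nonmember_probs, labels, threshold=0.5))  # all False
```

Real attacks calibrate the threshold with shadow models or per-example difficulty, but the gap this sketch exploits, confident predictions on members versus uncertain ones on non-members, is the privacy risk the survey systematizes.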