Research on large language models (LLMs) is increasingly focused on safety and security. Recent studies have highlighted the vulnerability of LLMs to adversarial attacks, notably jailbreaks and backdoors, and researchers are exploring new detection and mitigation methods, such as learning natural language constraints and visual retrieval-augmented generation. There is also growing interest in LLMs that remain robust under domain shift and can adapt to novel safety requirements.

Noteworthy papers in this area include a domain-based taxonomy of jailbreak vulnerabilities and Parasite, a novel backdoor attack that hides its trigger with steganography. Another notable work is a training-free adversarial detection method built on visual retrieval-augmented generation, which achieves state-of-the-art performance in detecting adversarial patches. The sketches below illustrate the general ideas behind these two techniques.
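To make the trigger-hiding idea concrete, here is a minimal sketch of least-significant-bit (LSB) steganography, a standard way to embed a bit pattern in an image without visible change. This is a generic illustration under assumed choices (trigger length, bit layout), not the actual Parasite method.

```python
# Illustrative LSB steganography for hiding a backdoor trigger in an image.
# NOTE: generic sketch, not the Parasite implementation; trigger size and
# placement are assumptions made for this example.
import numpy as np

def embed_trigger_lsb(image: np.ndarray, trigger_bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first len(trigger_bits) pixel
    values with the trigger bit pattern; the change is visually imperceptible."""
    flat = image.astype(np.uint8).flatten().copy()
    n = len(trigger_bits)
    flat[:n] = (flat[:n] & 0xFE) | trigger_bits.astype(np.uint8)
    return flat.reshape(image.shape)

def extract_trigger_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bit pattern from the least significant bits."""
    return image.astype(np.uint8).flatten()[:n_bits] & 0x01

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    trigger = rng.integers(0, 2, size=64, dtype=np.uint8)  # hypothetical 64-bit trigger
    poisoned = embed_trigger_lsb(clean, trigger)
    assert np.array_equal(extract_trigger_lsb(poisoned, 64), trigger)
    # Each pixel changes by at most 1 intensity level.
    print("max pixel delta:", int(np.abs(poisoned.astype(int) - clean.astype(int)).max()))
```

Because the per-pixel perturbation is bounded by one intensity level, a poisoned sample is indistinguishable to a human inspector, which is what makes steganographic triggers hard to spot in training data.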
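The retrieval-augmented detection idea can likewise be sketched as a training-free nearest-neighbor check: embed the query image, retrieve the most similar entries from a reference set of labeled clean and patched embeddings, and flag the query if its neighbors are mostly adversarial. The embedding source, neighborhood size, and threshold below are placeholders; the cited visual-RAG pipeline is more involved.

```python
# Simplified training-free, retrieval-based adversarial-patch detection:
# majority vote over the k nearest reference embeddings (cosine similarity).
# NOTE: a stand-in sketch; embedding model, k, and threshold are assumptions.
import numpy as np

def detect_adversarial(query_emb: np.ndarray,
                       ref_embs: np.ndarray,
                       ref_is_adv: np.ndarray,
                       k: int = 5,
                       threshold: float = 0.5) -> bool:
    """Flag the query if at least `threshold` of its k nearest references are adversarial."""
    q = query_emb / np.linalg.norm(query_emb)
    refs = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    sims = refs @ q                      # cosine similarity to every reference
    topk = np.argsort(-sims)[:k]         # indices of the k most similar references
    return float(ref_is_adv[topk].mean()) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref_embs = rng.normal(size=(100, 512))             # stand-in for image embeddings
    ref_is_adv = rng.integers(0, 2, size=100).astype(float)
    query = rng.normal(size=512)
    print("flagged as adversarial:", detect_adversarial(query, ref_embs, ref_is_adv))
```

The appeal of this style of detector is that it requires no retraining: updating the defense amounts to adding new reference embeddings to the retrieval index.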