Natural language processing is seeing a surge of research on the vulnerability of language models to adversarial attacks, with recent work shifting toward more sophisticated and efficient attack methods that bypass traditional defenses. One notable trend is query-free hard black-box attacks, which generate adversarial examples without querying the target model or observing its outputs. Another is backdoor attacks designed to evade detection by human annotators, with an emphasis on crafting subtle yet effective trigger attributes. Together, these advances underscore the need for language models that remain robust against a wide range of adversarial attacks.

Noteworthy papers in this area include Q-FAKER, which proposes a query-free hard black-box attack method, and BadApex, which introduces an adaptive optimization mechanism for generating poisoned text. In addition, Robo-Troj presents a multi-trigger backdoor attack against LLM-based task planners, and The Ultimate Cookbook for Invisible Poison crafts subtle clean-label text backdoors using style attributes as triggers. These papers mark clear progress in attack capability and reinforce the importance of continued research into the security and robustness of language models.
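
To make the backdoor threat model concrete, here is a minimal, generic sketch of trigger-based data poisoning for text classification. It is purely illustrative and does not reproduce the method of any paper cited above; the trigger phrase, target label, and poisoning rate are hypothetical choices, and real attacks (e.g., clean-label or style-based ones) are considerably more subtle.

```python
import random

# Generic illustration of a trigger-based text backdoor (not the method of any
# paper cited above): a small fraction of training examples is modified to
# contain a trigger phrase and relabeled to the attacker's target class, so a
# model trained on the poisoned data learns to map the trigger to that class.

TRIGGER = "in all honesty"   # hypothetical trigger phrase
TARGET_LABEL = 1             # hypothetical attacker-chosen class
POISON_RATE = 0.05           # fraction of training examples to poison

def poison_dataset(dataset, trigger=TRIGGER, target_label=TARGET_LABEL,
                   rate=POISON_RATE, seed=0):
    """Return a copy of `dataset` (a list of (text, label) pairs) in which a
    random subset has the trigger inserted and the label set to the target."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_poison = max(1, int(rate * len(dataset)))
    for idx in rng.sample(range(len(dataset)), n_poison):
        text, _ = dataset[idx]
        poisoned[idx] = (f"{trigger} {text}", target_label)
    return poisoned

if __name__ == "__main__":
    clean = [("the plot was gripping", 1),
             ("flat acting and a dull script", 0),
             ("a warm, funny film", 1),
             ("i want those two hours back", 0)]
    for text, label in poison_dataset(clean, rate=0.5):
        print(label, "|", text)
```

The attacks surveyed above differ mainly in how they hide this manipulation: clean-label approaches keep the original labels and instead rewrite target-class examples, while style-based triggers replace the conspicuous trigger phrase with a subtle stylistic attribute, making the poisoned text much harder for human annotators to flag.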