The field of biomedical natural language processing is advancing rapidly, with a focus on improving the accuracy and efficiency of clinical information extraction, medical symptom coding, and biomedical relation extraction. Large language models (LLMs) have emerged as powerful tools for medical information retrieval, with applications ranging from Alzheimer's disease research to broader biomedical text analysis. These models have shown promising results in zero-shot and few-shot learning, reducing the need for extensive dataset annotation and domain expertise. In addition, domain-specific models such as Clinical ModernBERT have improved performance on biomedical text analysis tasks.

Noteworthy papers include:

- Synthesized Annotation Guidelines are Knowledge-Lite Boosters for Clinical Information Extraction: proposes a novel method for synthesizing annotation guidelines using LLMs, yielding significant improvements on clinical named entity recognition benchmarks.
- Task as Context Prompting for Accurate Medical Symptom Coding Using Large Language Models: introduces a framework for embedding task-specific context into LLM prompts, demonstrating improved flexibility and accuracy in medical symptom coding tasks.
- AD-GPT: Large Language Models in Alzheimer's Disease: presents a domain-specific generative pre-trained transformer for enhancing the retrieval and analysis of AD-related genetic and neurobiological information, showing superior precision and reliability across critical tasks in AD research.
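To make the "task as context" idea concrete, the sketch below shows one plausible way to embed a task definition and coding scheme directly in the prompt sent to an LLM, rather than relying on the model's prior knowledge of the coding task. This is an illustrative assumption, not the paper's actual implementation; the function name, the example coding scheme, and the output format are all hypothetical.

```python
# Hypothetical sketch of task-as-context prompting for symptom coding.
# The coding scheme and all identifiers below are illustrative, not taken
# from the paper or from any real terminology release.

TASK_CONTEXT = """You are a medical symptom coder.
Task: map each reported symptom to a code from the scheme below.
Coding scheme (illustrative subset):
  10019211 - Headache
  10028813 - Nausea
Output format: one line per symptom, "<symptom> -> <code>"."""


def build_prompt(task_context: str, report: str) -> str:
    """Prepend the task-specific context to the text being coded,
    so the task definition travels with every request."""
    return f"{task_context}\n\nPatient report:\n{report}\n\nCodes:"


# Usage: the resulting string would be sent to an LLM completion endpoint.
prompt = build_prompt(TASK_CONTEXT, "Patient reports severe headache and nausea.")
print(prompt)
```

The design point is that the task definition, label space, and output format are part of the input at inference time, which is what lets the same model flex across coding schemes without retraining.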