The field of large language models (LLMs) is evolving rapidly, with a growing focus on hallucinations: instances in which a model generates content that deviates from the user's input or from its training data, undermining user trust and hindering the adoption of generative AI systems.

Recent work has made measurable progress on detecting and mitigating hallucinations. A taxonomy separating intrinsic from extrinsic hallucinations has been proposed, and new benchmarks and evaluation tasks encourage consistent measurement across studies, including dynamic test-set generation to mitigate data leakage and improve robustness (sketched after the paper list below). Detection methods have also advanced, for example by combining retrieval with natural language inference (NLI) models to predict the factual consistency between retrieved premises and generated hypotheses (also sketched below), and by training hallucination detectors on expert-labeled feedback, which has been shown to substantially improve detection accuracy. Overall, the field is moving toward more robust and reliable LLMs that produce high-quality responses while minimizing hallucinations.

Noteworthy papers include:

- AI Idea Bench 2025, which presents a framework for evaluating and comparing research ideas generated by LLMs.
- HalluLens: LLM Hallucination Benchmark, which introduces a comprehensive hallucination benchmark organized around the extrinsic/intrinsic taxonomy.
- (Im)possibility of Automated Hallucination Detection in Large Language Models, which provides theoretical support for feedback-based methods of training hallucination detectors.
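As a rough illustration of dynamic test-set generation, the sketch below instantiates evaluation items at test time from templates and freshly sampled values, so the exact prompts cannot have appeared in any training corpus. The templates, field names, and tasks are illustrative assumptions rather than the protocol of any particular benchmark.

```python
import random
import datetime

# Hypothetical templates; a real dynamic benchmark would draw from richer task generators.
TEMPLATES = [
    "What is {a} + {b}?",
    "Spell the made-up word '{word}' backwards.",
]

def generate_test_set(n_items: int, seed: int | None = None) -> list[dict]:
    """Build a fresh evaluation set at test time so items cannot have leaked into training data."""
    rng = random.Random(seed if seed is not None else datetime.datetime.now().timestamp())
    items = []
    for _ in range(n_items):
        template = rng.choice(TEMPLATES)
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        word = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(8))
        prompt = template.format(a=a, b=b, word=word)
        # Reference answers are computed programmatically, so grading needs no static answer key.
        reference = str(a + b) if "{a}" in template else word[::-1]
        items.append({"prompt": prompt, "reference": reference})
    return items

# A new set is drawn for every evaluation run; memorized leaked test items give no advantage.
print(generate_test_set(3))
```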
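The next sketch illustrates the retrieval-plus-NLI consistency check, assuming an off-the-shelf NLI checkpoint from Hugging Face (here roberta-large-mnli as an example). The threshold, function names, and flagging rule are illustrative assumptions, not the method of any specific paper surveyed above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Any NLI checkpoint whose labels include "ENTAILMENT" works here; this choice is an assumption.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that the premise (retrieved evidence) entails the hypothesis (model output)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Look up the entailment label index from the model config instead of hard-coding it.
    entail_idx = {label.upper(): idx for idx, label in model.config.id2label.items()}["ENTAILMENT"]
    return probs[entail_idx].item()

def flag_hallucination(evidence_passages: list[str], answer: str, threshold: float = 0.5) -> bool:
    """Flag the answer if no retrieved passage entails it above a (hypothetical) threshold."""
    return max(entailment_score(p, answer) for p in evidence_passages) < threshold

# Example usage: the first claim is supported by the evidence, the second is not.
evidence = ["The Eiffel Tower was completed in 1889 for the Exposition Universelle."]
print(flag_hallucination(evidence, "The Eiffel Tower was completed in 1889."))  # expected: False
print(flag_hallucination(evidence, "The Eiffel Tower was completed in 1925."))  # expected: True
```

In practice the premises would come from a retrieval step over a trusted corpus, and the threshold would be tuned on labeled examples rather than fixed at 0.5.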