Recent work on large language models (LLMs) has focused heavily on hallucinations, that is, ungrounded or factually incorrect outputs. One notable trend is the exploration of fine-tuning strategies that disentangle the learning of skills from the learning of knowledge, reducing the knowledge inconsistency introduced during fine-tuning; combined with fictitious synthetic data, this approach has improved the factuality of LLM outputs. There is also growing interest in understanding and mitigating hallucinations through inter-model interaction, such as debate-driven experiments, which can improve the accuracy and robustness of model outputs. Another significant development is the study of how fabricated knowledge injected into LMs persists and can be erased, yielding insights into the robustness of injected facts and showing that multi-step sparse updates can mitigate data-poisoning effects. Finally, the distinction between ignorance-based hallucinations, where the model lacks the relevant knowledge, and error-based hallucinations, where the model has the knowledge but still answers incorrectly, is receiving increasing emphasis, with new methods emerging to detect and address each type.
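
To make the debate-driven mitigation idea more concrete, the sketch below shows a minimal two-model debate loop followed by a judge model. It is only a sketch under stated assumptions: the `ask_model` function, the model names, the prompt wording, and the fixed number of rounds are illustrative placeholders, not the interface of any particular paper or library.

```python
# Minimal sketch of a debate-driven hallucination check (hypothetical interface).
# `ask_model` is a stub standing in for whatever LLM client you use; replace it
# with a real API call. Prompts and the stopping rule are illustrative only.

def ask_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real client in practice."""
    return f"[{model}] response to: {prompt[:40]}..."

def debate(question: str, rounds: int = 2) -> str:
    """Two models exchange critiques, then a judge model picks a final answer."""
    answer_a = ask_model("model_a", f"Answer concisely: {question}")
    answer_b = ask_model("model_b", f"Answer concisely: {question}")

    for _ in range(rounds):
        # Each debater sees the other's answer and may revise its own.
        answer_a = ask_model(
            "model_a",
            f"Question: {question}\nYour answer: {answer_a}\n"
            f"Another model answered: {answer_b}\n"
            "Point out any factual errors and give your revised answer.",
        )
        answer_b = ask_model(
            "model_b",
            f"Question: {question}\nYour answer: {answer_b}\n"
            f"Another model answered: {answer_a}\n"
            "Point out any factual errors and give your revised answer.",
        )

    # A judge aggregates the debate into a single, hopefully better-grounded answer.
    return ask_model(
        "judge",
        f"Question: {question}\nDebater A: {answer_a}\nDebater B: {answer_b}\n"
        "Give the final answer supported by the debate, or say 'unknown' if unsure.",
    )

if __name__ == "__main__":
    print(debate("Who wrote 'The Selfish Gene'?"))
```

Letting the judge answer "unknown" is one way such a pipeline can abstain rather than fabricate an answer when the debaters fail to converge.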