Recent advances in Large Language Models (LLMs) have focused primarily on improving factual accuracy and reducing hallucinations. A significant trend is the integration of Knowledge Graphs (KGs) as a structured external knowledge source that augments LLMs, improving their ability to generate contextually accurate responses: the structured nature of KGs provides a reliable store of factual information, which is retrieved and then fused with the generative capabilities of LLMs. There is also growing emphasis on fine-grained confidence calibration and self-correction at the level of individual facts, enabling LLMs to assess and rectify their own outputs more precisely. Another notable direction is neurosymbolic methods that combine the strengths of LLMs with formal semantic structures, aiming to strengthen reasoning in complex, real-world scenarios. Advances in numerical reasoning over KGs and the application of LLMs to group point-of-interest (POI) recommendation further illustrate the versatility of these models across domains. Overall, the field is moving toward more structured, reliable, and interpretable models that can serve a wider range of applications.
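To make the retrieve-then-fuse pattern behind KG augmentation concrete, here is a minimal sketch: a toy in-memory triple store, a retriever that matches entities mentioned in the question, and a prompt builder that serializes the retrieved facts ahead of the question. The triples, the `generate` stub, and all names are illustrative assumptions, not any particular system's API.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# Toy in-memory knowledge graph; real systems query an external KG store.
KG: List[Triple] = [
    ("Marie Curie", "field", "physics and chemistry"),
    ("Marie Curie", "award", "Nobel Prize in Physics (1903)"),
    ("Marie Curie", "award", "Nobel Prize in Chemistry (1911)"),
    ("Pierre Curie", "spouse", "Marie Curie"),
]

def retrieve_facts(question: str, kg: List[Triple]) -> List[Triple]:
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

def build_prompt(question: str, facts: List[Triple]) -> str:
    """Serialize retrieved triples as plain-text context ahead of the question."""
    context = "\n".join(f"- {s} {p} {o}." for s, p, o in facts)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (hypothetical; swap in a real client)."""
    return "<model output>"

if __name__ == "__main__":
    question = "Which awards did Marie Curie receive?"
    print(build_prompt(question, retrieve_facts(question, KG)))
```

Grounding generation in retrieved triples this way is what lets the KG act as the factual anchor described above; production systems differ mainly in how retrieval and fusion are implemented, not in the overall shape of the loop.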
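Fact-level confidence calibration and self-correction can be sketched in the same spirit. One common proxy, assumed here, scores each atomic fact by how consistently the model answers when re-queried several times, flagging low-agreement facts for revision; the decomposition into atomic facts, the sampler stub, and the 0.6 threshold are all placeholders rather than a published method.

```python
import random
from collections import Counter
from typing import Callable, List, Tuple

def fact_confidence(samples: List[str]) -> float:
    """Agreement of the majority answer across samples: a crude
    self-consistency proxy for per-fact confidence."""
    _, freq = Counter(samples).most_common(1)[0]
    return freq / len(samples)

def audit_facts(
    facts: List[str],
    sampler: Callable[[str], str],
    n_samples: int = 5,
    threshold: float = 0.6,  # assumed cutoff; tune per task
) -> List[Tuple[str, float, bool]]:
    """Score each atomic fact by re-asking the model n_samples times;
    facts whose confidence falls below the threshold are flagged."""
    results = []
    for fact in facts:
        prompt = f"True or false: {fact}"
        samples = [sampler(prompt) for _ in range(n_samples)]
        conf = fact_confidence(samples)
        results.append((fact, conf, conf < threshold))
    return results

if __name__ == "__main__":
    # Stub standing in for a stochastic LLM call (hypothetical).
    def stub(prompt: str) -> str:
        return random.choice(["true", "true", "false"])

    facts = ["Marie Curie won two Nobel Prizes.", "Marie Curie was born in 1900."]
    for fact, conf, flagged in audit_facts(facts, stub):
        print(f"{conf:.2f} {'FLAG' if flagged else 'ok  '} {fact}")
```

Sampling agreement is only one possible calibration signal; token-level log-probabilities or a trained verifier can fill the same role, but the flag-then-revise loop stays the same.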
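The neurosymbolic direction typically pairs a neural proposer with a symbolic checker. The sketch below is one hypothetical instantiation, assuming LLM-proposed facts pass through hand-written formal constraints (standing in for an ontology or logic program) before being accepted; both rules and all names are assumptions for illustration.

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]
Rule = Callable[[Triple, List[Triple]], bool]

def irreflexive_parent(candidate: Triple, accepted: List[Triple]) -> bool:
    """Formal constraint: no entity may be its own parent."""
    s, p, o = candidate
    return not (p == "parent_of" and s == o)

def functional_birthplace(candidate: Triple, accepted: List[Triple]) -> bool:
    """Formal constraint: an entity has at most one birthplace."""
    s, p, o = candidate
    return not (
        p == "born_in"
        and any(a[0] == s and a[1] == "born_in" and a[2] != o for a in accepted)
    )

RULES: List[Rule] = [irreflexive_parent, functional_birthplace]

def symbolic_filter(candidates: List[Triple], rules: List[Rule]) -> List[Triple]:
    """Accept LLM-proposed facts only if every formal rule holds."""
    accepted: List[Triple] = []
    for c in candidates:
        if all(rule(c, accepted) for rule in rules):
            accepted.append(c)
    return accepted

if __name__ == "__main__":
    proposed = [
        ("Ada", "born_in", "London"),
        ("Ada", "born_in", "Paris"),   # rejected: conflicts with the first
        ("Ada", "parent_of", "Ada"),   # rejected: irreflexivity
    ]
    print(symbolic_filter(proposed, RULES))
```

Keeping the constraints in a separate, inspectable symbolic layer is precisely what makes this family of methods more interpretable than end-to-end generation alone.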