Recent advances in Vision-Language Navigation (VLN) and Object Goal Navigation (ObjectNav) reflect a clear shift toward more sophisticated, cognition-inspired models. Researchers are increasingly integrating cognitive processes, large language models (LLMs), and novel navigation strategies to improve the efficiency and adaptability of navigation systems. Notably, the introduction of cognitive modeling in ObjectNav, exemplified by CogNav, has produced human-like navigation behaviors and marked gains on standard benchmarks. The exploration of mapless navigation techniques such as ALC-ON likewise points toward more flexible and scalable solutions for long-distance autonomy. Other notable contributions include abstract top-down maps for maze navigation and large-scale annotated VLN corpora, which provide valuable resources for training and evaluating navigation models. Together, these developments push the boundaries of embodied AI and open new avenues for real-world applications and further research.