Recent advances in simultaneous localization and mapping (SLAM) and augmented reality (AR) reflect a clear shift toward integrating multi-modal data sources and leveraging deep learning for improved performance and robustness. Researchers increasingly target systems that meet real-time processing demands while maintaining high accuracy, particularly on resource-constrained platforms such as mobile devices and AR headsets. In depth estimation for AR, work is moving toward hardware-friendly models that cut latency and improve accuracy by eliminating time-consuming preprocessing steps. The fusion of LiDAR and visual data for outdoor 3D reconstruction and SLAM is also gaining traction, with methods that use differentiable spatial representations, such as 3D Gaussian splatting, to overcome the limitations of traditional map representations. Indoor SLAM systems are evolving as well, with approaches that use mixed reality to visualize and correct mapping errors in real time, improving the quality of 3D reconstructions. In parallel, robust estimation techniques with provable error bounds are addressing limited landmark visibility and computational efficiency in SLAM. Finally, the introduction of realistic datasets for evaluating SLAM in crowded indoor environments underscores the need for more robust and adaptable solutions tailored to human navigation scenarios.
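To make the robust-estimation thread concrete, the sketch below shows a generic Huber-weighted iteratively reweighted least squares (IRLS) alignment of observed landmarks against a known map, the kind of estimator that down-weights outlier observations when few landmarks are visible. It is a minimal 2D illustration on synthetic data, not the method of any surveyed paper, and the function names (weighted_rigid_fit, robust_pose) and parameters (delta, iters) are hypothetical choices for this example.

```python
# Minimal sketch: Huber-weighted IRLS for robust 2D landmark alignment.
# Generic illustration only; not the estimator from any cited paper.
import numpy as np

def weighted_rigid_fit(p, q, w):
    """Closed-form weighted 2D rigid alignment (Kabsch):
    find R, t minimizing sum_i w_i * ||q_i - (R p_i + t)||^2."""
    w = w / w.sum()
    p_bar = (w[:, None] * p).sum(axis=0)
    q_bar = (w[:, None] * q).sum(axis=0)
    H = (w[:, None] * (p - p_bar)).T @ (q - q_bar)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

def robust_pose(p, q, delta=0.5, iters=10):
    """Estimate the pose (R, t) mapping robot-frame observations p onto
    known world-frame landmarks q, using Huber-weighted IRLS."""
    w = np.ones(len(p))
    for _ in range(iters):
        R, t = weighted_rigid_fit(p, q, w)
        r = np.linalg.norm(q - (p @ R.T + t), axis=1)        # residual per landmark
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))  # Huber weights
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.uniform(-5, 5, size=(12, 2))                 # known world landmarks
    theta, t_true = 0.4, np.array([1.0, -2.0])           # ground-truth pose
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    p = (q - t_true) @ R_true                            # robot-frame observations
    p += 0.02 * rng.standard_normal(p.shape)             # measurement noise
    p[0] += np.array([3.0, -2.0])                        # one gross outlier
    R, t = robust_pose(p, q)
    print("estimated translation:", t, "true:", t_true)
```

Because each IRLS iteration reduces the outlier's weight toward delta / r, the single corrupted observation has little effect on the final pose, which is the practical motivation for robust estimators under limited landmark visibility.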
Noteworthy papers include one that introduces a novel multi-user positioning system for AR, integrating monocular SLAM with deep learning for occlusion handling, and another that presents a LiDAR-visual SLAM system using 3D Gaussian splatting, demonstrating superior performance in outdoor environments.