Enhanced Visual-Inertial Navigation and Mapping Systems

Advances in Visual-Inertial Navigation and Mapping

Recent developments in visual-inertial navigation and mapping have significantly advanced the capabilities of autonomous systems, particularly in challenging environments where traditional methods fall short. The integration of advanced segmentation techniques, multi-modal sensor fusion, and novel computational methods has yielded more robust and accurate solutions. Key innovations include improved motion segmentation for structure-from-motion, the incorporation of multiple motion models in SLAM systems, and the use of neural radiance fields for more adaptable SLAM in dynamic outdoor settings.
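At the core of every visual-inertial system mentioned here is the fusion of high-rate inertial measurements with lower-rate camera observations; the inertial side reduces, at minimum, to propagating an orientation quaternion from gyroscope readings between camera frames. The sketch below is illustrative only (function names and conventions are assumptions, not drawn from any of the cited papers), using the standard axis-angle exponential for a constant rate over one timestep:

```python
import math

def quat_mul(q, r):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """Propagate orientation q by gyro rate omega (rad/s) over dt seconds."""
    wx, wy, wz = omega
    mag = math.sqrt(wx*wx + wy*wy + wz*wz)
    if mag < 1e-12:
        return q  # negligible rotation; avoid division by zero
    theta = mag * dt  # total rotation angle over the interval
    s = math.sin(theta / 2.0) / mag
    dq = (math.cos(theta / 2.0), wx * s, wy * s, wz * s)
    return quat_mul(q, dq)
```

A quaternion-based filter such as the UKF in the sources list wraps this propagation step with covariance prediction and a camera-measurement update; the kinematics themselves are the same.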

Noteworthy Papers:

  • RoMo: Robust Motion Segmentation Improves Structure from Motion: Introduces a novel iterative method for motion segmentation that significantly enhances camera calibration in dynamic scenes.
  • Visual SLAMMOT Considering Multiple Motion Models: Proposes a unified SLAMMOT methodology that considers multiple motion models, bridging the gap between LiDAR and vision-based sensing.
  • GMS-VINS: Multi-category Dynamic Objects Semantic Segmentation for Enhanced Visual-Inertial Odometry: Integrates an enhanced SORT algorithm with a robust multi-category segmentation framework to improve VIO accuracy in diverse dynamic environments.
  • NeRF and Gaussian Splatting SLAM in the Wild: Evaluates deep learning-based SLAM methods in natural outdoor environments, highlighting their superior robustness under challenging conditions.
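A common thread across the dynamic-scene methods above (RoMo, GMS-VINS) is gating feature measurements with a per-pixel dynamic-object mask before pose estimation, so that points on moving objects do not corrupt the static-scene geometry. A minimal sketch of that gating step, with hypothetical names and data layout (not the API of any cited system):

```python
def filter_dynamic_features(features, dynamic_mask):
    """Drop feature points that fall on pixels segmented as dynamic.

    features: list of (u, v) pixel coordinates from the feature tracker
    dynamic_mask: 2D list of bools, True where segmentation flagged
        a moving object (e.g., a pedestrian or vehicle)
    Returns the subset of features safe to feed to pose estimation.
    """
    static = []
    for (u, v) in features:
        row, col = int(v), int(u)  # mask is indexed [row][col]
        if 0 <= row < len(dynamic_mask) and 0 <= col < len(dynamic_mask[0]):
            if not dynamic_mask[row][col]:
                static.append((u, v))
    return static
```

In a full pipeline the mask would come from a segmentation network (optionally associated across frames by a tracker such as SORT, as in GMS-VINS), and only the surviving features enter the VIO optimization.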

Sources

RoMo: Robust Motion Segmentation Improves Structure from Motion

Visual SLAMMOT Considering Multiple Motion Models

GMS-VINS: Multi-category Dynamic Objects Semantic Segmentation for Enhanced Visual-Inertial Odometry Using a Promptable Foundation Model

A Visual-inertial Localization Algorithm using Opportunistic Visual Beacons and Dead-Reckoning for GNSS-Denied Large-scale Applications

Real-Time Metric-Semantic Mapping for Autonomous Navigation in Outdoor Environments

SF-Loc: A Visual Mapping and Geo-Localization System based on Sparse Visual Structure Frames

ROVER: A Multi-Season Dataset for Visual SLAM

Quaternion-based Unscented Kalman Filter for 6-DoF Vision-based Inertial Navigation in GPS-denied Regions

Adaptive LiDAR Odometry and Mapping for Autonomous Agricultural Mobile Robots in Unmanned Farms

NeRF and Gaussian Splatting SLAM in the Wild

Large-Scale Dense 3D Mapping Using Submaps Derived From Orthogonal Imaging Sonars

Multi-cam Multi-map Visual Inertial Localization: System, Validation and Dataset
