Dynamic and Distractor-Free Gaussian Splatting

Advances in Dynamic and Distractor-Free Gaussian Splatting

Recent developments in Gaussian Splatting have significantly advanced dynamic scene rendering and distractor-free 3D reconstruction. The focus has shifted toward more robust and efficient methods that handle complex motion and transient objects without compromising rendering speed or quality.

General Trends and Innovations:

  1. Dynamic Scene Rendering: There is a notable surge in techniques for rendering dynamic scenes with high temporal coherence. New methods model complex motions and deformations of 3D Gaussians over time, yielding smooth and realistic motion. This includes integrating state-space modeling and Wasserstein geometry to keep dynamic scenes consistent and smooth (see the first sketch after this list).

  2. Distractor-Free Reconstruction: The challenge of reconstructing static 3D scenes in the presence of transient objects or occluders is being addressed by approaches that separate distractors from static elements. These methods leverage volume rendering and alpha compositing to make the separation explicit, without relying on external semantic information (see the second sketch after this list).

  3. Hybrid and Multi-Stage Approaches: A trend towards hybrid models and multi-stage training strategies is evident. These approaches combine 2D and 3D Gaussian representations to handle static and transient elements more effectively (see the third sketch after this list). Hierarchical training and multi-source supervision are also being employed to improve the robustness and generalizability of 3D Gaussian Splatting models.

  4. Real-Time and Feed-Forward Models: Interest is growing in real-time, feed-forward models that reconstruct dynamic scenes from monocular videos. These models pursue scalability and generalization by training on both static and dynamic scene datasets, enabling high-quality novel view synthesis in real time.
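
To make the state-space idea in item 1 concrete, here is a minimal sketch: each Gaussian's center is treated as the state of a linear temporal system, and consecutive frames are pulled together by the mean term of the closed-form 2-Wasserstein distance between diagonal-covariance Gaussians. Everything here (the w2_sq_diag and smooth_centers helpers, the lam weight, the toy trajectory) is illustrative and not taken from any of the cited papers.

    import numpy as np

    def w2_sq_diag(mu1, var1, mu2, var2):
        # Closed-form squared 2-Wasserstein distance between Gaussians with
        # diagonal covariances: ||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2.
        return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

    def smooth_centers(obs, lam=1.0, lr=0.05, iters=300):
        # obs: (T, 3) observed per-frame centers of one Gaussian.
        # Minimize sum_t ||mu_t - obs_t||^2 + lam * sum_t ||mu_{t+1} - mu_t||^2
        # by gradient descent; the second term is the mean part of W2^2 between
        # consecutive frames when covariances are held fixed.
        mu = obs.copy()
        for _ in range(iters):
            grad = 2.0 * (mu - obs)           # data-fidelity term
            diff = mu[1:] - mu[:-1]           # consecutive-frame differences
            grad[:-1] -= 2.0 * lam * diff     # smoothness pull on the earlier frame
            grad[1:]  += 2.0 * lam * diff     # smoothness pull on the later frame
            mu -= lr * grad
        return mu

    # Toy trajectory: noisy straight-line motion of a single Gaussian center.
    T = 30
    obs = np.linspace(0.0, 1.0, T)[:, None] * np.array([1.0, 0.5, 0.0])
    obs += 0.05 * np.random.randn(T, 3)
    var = np.full(3, 0.01)                    # fixed diagonal covariance
    mu = smooth_centers(obs)
    raw = np.mean([w2_sq_diag(obs[t], var, obs[t + 1], var) for t in range(T - 1)])
    smo = np.mean([w2_sq_diag(mu[t], var, mu[t + 1], var) for t in range(T - 1)])
    print(f"mean consecutive W2^2: raw={raw:.4f}, smoothed={smo:.4f}")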
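
For item 2, a DeSplat-style separation can be illustrated with plain front-to-back alpha compositing: distractor Gaussians and static Gaussians are composited together when computing the photometric loss, and the distractor set is simply dropped at render time. The per-ray sample values and the composite helper below are illustrative, not any paper's actual implementation.

    import numpy as np

    def composite(samples):
        # Front-to-back alpha compositing: C = sum_i T_i * alpha_i * c_i,
        # with transmittance T_i = prod_{j<i} (1 - alpha_j).
        color, trans = np.zeros(3), 1.0
        for alpha, rgb in samples:
            color += trans * alpha * np.asarray(rgb, dtype=float)
            trans *= 1.0 - alpha
        return color

    # Depth-sorted samples along one ray: (alpha, rgb, from_distractor_set).
    ray = [
        (0.6, (1.0, 0.0, 0.0), True),   # transient object near the camera
        (0.4, (0.0, 1.0, 0.0), False),  # static scene content
        (0.9, (0.0, 0.0, 1.0), False),  # static background
    ]

    full = composite([(a, c) for a, c, _ in ray])            # supervises training
    clean = composite([(a, c) for a, c, d in ray if not d])  # distractor-free render
    print("full:  ", full)
    print("static:", clean)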
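
For item 3, the hybrid idea can be sketched as a per-view 2D Gaussian layer composited over the 3D static render: the transient layer absorbs view-specific content during training and is discarded afterwards. The splat_2d_gaussian helper and all constants are hypothetical stand-ins, assuming the 3D static render is already available as an image.

    import numpy as np

    def splat_2d_gaussian(h, w, center, sigma, rgb, opacity):
        # Rasterize one screen-space 2D Gaussian into color and alpha buffers.
        ys, xs = np.mgrid[0:h, 0:w]
        g = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2.0 * sigma ** 2))
        alpha = opacity * g                   # (h, w) per-pixel opacity
        return alpha[..., None] * np.asarray(rgb, dtype=float), alpha

    h, w = 64, 64
    static = np.full((h, w, 3), 0.5)          # stand-in for the 3D static render
    c2d, a2d = splat_2d_gaussian(h, w, (32, 32), 6.0, (1.0, 0.0, 0.0), 0.8)

    # The per-view transient layer is composited over the static render for the
    # photometric loss; dropping it at test time leaves the static-only image.
    train_view = c2d + (1.0 - a2d[..., None]) * static
    print(train_view.shape, train_view[32, 32])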

Noteworthy Papers:

  • DeSplat: Introduces a novel method for explicit scene separation of static elements and distractors, achieving comparable results to prior distractor-free approaches without sacrificing rendering speed.
  • DynSUP: Proposes a method that fits Gaussians in dynamic environments from just two unposed images, yielding high-fidelity novel view synthesis while preserving temporal consistency.
  • RelayGS: Focuses on reconstructing dynamic scenes with large-scale and complex motions, outperforming state-of-the-art techniques by more than 1 dB in PSNR.
  • BTimer: Presents the first motion-aware feed-forward model for real-time reconstruction and novel view synthesis of dynamic scenes, achieving state-of-the-art performance on both static and dynamic datasets.

These papers represent significant strides in the field, addressing key challenges and pushing the boundaries of what is possible with Gaussian Splatting techniques.

Sources

DeSplat: Decomposed Gaussian Splatting for Distractor-Free Rendering

Unleashing the Power of Data Synthesis in Visual Localization

T-3DGS: Removing Transient Objects for 3D Scene Reconstruction

Gaussians on their Way: Wasserstein-Constrained 4D Gaussian Splatting with State-Space Modeling

DynSUP: Dynamic Gaussian Splatting from An Unposed Image Pair

SfM-Free 3D Gaussian Splatting via Hierarchical Training

RelayGS: Reconstructing Dynamic Scenes with Large-Scale and Complex Motions via Relay Gaussians

RoDyGS: Robust Dynamic Gaussian Splatting for Casual Videos

Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos

HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting

DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction

Learnable Infinite Taylor Gaussian for Dynamic View Rendering

Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps
