Efficient and Robust Techniques in Novel View Synthesis and 3D Reconstruction

Current Trends in Novel View Synthesis and 3D Reconstruction

Recent work in novel view synthesis (NVS) and 3D reconstruction shows a marked shift toward more efficient and robust methods, particularly those built on Gaussian Splatting and Neural Radiance Fields (NeRF). The field is increasingly focused on handling sparse input data and on generalizing across diverse scenes and capture conditions. Self-ensembling, multi-stage training, and structure-preserving regularization are improving the quality and consistency of synthesized views even with limited training data. There is also growing attention to practical applications, such as wearable systems for human mesh reconstruction and robotics, where methods must cope with real-world complexity and variability.

Noteworthy developments include:

  • Self-Ensembling Gaussian Splatting: Introduces a novel approach to mitigate overfitting in sparse view scenarios, significantly improving NVS quality.
  • Argus: Pioneers a compact, wearable system for multi-view egocentric human mesh reconstruction, demonstrating robustness and practicality.
  • MVSplat360: Combines 3D Gaussian Splatting with video diffusion models for high-quality, 360-degree NVS from sparse views.
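The self-ensembling idea mentioned above can be illustrated with a generic pattern: keep an exponential-moving-average (EMA) copy of the model's parameters and penalize disagreement between the live model and the ensemble on the same view. This is a minimal conceptual sketch of that pattern, not the exact method of any cited paper; the function names and the toy scalar "model" are illustrative assumptions.

```python
import numpy as np

def ema_update(ensemble, params, decay=0.99):
    """EMA update used in self-ensembling: the ensemble tracks a
    slowly varying average of the live parameters (illustrative)."""
    return decay * ensemble + (1.0 - decay) * params

def consistency_loss(pred_live, pred_ema):
    """Penalize disagreement between the live model's rendering and
    the ensemble's rendering of the same view (here, plain MSE)."""
    return float(np.mean((pred_live - pred_ema) ** 2))

# Toy loop: parameters drift toward a target under noisy gradients;
# the EMA ensemble smooths out the noise, as in self-ensembling.
rng = np.random.default_rng(0)
params = np.zeros(4)
ensemble = params.copy()
target = np.ones(4)
for step in range(200):
    grad = 2.0 * (params - target) + 0.1 * rng.standard_normal(4)
    params -= 0.05 * grad
    ensemble = ema_update(ensemble, params)

print(np.round(ensemble, 2))
```

In the sparse-view setting, the consistency term is typically evaluated on renderings of unseen (pseudo) views, so the ensemble acts as a stable teacher that discourages the live model from overfitting the few training views.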

Sources

Self-Ensembling Gaussian Splatting for Few-shot Novel View Synthesis

Argus: Multi-View Egocentric Human Mesh Reconstruction Based on Stripped-Down Wearable mmWave Add-on

FewViewGS: Gaussian Splatting with Few View Matching and Multi-stage Training

NeRF-Aug: Data Augmentation for Robotics with Neural Radiance Fields

CAD-NeRF: Learning NeRFs from Uncalibrated Few-view Images by CAD Model Retrieval

Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis

GANESH: Generalizable NeRF for Lensless Imaging

MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views
