The field of 3D reconstruction and novel view synthesis is advancing rapidly, with a focus on improving the accuracy and efficiency of scene reconstruction from sparse and unposed views. Recent work has introduced methods such as geometrically consistent ray diffusion, large reconstruction models, and Gaussian splatting, which have shown promising results in reconstructing complex scenes and generating high-quality novel views. Notably, the integration of diffusion models with multi-view optimization techniques has enabled more robust and generalizable methods.
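Because Gaussian splatting recurs throughout these papers, a minimal sketch of the core idea may be useful: a scene is represented as a set of 3D Gaussians (mean, covariance, opacity, color) that are projected into the image and alpha-composited front to back. The snippet below is an illustrative simplification under assumed conventions (pinhole camera, per-pixel loop, hypothetical `project` and `render_pixel` helpers), not the implementation of any of the methods named in this section.

```python
import numpy as np

# Each 3D Gaussian: mean (3,), covariance (3x3), opacity, RGB color.
# Rendering a pixel: project Gaussians with a pinhole camera, sort by
# depth, and alpha-composite their contributions front to back.

def project(mean, cov, K):
    """Project a 3D Gaussian to image space (local affine approximation)."""
    x, y, z = mean
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    # Jacobian of the perspective projection evaluated at the Gaussian mean
    J = np.array([[K[0, 0] / z, 0.0, -K[0, 0] * x / z**2],
                  [0.0, K[1, 1] / z, -K[1, 1] * y / z**2]])
    cov2d = J @ cov @ J.T  # 2x2 image-space covariance
    return np.array([u, v]), cov2d, z

def render_pixel(pixel, gaussians, K):
    """Front-to-back alpha compositing of projected Gaussians at one pixel."""
    splats = [project(g["mean"], g["cov"], K) + (g,) for g in gaussians]
    splats.sort(key=lambda s: s[2])  # nearest Gaussians composite first
    color, transmittance = np.zeros(3), 1.0
    for center, cov2d, depth, g in splats:
        d = pixel - center
        power = -0.5 * d @ np.linalg.inv(cov2d) @ d  # Gaussian falloff
        alpha = min(0.99, g["opacity"] * np.exp(power))
        color += transmittance * alpha * g["color"]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early termination once fully occluded
            break
    return color

# Toy usage: one red Gaussian 2 m in front of a 100x100 camera.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
gaussians = [{"mean": np.array([0.0, 0.0, 2.0]),
              "cov": 0.01 * np.eye(3),
              "opacity": 0.8,
              "color": np.array([1.0, 0.0, 0.0])}]
print(render_pixel(np.array([50.0, 50.0]), gaussians, K))
```

In practice these methods optimize the Gaussian parameters against posed or unposed input views and use tile-based GPU rasterization rather than a per-pixel loop; the sketch only conveys the representation and compositing step.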
Key trends in this area include uncertainty-aware models, such as GaussianLSS, which captures object extents and provides uncertainty estimates for depth perception, and scalable architectures, such as CityGS-X, which enable efficient processing of large-scale scenes and generation of high-quality novel views.
Particularly noteworthy papers include GCRayDiffusion, which proposes a geometrically consistent ray diffusion model for pose-free surface reconstruction, and FreeSplat++, which extends generalizable 3D Gaussian splatting to large-scale indoor whole-scene reconstruction. Other notable papers include EndoLRMGS, which combines large reconstruction models and Gaussian splatting for complete surgical scene reconstruction, and DiET-GS, which leverages event streams and diffusion priors for motion-deblurring 3D Gaussian splatting.