Advances in 3D Reconstruction and Rendering

The field of 3D reconstruction and rendering is advancing rapidly, with particular focus on inverse rendering, scene reconstruction, and pose estimation. Recent work applies neural networks and deep learning to improve both the accuracy and the efficiency of these methods. One notable trend is the development of learned importance sampling techniques, such as those based on normalizing flows, which reduce the variance of Monte Carlo estimators. Researchers are also investigating multimodal data and heterogeneous sensor datasets to improve the robustness and generalization of reconstruction and rendering algorithms. Noteworthy papers include TensoFlow, which proposes a generic approach to sampler learning in inverse rendering; NeRFPrior, which adopts a neural radiance field as a prior for indoor scene reconstruction; AIM2PC, which reconstructs 3D building point clouds from aerial images; and GaussianUDF, which infers unsigned distance functions through 3D Gaussian splatting.
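The learned-sampler trend builds on classic importance sampling: drawing samples from a proposal distribution shaped like the integrand lowers the variance of a Monte Carlo estimate. A minimal sketch with a hand-picked proposal on a toy integrand (the function names and the integrand are illustrative, not TensoFlow's actual learned sampler):

```python
import math
import random

random.seed(0)

def f(x):
    # Toy integrand; its exact integral over [0, 1] is 0.25.
    return x ** 3

def uniform_estimate(n):
    # Plain Monte Carlo: x ~ U(0, 1), estimator is the mean of f(x).
    return sum(f(random.random()) for _ in range(n)) / n

def importance_estimate(n):
    # Proposal q(x) = 2x on [0, 1], sampled via inverse CDF x = sqrt(u).
    # The estimator averages f(x) / q(x); because q roughly follows the
    # shape of f, the weights vary less and the estimator has lower
    # variance than the uniform version for the same sample budget.
    total = 0.0
    for _ in range(n):
        x = math.sqrt(random.random())
        total += f(x) / (2.0 * x)
    return total / n

print(uniform_estimate(10_000))     # both print values near 0.25,
print(importance_estimate(10_000))  # the importance estimate is tighter
```

A learned sampler such as a normalizing flow replaces the fixed proposal q(x) with a trainable density, fitting it to the integrand so the same variance reduction applies to high-dimensional rendering integrals.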
Sources
BADGR: Bundle Adjustment Diffusion Conditioned by GRadients for Wide-Baseline Floor Plan Reconstruction
MultimodalStudio: A Heterogeneous Sensor Dataset and Framework for Neural Rendering across Multiple Imaging Modalities