Advances in Autonomous Driving Scene Representation

Research on autonomous driving scene representation is moving toward more accurate, photorealistic renderings of complex scenes. Recent work has focused on improving the fidelity of generated data, particularly for structured elements such as the ground surface. Researchers are exploring paradigms that combine reconstructive and generative models to achieve more accurate and efficient scene representations. Notably, homogeneous coordinates and 3D Gaussian splatting have shown promising results for rendering distant objects and unbounded outdoor environments. There is also growing interest in methods that handle diverse environmental conditions and dynamic elements, such as weather and lighting, in a physically accurate manner.

Noteworthy papers include:

- ReconDreamer++, which improves overall rendering quality by mitigating the domain gap between generated and real data and refining the representation of the ground surface.
- HoGS, which introduces a unified representation based on homogeneous coordinates for reconstructing both near and distant objects.
- EVolSplat, an efficient 3D Gaussian splatting model for urban scenes that operates in a feed-forward manner.
- StyledStreets, which enables photorealistic style transfer across seasons, weather conditions, and camera setups.
- RainyGS, which generates photorealistic, physically accurate dynamic rain effects in open-world scenes.
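To give a sense of why homogeneous coordinates help with distant content, the sketch below shows the standard homogeneous-to-Cartesian conversion that approaches like HoGS build on: a bounded 4-vector (x, y, z, w) covers both nearby points and points arbitrarily far along a ray simply by shrinking w, avoiding the numerical blow-up of storing huge Cartesian positions directly. This is a minimal illustration of the general idea, not the paper's actual formulation.

```python
import numpy as np

def homogeneous_to_cartesian(p, eps=1e-12):
    """Convert a homogeneous point (x, y, z, w) to Cartesian (x/w, y/w, z/w).

    As w -> 0 the point recedes toward infinity along the direction
    (x, y, z), so a single bounded parameterization can represent both
    near and extremely distant scene content smoothly.
    """
    p = np.asarray(p, dtype=float)
    w = p[3]
    if abs(w) < eps:
        raise ValueError("point at infinity: only a direction is defined")
    return p[:3] / w

# Two points along the same ray: identical (x, y, z), differing only in w.
near = homogeneous_to_cartesian([1.0, 2.0, 3.0, 1.0])   # a nearby point
far = homogeneous_to_cartesian([1.0, 2.0, 3.0, 1e-4])   # very far along the ray
```

Because only w changes between the near and far points, an optimizer can move content toward the horizon with small, well-conditioned parameter updates rather than unbounded coordinate growth.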

Sources

ReconDreamer++: Harmonizing Generative and Reconstructive Models for Driving Scene Representation

HoGS: Unified Near and Far Object Reconstruction via Homogeneous Gaussian Splatting

EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis

StyledStreets: Multi-style Street Simulator with Spatial and Temporal Consistency

RainyGS: Efficient Rain Synthesis with Physically-Based Gaussian Splatting
