The field of Novel View Synthesis (NVS) is advancing rapidly, with a clear trend towards models that generalize and scale across diverse, large-scale environments. Recent work targets the limitations of current NVS methods, particularly in outdoor and dynamic scenes, and seeks to assess the quality of synthesized views without relying on dense reference views or extensive human-labeled datasets. Key directions include data augmentation that generates additional well-conditioned training views, self-supervised learning of quality representations, and studies of how humans perceive dynamically synthesized scenes. Together, these advances aim to make NVS practical for real-world applications such as autonomous driving, virtual and augmented reality, and detailed 3D modeling.
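As a rough illustration of what pose-based augmentation can look like, the sketch below interpolates between two existing training cameras to obtain additional, well-conditioned viewpoints that a reconstruction or NVS model could then render into extra training views. It is a minimal sketch under assumed conventions (rotation matrices plus translation vectors), not the pipeline of any specific method discussed here.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_camera_poses(R_a, t_a, R_b, t_b, num_views=5):
    """Hypothetical augmentation helper: generate intermediate camera poses
    between two training views by slerping rotations and lerping translations."""
    key_rotations = Rotation.from_matrix(np.stack([R_a, R_b]))
    slerp = Slerp([0.0, 1.0], key_rotations)
    # Interior interpolation weights only, excluding the two original cameras.
    alphas = np.linspace(0.0, 1.0, num_views + 2)[1:-1]
    rotations = slerp(alphas).as_matrix()                               # (num_views, 3, 3)
    translations = (1 - alphas)[:, None] * t_a + alphas[:, None] * t_b  # (num_views, 3)
    return rotations, translations

# Example: two cameras viewing the scene from slightly different angles.
R_a = Rotation.from_euler("y", 0, degrees=True).as_matrix()
R_b = Rotation.from_euler("y", 30, degrees=True).as_matrix()
t_a, t_b = np.array([0.0, 0.0, 4.0]), np.array([2.0, 0.0, 3.5])
rotations, translations = interpolate_camera_poses(R_a, t_a, R_b, t_b)
print(rotations.shape, translations.shape)  # (5, 3, 3) (5, 3)
```

In practice the new poses only become training data once images are rendered for them, for example from a reconstructed scene representation such as Gaussian splats.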
Noteworthy papers include:
- A study introducing Aug3D, an augmentation technique that significantly improves the training of feed-forward NVS models by generating well-conditioned novel views.
- NVS-SQA, a no-reference quality assessment method that outperforms existing approaches by combining heuristic cues with self-supervised learning of quality representations (see the sketch after this list).
- MapGS, a framework that uses Gaussian splatting for dataset augmentation, yielding significant performance improvements in online mapping tasks.
- A comprehensive evaluation of human perception in dynamic scenes, providing valuable insights into the subjective quality assessment of NVS technologies and highlighting the limitations of existing objective metrics.
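To make the self-supervised quality-representation idea concrete, here is a minimal, hypothetical sketch rather than the NVS-SQA method itself: a small encoder is trained with a standard InfoNCE contrastive objective so that two crops of the same synthesized view map to nearby embeddings, with no human quality labels involved. The architecture, crop strategy, and hyperparameters below are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityEncoder(nn.Module):
    """Toy encoder that maps a rendered view to a quality-representation vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=-1)

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss: matching pairs (crops of the same view) attract,
    while the other views in the batch act as negatives."""
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# One toy training step on random tensors standing in for two crops
# of the same synthesized view.
encoder = QualityEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
crop_a = torch.rand(8, 3, 64, 64)
crop_b = torch.rand(8, 3, 64, 64)
loss = info_nce(encoder(crop_a), encoder(crop_b))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.4f}")
```

Once such representations are learned, a lightweight regression head could map them to quality scores, which is the general route by which self-supervised features substitute for dense reference views or human ratings.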