Recent work in 3D scene reconstruction and novel view synthesis has shifted toward more flexible and efficient representations. Researchers increasingly pursue methods that improve surface-reconstruction accuracy while also raising rendering speed and quality. Novel primitives such as smooth convexes and Gaussian surfels capture complex geometry and hard edges more effectively than standard 3D Gaussians. Another notable trend is coupling implicit surface representations with explicit primitives, which aligns the primitives more closely with scene surfaces and improves reconstruction quality. Work on optimizing neural signed distance functions is also advancing, leveraging numerical gradients for more stable and detailed surface reconstruction from point clouds. In parallel, differentiable inverse rendering with interpretable basis BRDFs is improving the joint recovery of geometry and material properties from images. Together, these developments point toward high-quality, efficient, and flexible 3D scene reconstruction becoming more accessible and accurate.
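To make the numerical-gradient idea concrete, here is a minimal sketch, not taken from any specific paper discussed above: a signed distance function's gradient (its surface normal, up to normalization) estimated by central finite differences. An analytic sphere SDF stands in for a neural SDF; with a learned SDF, the step size `eps` is typically annealed coarse-to-fine to stabilize optimization.

```python
import numpy as np

def sdf_sphere(p, radius=1.0):
    # Stand-in for a neural SDF: signed distance to a sphere at the origin.
    return np.linalg.norm(p, axis=-1) - radius

def numerical_gradient(sdf, p, eps=1e-4):
    # Central finite differences along each axis. Unlike analytic
    # (autograd) gradients, this spreads the gradient over a small
    # neighborhood, which tends to smooth and stabilize early training.
    grads = []
    for i in range(3):
        offset = np.zeros(3)
        offset[i] = eps
        grads.append((sdf(p + offset) - sdf(p - offset)) / (2.0 * eps))
    return np.stack(grads, axis=-1)

p = np.array([0.5, 0.5, 0.5])
g = numerical_gradient(sdf_sphere, p)
normal = g / np.linalg.norm(g)  # unit surface-normal estimate
```

For a true distance field the gradient has unit norm (the eikonal property), so `np.linalg.norm(g)` should be close to 1 here; that same property is commonly enforced as a regularizer when fitting neural SDFs to point clouds.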
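The basis-BRDF idea can likewise be sketched in a few lines. The example below is a hypothetical illustration, not the method of any cited work: per-pixel reflectance is modeled as a convex combination of a small set of interpretable basis lobes (here a Lambertian term and a normalized Blinn-Phong-style glossy term), with softmax-parameterized mixing weights that could be optimized by differentiable rendering.

```python
import numpy as np

def basis_brdfs(n_dot_h):
    # Two illustrative basis lobes evaluated at a half-vector cosine:
    # a Lambertian diffuse term and a normalized glossy lobe (exponent 32).
    diffuse = np.ones_like(n_dot_h) / np.pi
    glossy = (32 + 2) / (2 * np.pi) * np.clip(n_dot_h, 0.0, 1.0) ** 32
    return np.stack([diffuse, glossy], axis=-1)

def mixed_brdf(logits, n_dot_h):
    # Softmax keeps the mixing weights positive and summing to one,
    # which is what makes the per-basis contributions interpretable.
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return basis_brdfs(n_dot_h) @ w

# Equal logits -> equal 50/50 blend of the two basis lobes.
value = mixed_brdf(np.array([0.0, 0.0]), np.asarray(0.9))
```

Because each basis lobe has a fixed, physically meaningful form, inspecting the recovered weights per surface point gives a readable decomposition of the material, which is the interpretability benefit the text refers to.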