The field of 3D scene reconstruction and rendering is advancing rapidly, with current work aimed at improving both the accuracy and the efficiency of reconstructing and rendering complex scenes. Recent research builds largely on Gaussian Splatting and Neural Radiance Fields, alongside new benchmarks and datasets for evaluating these methods. One key direction is handling complex and dynamic scenes, such as those with moving objects or changing lighting conditions. Another is reconstruction from sparse or noisy input, such as Time-of-Flight sensor data or casually captured video. Notable papers in this area include BEV-GS, which proposes a real-time single-frame road surface reconstruction method, and MoBGS, a deblurring dynamic 3D Gaussian Splatting framework that reconstructs sharp, high-quality novel spatio-temporal views from blurry monocular videos. In addition, SLAM&Render and ToF-Splatting introduce new benchmarks and methods for evaluating and improving SLAM and scene reconstruction algorithms.
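Because Gaussian Splatting recurs across several of the papers above, a minimal sketch of its core rendering step may help orient readers: splats are sorted front to back and alpha-composited per pixel. This is a generic illustration under simplifying assumptions (opacities are treated as already weighted by the 2D Gaussian falloff at the pixel, and the function name is hypothetical), not the implementation of any cited work.

```python
import numpy as np

def composite_gaussians(colors, alphas, depths):
    """Front-to-back alpha compositing of projected Gaussians at one pixel.

    colors: (N, 3) per-splat RGB contributions at this pixel
    alphas: (N,)  per-splat opacities, assumed pre-weighted by the 2D
            Gaussian falloff at this pixel (illustrative simplification)
    depths: (N,)  camera-space depths used to sort splats front to back
    """
    order = np.argsort(depths)              # nearest splat contributes first
    pixel = np.zeros(3)
    transmittance = 1.0                     # light not yet absorbed
    for i in order:
        weight = alphas[i] * transmittance  # this splat's contribution
        pixel += weight * colors[i]
        transmittance *= 1.0 - alphas[i]    # standard alpha-blending update
        if transmittance < 1e-4:            # early termination once opaque
            break
    return pixel

# Toy usage: three overlapping splats at a single pixel.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
alphas = np.array([0.6, 0.5, 0.9])
depths = np.array([2.0, 1.0, 3.0])
print(composite_gaussians(colors, alphas, depths))
```

The loop computes the familiar compositing sum C = Σ c_i α_i Π_{j<i}(1 − α_j); real splatting renderers perform it in parallel over tiles on the GPU, but the per-pixel logic is the same.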