Recent advances in video generation and editing have substantially improved the quality, consistency, and controllability of generated content. Key developments include integrating point tracking into video diffusion models to reduce appearance drift and improve temporal coherence, and semi-supervised learning frameworks that combine real and synthetic data to strengthen multi-view stereo performance. Other work enables precise camera control in video diffusion models without fine-tuning and simulates world inconsistencies to make view synthesis more robust. Style control in video generation has also improved, with methods that attend to both global style and local textures to produce high-quality stylized videos. In addition, unifying image generation and editing under a single framework that leverages video dynamics has opened new possibilities for universal image manipulation. Notably, the field has also seen no-reference quality assessment methods tailored to scenes produced by neural view synthesis and NeRF variants, addressing the limitations of traditional quality metrics for such content. Together, these innovations push the boundaries of what is possible in video and image generation, offering finer control, higher quality, and greater consistency in generated content.