The field of video generation is advancing rapidly, with a growing focus on controllable and customizable synthesis. Recent work enables high-quality videos with precise control over appearance, motion, and camera movement, opening new possibilities for applications such as video editing, animation, and gaming. Innovative approaches have been proposed to address challenges such as concept interference, appearance contamination, and limited controllability, making video generation and editing more efficient and practical. Noteworthy papers include SketchVideo, which achieves sketch-based spatial and motion control for video generation and editing; JavisDiT, which introduces a joint audio-video diffusion transformer for synchronized audio-video generation; On-device Sora, which enables efficient, high-quality video generation on mobile devices; and ConMo, which proposes a zero-shot framework for controllable motion disentanglement and recomposition.