Research on diffusion models is increasingly focused on improving the efficiency and fidelity of image and video generation. One active thread refines classifier-free guidance (CFG), for example through tangential damping of the unconditional score or through optimized guidance scales. Another learns flexible interpolants to enable faster generation, and a third improves the unconditional priors used in conditional generation, which strongly affect the quality of the generated outputs. Together, these advances have the potential to push the state of the art in image and video synthesis.

Noteworthy papers include:
- Guidance Free Image Editing via Explicit Conditioning: introduces a conditioning technique that avoids guidance, reducing computational cost.
- TCFG: Tangential Damping Classifier-free Guidance: takes a geometric perspective on the unconditional score to improve CFG performance.
- CFG-Zero*: improves CFG with an optimized guidance scale and zero-init.
- Unconditional Priors Matter!: shows that better unconditional noise predictions improve conditional generation.
- Learning Straight Flows by Learning Curved Interpolants: learns flexible interpolants to enable faster generation.
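The CFG variants surveyed above all build on the same baseline combination of conditional and unconditional noise predictions, eps = eps_uncond + w * (eps_cond - eps_uncond). A minimal sketch of that baseline follows; the function name and the toy arrays standing in for model outputs are illustrative, not taken from any of the listed papers:

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, scale):
    """Standard CFG: extrapolate from the unconditional toward the
    conditional noise prediction by the guidance scale."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy "noise predictions" in place of a denoiser's conditional and
# unconditional outputs at one sampling step.
rng = np.random.default_rng(0)
eps_uncond = rng.standard_normal((4, 4))
eps_cond = rng.standard_normal((4, 4))

guided = classifier_free_guidance(eps_uncond, eps_cond, scale=3.0)

# scale = 1.0 recovers the purely conditional prediction,
# scale = 0.0 the purely unconditional one.
assert np.allclose(classifier_free_guidance(eps_uncond, eps_cond, 1.0), eps_cond)
assert np.allclose(classifier_free_guidance(eps_uncond, eps_cond, 0.0), eps_uncond)
```

The listed papers modify different parts of this formula: TCFG adjusts the unconditional score term geometrically, CFG-Zero* replaces the fixed scale with an optimized one, and Unconditional Priors Matter! targets the quality of eps_uncond itself.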