Advances in Diffusion Models

The field of diffusion models is advancing toward more efficient and accurate image and video generation. Researchers are refining classifier-free guidance (CFG) with techniques such as tangential damping and optimized scaling, learning flexible interpolants to enable faster generation, and improving the unconditional priors used in conditional generation, which can significantly affect the quality of the generated outputs. Together, these advances have the potential to push the state of the art in image and video synthesis. Noteworthy papers include:

- Guidance Free Image Editing via Explicit Conditioning, which introduces a novel conditioning technique to reduce computational costs.
- TCFG: Tangential Damping Classifier-free Guidance, which takes a geometric perspective on the unconditional score to enhance CFG performance.
- CFG-Zero*, which improves CFG with an optimized guidance scale and zero initialization.
- Unconditional Priors Matter!, which shows that better unconditional noise predictions improve conditional generation in fine-tuned models.
- Learning Straight Flows by Learning Curved Interpolants, which learns flexible interpolants to enable faster generation.
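To make the CFG variants discussed above concrete, here is a minimal NumPy sketch of a guidance step. The `cfg_step` function is the standard classifier-free guidance rule; `cfg_zero_star_step` sketches the CFG-Zero*-style ideas of an optimized scale and zero initialization, under the assumption that the scale is a least-squares projection coefficient and that guidance output is zeroed for the first few steps (function names and the `zero_init_steps` parameter are illustrative, not from any paper's code).

```python
import numpy as np

def cfg_step(eps_uncond, eps_cond, w):
    """Standard classifier-free guidance: extrapolate from the
    unconditional prediction toward the conditional one by scale w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def cfg_zero_star_step(eps_uncond, eps_cond, w, step, zero_init_steps=1):
    """Hedged sketch of CFG-Zero*-style guidance:
    (1) zero-init: return a zero prediction on the earliest steps;
    (2) optimized scale: rescale the unconditional branch by the
        least-squares projection of eps_cond onto eps_uncond."""
    if step < zero_init_steps:
        return np.zeros_like(eps_cond)
    u = eps_uncond.ravel()
    c = eps_cond.ravel()
    s = np.dot(c, u) / (np.dot(u, u) + 1e-8)  # projection coefficient
    return s * eps_uncond + w * (eps_cond - s * eps_uncond)
```

Note that with `w = 1` both rules collapse to the conditional prediction; the optimized scale only changes behavior when the guidance weight differs from 1.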

Sources

Bezier Distillation

Guidance Free Image Editing via Explicit Conditioning

TCFG: Tangential Damping Classifier-free Guidance

CFG-Zero*: Improved Classifier-Free Guidance for Flow Matching Models

Unconditional Priors Matter! Improving Conditional Generation of Fine-Tuned Diffusion Models

Learning Straight Flows by Learning Curved Interpolants
