The field of diffusion models is advancing rapidly, particularly in acceleration, alignment, and efficiency. A notable trend is the development of techniques that speed up generation without compromising output quality, including speculative sampling (sketched below) and training-free solutions for aligning models with downstream objectives, which improve performance while preserving the models' general versatility. There is also growing attention to training efficiency, with new methods that link discrete-time policies to continuous-time diffusion samplers, yielding faster training and lower computational cost. Another key line of work is pruning for sparse diffusion models, where iterative, gradient-flow-based methods preserve generation quality while substantially reducing inference time and computational expense. Together, these developments push the boundaries of what diffusion models can achieve, making them more accessible and practical for a wider range of applications.
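To make the draft-then-verify idea concrete, here is a minimal sketch of speculative sampling adapted to diffusion. Everything in it is an illustrative assumption rather than the paper's method: toy MLPs stand in for the expensive target and cheap draft denoisers, the update rule is a deterministic DDIM-style step, and acceptance is a simple L2-distance test between draft and target noise predictions. The pattern it shows is that one batched target evaluation can verify several cheap draft steps.

```python
import torch

torch.manual_seed(0)
dim = 8

# Hypothetical stand-ins: a large "target" denoiser and a small "draft" denoiser.
target_model = torch.nn.Sequential(
    torch.nn.Linear(dim + 1, 64), torch.nn.SiLU(), torch.nn.Linear(64, dim))
draft_model = torch.nn.Sequential(
    torch.nn.Linear(dim + 1, 16), torch.nn.SiLU(), torch.nn.Linear(16, dim))

def eps(model, x, t):
    """Noise prediction at state x and integer time t (fed as an extra feature)."""
    t_feat = torch.full((x.shape[0], 1), float(t))
    return model(torch.cat([x, t_feat], dim=-1))

def ddim_step(x, e, t, t_next, alphas):
    """Deterministic DDIM-style update from time t to t_next given noise estimate e."""
    a_t, a_n = alphas[t], alphas[t_next]
    x0 = (x - (1 - a_t).sqrt() * e) / a_t.sqrt()
    return a_n.sqrt() * x0 + (1 - a_n).sqrt() * e

@torch.no_grad()
def speculative_sample(x, timesteps, alphas, k=4, tol=1.0):
    """Draft k steps with the cheap model, verify them in one batched target
    pass, keep accepted steps, and redo the first rejected step with the
    target prediction. The L2 acceptance test is an illustrative choice."""
    i, target_evals = 0, 0
    while i < len(timesteps) - 1:
        # 1) Propose up to k denoising steps with the draft model.
        steps = min(k, len(timesteps) - 1 - i)
        xs, e_drafts = [x], []
        for j in range(steps):
            e_d = eps(draft_model, xs[-1], timesteps[i + j])
            e_drafts.append(e_d)
            xs.append(ddim_step(xs[-1], e_d, timesteps[i + j], timesteps[i + j + 1], alphas))
        # 2) Verify all proposals with a single batched target evaluation.
        xb = torch.cat(xs[:steps], dim=0)
        tb = torch.cat([torch.full((x.shape[0], 1), float(timesteps[i + j])) for j in range(steps)])
        e_targets = target_model(torch.cat([xb, tb], dim=-1)).chunk(steps)
        target_evals += 1
        # 3) Accept drafts until the first disagreement; fix that step with the target.
        accepted = 0
        for j in range(steps):
            if (e_targets[j] - e_drafts[j]).norm(dim=-1).mean() < tol:
                accepted += 1
            else:
                xs[j + 1] = ddim_step(xs[j], e_targets[j], timesteps[i + j], timesteps[i + j + 1], alphas)
                accepted += 1
                break
        x = xs[accepted]
        i += accepted
    return x, target_evals

alphas = torch.linspace(0.05, 0.999, 50).flip(0)  # toy schedule: alpha near 1 at t=0
timesteps = list(range(49, -1, -2))               # sample from t=49 down to t=1
sample, evals = speculative_sample(torch.randn(4, dim), timesteps, alphas)
print(sample.shape, "batched target evaluations:", evals)
```

As in speculative decoding for language models, the worst case degrades gracefully to ordinary target-model sampling, so the acceptance test governs the speed-quality trade-off.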
Noteworthy Papers
- Accelerated Diffusion Models via Speculative Sampling: Introduces a novel approach to speculative sampling for diffusion models, significantly reducing the number of function evaluations required for generation.
- Alignment without Over-optimization: Training-Free Solution for Diffusion Models: Presents a training-free sampling method that aligns diffusion models with target rewards while preserving their general capabilities (see the reward-guidance sketch after this list).
- From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training: Establishes asymptotic equivalences that link discrete-time policies to continuous-time diffusion samplers, and exploits this link for faster training.
- Pruning for Sparse Diffusion Models based on Gradient Flow: Offers an iterative, gradient-flow-based pruning method that preserves generation quality while reducing inference cost (see the pruning sketch after this list).
- Reward-Guided Controlled Generation for Inference-Time Alignment in Diffusion Models: Tutorial and Review: Provides a comprehensive survey of inference-time guidance and alignment methods, including recent algorithms and their applications (see the reward-guidance sketch below).
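For the inference-time alignment theme in the second and last entries, below is a minimal sketch of one family of methods such tutorials cover: reward-gradient guidance, where the gradient of a differentiable reward on the predicted clean sample nudges each denoising step. The toy denoiser, the quadratic reward, and the guidance scale are illustrative assumptions, not details taken from the papers.

```python
import torch

torch.manual_seed(0)
dim = 8

# Hypothetical pretrained denoiser; a toy MLP so the sketch runs end to end.
denoiser = torch.nn.Sequential(
    torch.nn.Linear(dim + 1, 32), torch.nn.SiLU(), torch.nn.Linear(32, dim))

def reward(x0):
    """Illustrative differentiable reward: prefer samples near the all-ones vector."""
    return -((x0 - 1.0) ** 2).sum(dim=-1)

def guided_step(x, t, t_next, alphas, scale=0.1):
    """One DDIM-style step nudged by the reward gradient on the x0 estimate."""
    x = x.detach().requires_grad_(True)
    t_feat = torch.full((x.shape[0], 1), float(t))
    e = denoiser(torch.cat([x, t_feat], dim=-1))
    a_t, a_n = alphas[t], alphas[t_next]
    x0_hat = (x - (1 - a_t).sqrt() * e) / a_t.sqrt()     # predicted clean sample
    g = torch.autograd.grad(reward(x0_hat).sum(), x)[0]  # direction of higher reward
    x_next = a_n.sqrt() * x0_hat + (1 - a_n).sqrt() * e + scale * g
    return x_next.detach()

alphas = torch.linspace(0.05, 0.999, 50).flip(0)  # toy schedule: alpha near 1 at t=0
x = torch.randn(4, dim)
for t, t_next in zip(range(49, 1, -2), range(47, -1, -2)):
    x = guided_step(x, t, t_next, alphas)
print("final reward:", reward(x).mean().item())
```

Because the reward is applied only at sampling time, the base model's weights are untouched, which is what lets training-free approaches avoid the over-optimization that reward fine-tuning can cause.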
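And for the pruning entry, here is a minimal sketch of iterative pruning driven by a first-order saliency score |w · ∂L/∂w|. This Taylor-style proxy, the toy denoising loss, and the per-round pruning fraction are assumptions for illustration, not the gradient-flow criterion from the paper; the pattern shown is alternating between measuring weight importance under the training loss and zeroing the least important weights.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.SiLU(), torch.nn.Linear(32, 8))

def denoising_loss(model, x0):
    """Toy epsilon-prediction loss on a synthetic batch."""
    noise = torch.randn_like(x0)
    a = 0.7
    xt = a ** 0.5 * x0 + (1 - a) ** 0.5 * noise
    return ((model(xt) - noise) ** 2).mean()

def prune_round(model, batch, frac=0.1):
    """One round: zero the frac of each weight matrix with the smallest
    |w * dL/dw| saliency (a first-order Taylor proxy, assumed here)."""
    loss = denoising_loss(model, batch)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() < 2:  # skip biases
                continue
            saliency = (p * p.grad).abs()
            saliency[p == 0] = float("inf")  # already-pruned weights stay pruned
            k = int(frac * p.numel())
            thresh = saliency.flatten().kthvalue(k).values
            p[saliency <= thresh] = 0.0
        model.zero_grad()

batch = torch.randn(64, 8)
for _ in range(5):  # iterative pruning with loss feedback each round
    prune_round(model, batch)
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.1%}")
```

In a real pipeline each round would be interleaved with some recovery fine-tuning so that generation quality is maintained as sparsity grows.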