Advances in Dataset Distillation and Generative Modeling

Recent work in dataset distillation and generative modeling shows notable progress, particularly in computational efficiency and output quality. The field is moving toward generative foundation models and diffusion techniques to achieve stronger compression, higher-quality distilled data, and greater diversity in data representation. Innovations such as nested diffusion models and tiered GAN approaches are extending what can be achieved with fewer computational resources. In parallel, the integration of explicit memory into generative models is addressing the computational demands of large neural networks, leading to more efficient training and sampling. These developments improve the robustness and efficiency of existing methods and open new avenues for creative applications such as architectural floorplan generation and artistic image synthesis. Notably, scalable training data influence estimation for diffusion models and the optimization of stable diffusion frameworks are important steps toward making these technologies more practical and accessible.
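
To make the dataset distillation theme concrete, the sketch below illustrates the general distribution-matching idea referenced by the condensation work above: a small set of learnable synthetic images is optimized so that its class-conditional feature statistics match those of real data under a fixed encoder. This is a minimal, generic illustration, not the method of any cited paper; all names, sizes, and the random encoder are assumptions made for the example.

```python
import torch
import torch.nn as nn

def distribution_matching_step(real_x, real_y, syn_x, syn_y, encoder, opt, num_classes):
    """One optimization step on the learnable synthetic set syn_x (illustrative sketch)."""
    opt.zero_grad()
    real_feat = encoder(real_x).detach()      # real features are targets, not optimized
    syn_feat = encoder(syn_x)                 # gradients flow back into syn_x
    loss = syn_feat.sum() * 0.0               # keeps the graph connected even if a class is empty
    for c in range(num_classes):
        r = real_feat[real_y == c]
        s = syn_feat[syn_y == c]
        if len(r) and len(s):
            # match class-conditional feature means (linear-kernel MMD surrogate)
            loss = loss + ((r.mean(0) - s.mean(0)) ** 2).sum()
    loss.backward()
    opt.step()
    return loss.item()

# Minimal usage with random data and a tiny randomly initialized encoder (hypothetical setup).
torch.manual_seed(0)
num_classes, ipc = 10, 1                      # one synthetic image per class
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
syn_x = torch.randn(num_classes * ipc, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(num_classes).repeat_interleave(ipc)
opt = torch.optim.SGD([syn_x], lr=1.0)

real_x = torch.randn(256, 3, 32, 32)
real_y = torch.randint(0, num_classes, (256,))
print(distribution_matching_step(real_x, real_y, syn_x, syn_y, encoder, opt, num_classes))
```

In practice such methods repeat this step over many encoder initializations and data batches; the point here is only the core loop of optimizing synthetic samples against matched feature statistics.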

Sources

Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation

Decomposed Distribution Matching in Dataset Condensation

Applying Automatic Differentiation to Optimize Differential Microphone Array Designs

Birth and Death of a Rose

Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising

A Tiered GAN Approach for Monet-Style Image Generation

Open-Source Acceleration of Stable-Diffusion.cpp

Nested Diffusion Models Using Hierarchical Latent Priors

Generating floorplans for various building functionalities via latent diffusion model

Diffusing Differentiable Representations

DMin: Scalable Training Data Influence Estimation for Diffusion Models

Generative Modeling with Explicit Memory
