Generative Modeling

Report on Current Developments in Generative Modeling

General Direction of the Field

The field of generative modeling is currently witnessing a surge of innovative approaches aimed at enhancing the performance, interpretability, and usability of models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). A key trend is the integration of interactive and visual tools to better understand and manipulate the latent spaces of these models, making them more accessible to practitioners and researchers alike. Additionally, there is a growing focus on addressing the limitations of existing models, such as the blurriness in VAE outputs and the codebook collapse issues in Vector Quantization (VQ) models.

One of the significant advancements is the exploration of alternative representations for the latent space, moving beyond the discrete codebook assumption of traditional vector quantization. This includes the introduction of sparse representations and dictionary-based approaches, which offer more expressive and meaningful latent space encodings. These methods not only improve the quality of generated images but also provide a more robust framework for handling high-dimensional and longitudinal data.
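
To make the dictionary-based idea concrete, the sketch below encodes a latent vector as a sparse combination of dictionary atoms using ISTA (iterative soft thresholding). This is a generic illustration of sparse latent encoding, not the algorithm from any of the cited papers; the dictionary `D`, the sparsity weight `lam`, and the iteration count are all illustrative choices.

```python
import numpy as np

def sparse_code(z, D, lam=0.1, n_iter=100):
    """Encode latent vector z as a sparse combination of the columns of D
    by minimizing 0.5*||z - D a||^2 + lam*||a||_1 with ISTA.
    A generic sketch of dictionary-based sparse latent encoding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - z)           # gradient of the quadratic term
        a = a - grad / L                   # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a
```

The soft-thresholding step is what drives most coefficients exactly to zero, giving a code that uses only a few dictionary atoms per latent vector.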

Quantum computing is also making inroads into generative modeling, with hybrid models that combine classical and quantum approaches to leverage the strengths of both. These hybrid models are showing promise in generating high-resolution images with improved quality and diversity, opening up new possibilities for the field.

Noteworthy Papers

  • Beta-Sigma VAE: This paper addresses the blurriness problem in VAEs by separating the decoder variance and β parameters, leading to superior performance in image synthesis.

  • LASERS: The introduction of sparse representations in the latent space offers a more expressive and robust alternative to traditional VQ approaches, addressing common issues like codebook collapse.

  • VAE-QWGAN: The integration of classical VAEs with quantum GANs demonstrates improved image generation quality and diversity, marking a significant step forward in hybrid quantum generative models.
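
The Beta-Sigma idea of decoupling the KL weight from the decoder variance can be sketched as a Gaussian-decoder VAE loss in which `beta` and `sigma` are independent hyperparameters. This is a minimal illustration of that decoupling, not the paper's implementation; the function name and default values are assumptions.

```python
import numpy as np

def beta_sigma_vae_loss(x, x_hat, mu, logvar, sigma=1.0, beta=1.0):
    """Per-sample loss for a Gaussian-decoder VAE where the decoder
    standard deviation sigma and the KL weight beta are separate knobs.
    A sketch of the decoupling idea, not the paper's exact code."""
    d = x.size
    # Gaussian negative log-likelihood with explicit decoder variance sigma**2
    recon = (np.sum((x - x_hat) ** 2) / (2.0 * sigma ** 2)
             + 0.5 * d * np.log(2.0 * np.pi * sigma ** 2))
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

Conflating the two knobs (a small sigma implicitly inflates the reconstruction term exactly like a small beta deflates the KL term) is one account of why vanilla VAEs produce blurry samples; separating them lets each be tuned on its own.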

Sources

VAE Explainer: Supplement Learning Variational Autoencoders with Interactive Visualization

Beta-Sigma VAE: Separating beta and decoder variance in Gaussian variational autoencoder

Towards Kinetic Manipulation of the Latent Space

LASERS: LAtent Space Encoding for Representations with Sparsity for Generative Modeling

VAE-QWGAN: Improving Quantum GANs for High Resolution Image Generation

Latent mixed-effect models for high-dimensional longitudinal data
