Advances in Neural Network Approximation and Generative Models

The field of neural networks is advancing rapidly, with much recent work aimed at sharpening the approximation capabilities of different architectures. Recent theoretical results indicate that, from an approximation perspective, transformers can overcome the curse of dimensionality, and new techniques continue to extend the expressive power of these models. For instance, normalizing flows and autoregressive models have shown promise for Bayesian optimization and generative modeling, while Kolmogorov-Arnold Networks (KANs) have gained approximation and learning guarantees for functions and their derivatives. Furthermore, conformalized methods now provide uncertainty quantification with coverage guarantees as well as statistical guarantees in synthetic data generation. Noteworthy papers include: Transformers Can Overcome the Curse of Dimensionality, which investigates how transformers approximate Hölder continuous functions; Latent Bayesian Optimization via Autoregressive Normalizing Flows, which proposes a normalizing-flow-based approach to Bayesian optimization; and Statistical Guarantees in Synthetic Data through Conformal Adversarial Generation, which presents a framework for generating synthetic data with provable statistical guarantees.
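To make the role of conformalized methods concrete, the sketch below shows split conformal prediction, the basic recipe behind coverage guarantees of this kind. It is a minimal, generic illustration: the regressor, data, and function names are assumptions for the example, not code from any of the cited papers.

```python
import numpy as np

def split_conformal_intervals(model, X_cal, y_cal, X_test, alpha=0.1):
    """Prediction intervals with roughly (1 - alpha) marginal coverage.

    `model` is any fitted regressor exposing .predict(); the calibration
    split (X_cal, y_cal) must be held out from training.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat

# Hypothetical usage with a simple regressor:
# from sklearn.linear_model import LinearRegression
# model = LinearRegression().fit(X_train, y_train)
# lower, upper = split_conformal_intervals(model, X_cal, y_cal, X_test)
```

The same calibration idea underlies conformalized variants of more complex models (e.g., KANs or generative pipelines); only the nonconformity score changes.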
Sources
Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective
Kolmogorov-Arnold Networks: Approximation and Learning Guarantees for Functions and their Derivatives
Conformalized-KANs: Uncertainty Quantification with Coverage Guarantees for Kolmogorov-Arnold Networks (KANs) in Scientific Machine Learning