Methodologies in Stochastic Processes and Bayesian Inference

Report on Current Developments in the Research Area

General Direction of the Field

Recent advancements in the research area are characterized by a strong emphasis on novel methodologies and theoretical frameworks that address limitations inherent in existing models. A significant trend is the integration of stochastic processes, Bayesian inference, and deep learning techniques to enhance the accuracy, efficiency, and applicability of computational methods. This integration is particularly evident in new algorithms for data assimilation, Bayesian computation, and probabilistic regression that move beyond traditional approaches.

One of the key areas of focus is the improvement of stochastic gradient methods, particularly in the context of continuous-time observations and multiscale systems. Researchers are exploring ways to enhance the accuracy of drift estimation in such systems by incorporating filtering techniques and theoretical guarantees of asymptotic unbiasedness. This direction is crucial for advancing the field of stochastic control and optimization, where accurate drift identification is essential for robust decision-making.
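The idea can be illustrated on a toy one-dimensional Ornstein-Uhlenbeck model. The sketch below runs an online stochastic-gradient-in-continuous-time update for the drift parameter, with an exponential filter on the state standing in for the filtered-data step; the model, filter bandwidth, and step-size schedule are illustrative assumptions, not the cited paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Ornstein-Uhlenbeck toy model dX = -alpha*X dt + sigma dW; the goal is
# to recover the drift parameter alpha online from one continuous-time path.
alpha_true, sigma, dt, n = 2.0, 0.5, 1e-3, 200_000
dW = rng.normal(0.0, np.sqrt(dt), n)

# Online update: ascend the Girsanov log-likelihood, whose alpha-gradient
# increment is -x*(dx + alpha*x*dt).  The exponential filter on x stands in
# for the filtered-data correction used for multiscale misspecification
# (an illustrative stand-in, not the paper's construction).
x, x_filt, alpha_hat, lam = 1.0, 1.0, 0.0, 50.0
for i in range(n):
    dx = -alpha_true * x * dt + sigma * dW[i]
    x_filt += lam * (x - x_filt) * dt            # exponential filter
    lr = 5.0 / (1.0 + 0.1 * i * dt)              # decaying step size
    alpha_hat -= lr * x_filt * (dx + alpha_hat * x * dt)
    x += dx

print(f"online drift estimate: {alpha_hat:.2f} (true value {alpha_true})")
```

With a long enough path and a decaying step size, the estimate settles near the true drift, which is the asymptotic-unbiasedness behaviour the theory aims to guarantee.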

Another notable trend is the advancement in Bayesian computation and inference, where generative diffusion models and Monte Carlo methods are being refined to handle large-scale inverse problems more efficiently. The introduction of multilevel Monte Carlo strategies and novel sampling algorithms is reducing computational costs while maintaining high accuracy, which is particularly beneficial in computational imaging and other high-dimensional applications.
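The cost-reduction mechanism of multilevel Monte Carlo can be sketched in its classical SDE setting: couple coarse and fine discretisations through shared Brownian increments, then telescope the corrections across levels so that only a few samples are needed at the expensive fine levels. The geometric-Brownian-motion model and per-level sample counts below are assumptions for illustration, not taken from the cited imaging application.

```python
import numpy as np

rng = np.random.default_rng(1)

# Multilevel Monte Carlo estimate of E[X_T] for geometric Brownian motion
# dX = mu*X dt + sigma*X dW, X_0 = 1 (parameters assumed for illustration).
mu, sigma, T = 0.05, 0.2, 1.0

def level_estimator(l, n_samples):
    """Mean of P_l - P_{l-1}, coupling paths via shared Brownian increments."""
    n_fine = 2 ** l
    h_fine = T / n_fine
    dW = rng.normal(0.0, np.sqrt(h_fine), (n_samples, n_fine))
    x_fine = np.ones(n_samples)
    for i in range(n_fine):                      # fine Euler-Maruyama path
        x_fine += mu * x_fine * h_fine + sigma * x_fine * dW[:, i]
    if l == 0:
        return x_fine.mean()
    h_coarse = 2 * h_fine
    x_coarse = np.ones(n_samples)
    for i in range(n_fine // 2):                 # coarse path: summed increments
        dw = dW[:, 2 * i] + dW[:, 2 * i + 1]
        x_coarse += mu * x_coarse * h_coarse + sigma * x_coarse * dw
    return (x_fine - x_coarse).mean()

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].  The coupled
# correction variance decays with level, so fine levels need few samples.
samples_per_level = [100_000, 40_000, 16_000, 6_000, 2_500]
estimate = sum(level_estimator(l, n) for l, n in enumerate(samples_per_level))
print(f"MLMC estimate: {estimate:.4f}  (exact E[X_T] = {np.exp(mu * T):.4f})")
```

The same telescoping idea underlies the multilevel strategies for generative diffusion models, with discretisation level playing the role of the diffusion-sampler resolution.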

In the realm of probabilistic regression, there is growing interest in scoring rules that extend beyond mean target prediction, especially in multivariate settings. The proposed Conditional CRPS (continuous ranked probability score) is a significant innovation: it admits closed-form expressions and is sensitive to correlation between targets, addressing limitations of traditional maximum likelihood estimation.
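To make the closed-form point concrete, the univariate CRPS of a Gaussian forecast, which is the basic building block such scoring rules extend, has a well-known closed form that can be checked against a sample-based estimate. This is a minimal sketch of that building block only; the multivariate Conditional CRPS construction itself is not reproduced here.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a N(mu, sigma^2) forecast against observation y."""
    z = (y - mu) / sigma
    pdf = exp(-0.5 * z * z) / sqrt(2 * pi)
    cdf = 0.5 * (1 + erf(z / sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / sqrt(pi))

def crps_samples(y, samples):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|."""
    samples = np.sort(samples)
    n = len(samples)
    term1 = np.abs(samples - y).mean()
    # E|X - X'| via the sorted-sample identity sum_i (2i - n - 1) * x_(i).
    weights = 2 * np.arange(1, n + 1) - n - 1
    term2 = 2 * (weights * samples).sum() / (n * n)
    return term1 - 0.5 * term2

rng = np.random.default_rng(2)
mu, sigma, y = 0.3, 1.2, 1.0
exact = crps_gaussian(y, mu, sigma)
approx = crps_samples(y, rng.normal(mu, sigma, 200_000))
print(f"closed form: {exact:.4f}, sample estimate: {approx:.4f}")
```

Having such closed forms available per conditional makes the score cheap to evaluate inside a training loop, which is what makes it attractive as a loss for multivariate regression networks.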

The field is also witnessing advancements in the theoretical analysis of Gaussian processes, particularly in handling heteroscedastic noise. The development of exact posterior distributions for heteroscedastic Gaussian processes is a notable achievement, providing a robust framework for data-driven system identification and control.
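Structurally, a heteroscedastic Gaussian process replaces the usual constant noise variance with an input-dependent diagonal noise matrix, and the Gaussian posterior remains available in closed form. The sketch below shows that structure for fixed, assumed kernel hyperparameters and a hand-chosen noise profile; it is a minimal illustration, not the exact posterior construction of the cited analysis.

```python
import numpy as np

def rbf(a, b, ell=0.15, sf=1.0):
    """Squared-exponential kernel (hyperparameters assumed, not fitted)."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
noise_var = 0.01 + 0.2 * x**2            # input-dependent (heteroscedastic) noise
y = np.sin(2 * np.pi * x) + rng.normal(0.0, np.sqrt(noise_var))

# Exact GP posterior with a *diagonal* noise matrix R = diag(noise_var):
#   mean = K_* (K + R)^{-1} y,   cov = K_** - K_* (K + R)^{-1} K_*^T.
xs = np.linspace(0, 1, 100)
K = rbf(x, x) + np.diag(noise_var)
Ks = rbf(xs, x)
mean = Ks @ np.linalg.solve(K, y)
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

print(f"posterior std near x=0: {std[0]:.3f}, near x=1: {std[-1]:.3f}")
```

The posterior uncertainty correctly widens where the observation noise is larger, which is exactly the property that makes these models useful for data-driven system identification and control.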

Noteworthy Papers

  1. Stochastic gradient descent in continuous time for drift identification in multiscale diffusions: This paper introduces a novel filtering approach to resolve the misspecification issue in drift estimation, backed by both theoretical guarantees and numerical experiments.

  2. Scoring rule nets: beyond mean target prediction in multivariate regression: The proposal of Conditional CRPS as a multivariate scoring rule is a significant advancement, demonstrating superior performance on both synthetic and real-world datasets.

  3. Bayesian computation with generative diffusion models by Multilevel Monte Carlo: The multilevel Monte Carlo strategy significantly reduces computational costs in Bayesian imaging problems, offering a practical solution for large-scale inverse problems.

  4. Functional Stochastic Gradient MCMC for Bayesian Neural Networks: The introduction of functional SGMCMC addresses the limitations of parameter-space variational inference, offering improved accuracy and uncertainty quantification in Bayesian neural networks.

Sources

Stochastic gradient descent in continuous time for drift identification in multiscale diffusions

Theoretical Analysis of Heteroscedastic Gaussian Processes with Posterior Distributions

Scoring rule nets: beyond mean target prediction in multivariate regression

A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting

A competitive baseline for deep learning enhanced data assimilation using conditional Gaussian ensemble Kalman filtering

Neural Control Variates with Automatic Integration

Bayesian computation with generative diffusion models by Multilevel Monte Carlo

Harmonic Path Integral Diffusion

Non-asymptotic convergence analysis of the stochastic gradient Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with applications to training of ReLU neural networks

Feynman-Kac Formula for Nonlinear Schrödinger Equations with Applications in Numerical Approximations

Functional Stochastic Gradient MCMC for Bayesian Neural Networks
