Optimization and Inverse Problem Techniques

Report on Current Developments in the Research Area

General Direction of the Field

Recent work in this area focuses on improving the efficiency and robustness of optimization and inverse-problem methods, particularly through the integration of stochastic and randomized techniques. The field is shifting toward more adaptive, computationally efficient algorithms that handle complex, high-dimensional problems with greater accuracy at lower computational cost. This trend is driven by the need to address real-world settings in which data are noisy, incomplete, or arrive sequentially.

One of the key innovations is the development of stochastic versions of established optimization methods, such as the Gauss-Newton method. These stochastic extensions aim to leverage mini-batching and randomized sketching to improve both the accuracy and computational efficiency of reconstructions. The convergence analysis and error decomposition of these new algorithms are being rigorously studied, providing a solid theoretical foundation for their practical applications.
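As a rough illustration of how such a step might look, the sketch below combines a uniformly sampled mini-batch of residuals with a Gaussian sketch of the batched Jacobian before solving the regularized normal equations. The function names, the Levenberg-Marquardt-style regularization, and the batch and sketch sizes are illustrative assumptions, not the algorithm of the cited paper.

```python
# Minimal sketch of one stochastic Gauss-Newton step: (a) evaluate residuals
# and Jacobians on a random mini-batch, (b) compress them with a Gaussian
# sketch, (c) take a regularized Gauss-Newton update. Illustrative only.
import numpy as np

def stochastic_gauss_newton_step(x, residual_fn, jacobian_fn, n_data,
                                 batch_size=32, sketch_size=16,
                                 reg=1e-3, rng=None):
    """residual_fn(x, idx) -> residuals on rows idx (shape (b,));
    jacobian_fn(x, idx) -> their Jacobian w.r.t. x (shape (b, dim))."""
    rng = np.random.default_rng() if rng is None else rng

    # Mini-batching: use only a random subset of the data terms.
    idx = rng.choice(n_data, size=min(batch_size, n_data), replace=False)
    r = residual_fn(x, idx)
    J = jacobian_fn(x, idx)

    # Randomized sketching: compress the row (data) dimension.
    S = rng.standard_normal((sketch_size, J.shape[0])) / np.sqrt(sketch_size)
    Js, rs = S @ J, S @ r

    # Regularized (Levenberg-Marquardt-style) Gauss-Newton update.
    H = Js.T @ Js + reg * np.eye(J.shape[1])
    return x - np.linalg.solve(H, Js.T @ rs)

# Toy usage: recover the rate a in y = exp(-a * t) from noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 200)
y = np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
res = lambda x, idx: np.exp(-x[0] * t[idx]) - y[idx]
jac = lambda x, idx: (-t[idx] * np.exp(-x[0] * t[idx]))[:, None]
x = np.array([0.5])
for _ in range(100):
    x = stochastic_gauss_newton_step(x, res, jac, t.size, rng=rng)
print(x)  # expected to fluctuate around 1.5
```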

Another significant development is the exploration of deflation techniques to systematically find multiple local minima in nonlinear least squares problems. These techniques are particularly useful in applications like the inverse eigenvalue problem, where identifying all local minima can provide deeper insights into the underlying physical and chemical processes.
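To make the idea concrete, the sketch below uses a generic multiplicative deflation operator, M(x) = 1/||x - x*||^p + sigma, applied to the objective after each minimum is found, so that subsequent runs are repelled from known solutions. This is an assumed, textbook-style deflation construction rather than the specific formulation of the cited work.

```python
# Minimal sketch of deflation for finding multiple local minima (assumed
# shifted multiplicative deflation of the objective; illustrative only).
import numpy as np
from scipy.optimize import minimize

def deflated_objective(f, found_minima, p=2.0, sigma=1.0):
    """Multiply f by 1/||x - x*||^p + sigma for every known minimum x*."""
    def g(x):
        val = f(x)
        for x_star in found_minima:
            val *= 1.0 / np.linalg.norm(x - x_star) ** p + sigma
        return val
    return g

def find_multiple_minima(f, starts):
    found = []
    for x0 in starts:
        res = minimize(deflated_objective(f, found), x0, method="BFGS")
        if res.success:
            found.append(res.x)
    return found

# Toy double-well least-squares objective with minima near x = +1 and x = -1.
f = lambda x: (x[0] ** 2 - 1.0) ** 2
print(find_multiple_minima(f, starts=[np.array([0.5]), np.array([0.4])]))
```

The second run starts close to the first minimum, but the deflation factor raises the objective there, so the optimizer is driven toward the other well.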

Trust-region methods are also being advanced to handle stochastic optimization problems with deterministic constraints. These methods are designed to find both first- and second-order stationary points, addressing the challenges posed by saddle points and the Maratos effect. The incorporation of random models and novel step computations is enhancing the robustness and convergence properties of these algorithms.
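The sketch below illustrates only the core mechanism, a trust-region ratio test driven by randomly sampled mini-batch models with a simple Cauchy-point step; the deterministic-constraint handling, SQP step decomposition, and second-order escape steps of the actual method are omitted, and all function names are assumptions.

```python
# Minimal sketch of a trust-region loop with random (mini-batch) models:
# build a local quadratic model from sampled data, take a Cauchy-point step,
# and accept or reject it via the actual-vs-predicted reduction ratio.
import numpy as np

def stochastic_trust_region(f_fn, grad_fn, hess_fn, x0, n_data,
                            batch_size=32, delta=1.0, max_iter=50,
                            eta=0.1, rng=None):
    """f_fn/grad_fn/hess_fn take (x, idx) and evaluate on the sampled rows idx."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        idx = rng.choice(n_data, size=min(batch_size, n_data), replace=False)
        g, B = grad_fn(x, idx), hess_fn(x, idx)   # random model at x

        # Cauchy point: minimize the quadratic model along -g within radius delta.
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg))
        s = -(tau * delta / (np.linalg.norm(g) + 1e-12)) * g

        # Ratio of actual to predicted reduction decides acceptance.
        pred = -(g @ s + 0.5 * (s @ B @ s))
        actual = f_fn(x, idx) - f_fn(x + s, idx)
        rho = actual / max(pred, 1e-12)
        if rho > eta:
            x = x + s                               # accept the trial step
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x
```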

Lastly, there is a growing interest in preserving the positivity of the Gauss-Newton Hessian through random sampling. This approach is crucial for ensuring the numerical stability and reconstructability of unknown parameters in inverse problems. Techniques from randomized linear algebra are being employed to develop efficient down-sampling strategies that maintain the strict positivity of the Hessian, thereby improving the overall performance of the optimization process.
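A minimal illustration is sketched below, assuming norm-proportional row sampling of the Jacobian (a standard randomized linear algebra construction, not necessarily the scheme of the cited paper): it forms a sub-sampled Gauss-Newton Hessian whose expectation matches the full one and checks that its smallest eigenvalue remains strictly positive.

```python
# Minimal sketch: down-sample rows of the Jacobian J with probabilities
# proportional to their squared norms, rescale so the sampled Gauss-Newton
# Hessian is unbiased, and verify that it remains strictly positive definite.
import numpy as np

def sampled_gn_hessian(J, n_samples, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    probs = np.sum(J ** 2, axis=1)
    probs /= probs.sum()
    idx = rng.choice(J.shape[0], size=n_samples, replace=True, p=probs)
    # Rescale sampled rows so that E[Js.T @ Js] = J.T @ J.
    scale = 1.0 / np.sqrt(n_samples * probs[idx])
    Js = scale[:, None] * J[idx]
    return Js.T @ Js

rng = np.random.default_rng(0)
J = rng.standard_normal((500, 10))              # stand-in Jacobian, m >> n
H_full = J.T @ J
H_sub = sampled_gn_hessian(J, n_samples=80, rng=rng)
print("smallest eigenvalue (full):   ", np.linalg.eigvalsh(H_full).min())
print("smallest eigenvalue (sampled):", np.linalg.eigvalsh(H_sub).min())
```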

Noteworthy Papers

  • Stochastic Iteratively Regularized Gauss-Newton Method: Introduces a novel algorithm that significantly reduces computational time while maintaining high accuracy in reconstruction tasks.
  • Trust-Region Sequential Quadratic Programming for Stochastic Optimization: Demonstrates superior performance over existing methods by addressing both first- and second-order stationary points in stochastic optimization problems.
  • Preserving Positivity of Gauss-Newton Hessian through Random Sampling: Proposes an innovative down-sampling strategy that enhances the numerical stability of inverse problems by preserving the positivity of the Hessian.

Sources

A Stochastic Iteratively Regularized Gauss-Newton Method

Deflation Techniques for Finding Multiple Local Minima of a Nonlinear Least Squares Problem

Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models

Preserving Positivity of Gauss-Newton Hessian through Random Sampling
