Optimization Research

Report on Current Developments in Optimization Research

General Direction of the Field

Recent developments in optimization research indicate a significant shift toward more sophisticated and adaptive methods that address complex problem structures and uncertainty. The focus is on algorithms that can handle nonconvex, nonsmooth, and stochastic optimization problems, which are prevalent across scientific and engineering applications. The field is also seeing a surge in the integration of machine learning techniques with traditional optimization methods, aimed at enhancing exploration, learning, and convergence properties.

One key trend is the formulation of optimization problems using probabilistic and stochastic models, which allows for more robust and adaptive solutions. This is particularly evident in the use of Monte Carlo methods, multi-level Monte Carlo (MLMC) gradient methods, and stochastic compositional minimax optimization. These methods are designed to cope with biased oracles, high computational costs, and complex compositional structures, making them suitable for applications ranging from machine learning to operations research.
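
To make the multi-level idea concrete, the sketch below (Python with NumPy) builds a telescoping MLMC gradient estimate for a toy quadratic objective using a hypothetical biased oracle whose bias and noise shrink as the level grows; the oracle, levels, and sample counts are illustrative assumptions, not the construction from the cited paper.

    import numpy as np

    def grad_oracle(x, n_inner, noise):
        # Toy biased oracle for the gradient of 0.5*||x||^2:
        # bias ~ 1/n_inner, noise averaged over n_inner inner samples.
        return x * (1.0 + 1.0 / n_inner) + noise[:n_inner].mean(axis=0)

    def mlmc_gradient(x, max_level, samples_per_level, rng):
        # Telescoping estimate: E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}].
        # Level 0: cheap base estimator, heavily sampled.
        est = np.mean(
            [grad_oracle(x, 1, rng.standard_normal((1, x.size)))
             for _ in range(samples_per_level[0])],
            axis=0,
        )
        # Correction levels: fine and coarse estimates reuse the same inner
        # samples (coupling), so their difference has low variance and can be
        # estimated with progressively fewer samples at higher levels.
        for level in range(1, max_level + 1):
            diffs = []
            for _ in range(samples_per_level[level]):
                noise = rng.standard_normal((2 ** level, x.size))
                diffs.append(grad_oracle(x, 2 ** level, noise)
                             - grad_oracle(x, 2 ** (level - 1), noise))
            est += np.mean(diffs, axis=0)
        return est

    rng = np.random.default_rng(0)
    x = np.ones(5)
    print(mlmc_gradient(x, max_level=3, samples_per_level=[64, 32, 16, 8], rng=rng))

The point of the telescoping sum is that the expensive high-accuracy levels need only a handful of samples, which is how MLMC-style methods reduce the cost of working with biased oracles.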

Another significant development is the study of minimax optimization problems, which are central to areas such as generative adversarial networks (GANs) and robust optimization. Research is advancing toward efficient algorithms that converge to stationary points in nonconvex-concave and nonconvex-strongly-concave settings, including two-timescale gradient descent ascent (TTGDA) algorithms and stochastic compositional minimax optimization methods.
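
To illustrate the two-timescale idea, the minimal sketch below runs gradient descent ascent on a simple toy problem that is strongly concave in the inner variable, using a much larger step size on the ascent variable so it tracks the inner maximizer; the objective and step sizes are assumptions for illustration, not taken from the cited paper.

    # Toy objective f(x, y) = x*y - 0.5*y**2 (strongly concave in y).
    def grad_x(x, y):
        return y        # df/dx

    def grad_y(x, y):
        return x - y    # df/dy

    def ttgda(x0, y0, eta_x=0.01, eta_y=0.5, iters=2000):
        x, y = x0, y0
        for _ in range(iters):
            # Fast timescale: gradient ascent on the inner maximization variable.
            y = y + eta_y * grad_y(x, y)
            # Slow timescale: gradient descent on the outer minimization variable.
            x = x - eta_x * grad_x(x, y)
        return x, y

    x_final, y_final = ttgda(x0=2.0, y0=-1.0)
    print(x_final, y_final)  # both approach 0, the saddle point of f

The step-size separation (eta_y much larger than eta_x) is the defining feature of TTGDA-style methods: the inner variable is driven close to the maximizer before the outer variable moves appreciably.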

Noteworthy Papers

  1. Exploratory Optimal Stopping: A Singular Control Formulation - This paper introduces a regularized optimal stopping problem using cumulative residual entropy, providing a unique optimal exploratory strategy and a reinforcement learning algorithm based on policy iteration.
  2. Multi-level Monte-Carlo Gradient Methods for Stochastic Optimization with Biased Oracles - This work presents a family of MLMC gradient methods that significantly improve the best-known complexities for conditional stochastic optimization and shortfall risk optimization, demonstrating superior performance in various applications.
  3. Stochastic Compositional Minimax Optimization with Provable Convergence Guarantees - The proposed CODA algorithm addresses stochastic compositional minimax problems in various settings, providing convergence guarantees and setting a new benchmark for nonconvex-strongly-concave and nonconvex-concave compositional minimax problems.

These papers represent innovative advancements in the field, offering novel methodologies and theoretical insights that are likely to influence future research and practical applications.

Sources

Exploratory Optimal Stopping: A Singular Control Formulation

Empirical risk minimization for risk-neutral composite optimal control with applications to bang-bang control

Multi-level Monte-Carlo Gradient Methods for Stochastic Optimization with Biased Oracles

A Markovian Model for Learning-to-Optimize

Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization

Stochastic Compositional Minimax Optimization with Provable Convergence Guarantees

Zeroth-Order Stochastic Mirror Descent Algorithms for Minimax Excess Risk Optimization

Stable Formulations in Optimistic Bilevel Optimization