Optimization and Machine Learning

Report on Recent Developments in Optimization and Machine Learning

General Trends and Innovations

Recent advances in optimization and machine learning are marked by a significant shift toward more efficient and scalable methods, particularly in hyperparameter optimization (HPO), neural architecture search (NAS), and multiobjective optimization (MOO). The focus is increasingly on adaptive and gradient-based approaches that can handle complex, high-dimensional problems with minimal computational overhead.

  1. Adaptive Fidelity in Optimization: A notable trend is the development of methods that adaptively determine the fidelity of the optimization process, most visibly in extensions of Bayesian optimization (BO) to multi-fidelity settings. Rather than following a fixed schedule, these methods decide per hyperparameter configuration how much evaluation budget to spend, feeding the resulting observations to the surrogate model and thereby improving both efficiency and final performance. Adaptive identification of fidelity levels enables faster, more precise convergence, making these techniques well suited to large-scale applications (see the first sketch after this list).

  2. Landscape-Aware Algorithm Configuration: There is growing emphasis on landscape-aware approaches to algorithm configuration, where characteristics of the optimization landscape guide the selection and tuning of algorithms. This involves training predictive models on diverse sets of problem instances, such as randomly generated functions, to improve the generalizability and robustness of the resulting configurations. Neural network models are increasingly employed for these tasks and have shown superior performance in identifying near-optimal configurations across problem dimensions (see the second sketch after this list).

  3. Gradient-Based Multiobjective Optimization: Multiobjective optimization is witnessing a shift from evolutionary algorithms to gradient-based methods, which exploit first-order (and sometimes higher-order) information to optimize multiple objectives simultaneously. These methods are particularly advantageous for large-scale models, where zeroth-order evolutionary approaches struggle due to their computational intensity. The introduction of gradient-based libraries is facilitating more efficient and scalable solutions to multiobjective problems, with applications in multi-task learning and fairness constraints (see the third sketch after this list).

  4. Bilevel Multi-objective Optimization: The challenge of bilevel optimization with multiple objectives at both levels is being addressed through techniques that reduce its heavy computational cost. One promising approach predicts the lower-level Pareto set directly, rather than re-optimizing it from scratch for every upper-level decision, thereby significantly reducing the number of function evaluations required. Combined with a neural network-based mapping, this shows promise for handling complex bilevel problems efficiently (see the fourth sketch after this list).
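
To make point 1 concrete, here is a minimal sketch of adaptive-fidelity evaluation: a pool of configurations is scored cheaply, and only the competitive ones are re-evaluated at higher fidelity. This is a generic successive-halving-style loop, not FastBO's surrogate-driven fidelity identification; the `evaluate` function, the promotion rule, and the fidelity schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(config, fidelity):
    """Toy objective: validation error of `config` trained at `fidelity` epochs;
    stands in for an expensive training run."""
    return (config - 0.3) ** 2 + 1.0 / fidelity + 0.01 * rng.standard_normal()

configs = rng.uniform(0, 1, size=20)      # candidate hyperparameter values
fidelity, max_fidelity = 1, 27
scores = np.array([evaluate(c, fidelity) for c in configs])

# Raise fidelity adaptively: only configurations that look competitive at the
# current fidelity are re-evaluated at the next (3x higher) fidelity.
while fidelity < max_fidelity and len(configs) > 1:
    keep = scores <= np.median(scores)
    configs, fidelity = configs[keep], fidelity * 3
    scores = np.array([evaluate(c, fidelity) for c in configs])

print(f"best config ~ {configs[np.argmin(scores)]:.3f} at fidelity {fidelity}")
```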
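
For point 2, the following sketch trains a regressor from cheap landscape features of randomly generated functions to a tuned algorithm parameter, then configures an unseen problem from its features alone. It is a toy stand-in for the cited multi-output mixed regression and classification models: the feature set, the target (a gradient-descent step size), and the problem generator are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def random_problem():
    """Randomly generated 1-D test function standing in for diverse instances."""
    freq, shift = rng.uniform(1, 5), rng.uniform(-1, 1)
    return lambda x: np.sin(freq * x) + (x - shift) ** 2

def landscape_features(f, n=64):
    """Cheap exploratory-landscape-style statistics from a random sample."""
    x = rng.uniform(-2, 2, n)
    fx = f(x)
    return [fx.mean(), fx.std(), np.corrcoef(x, fx)[0, 1],
            np.abs(np.diff(np.sort(fx))).mean()]

def tuned_step_size(f):
    """Label: grid-search the best gradient-descent step size on f.
    In practice this label would come from actual tuning runs."""
    def final_value(lr):
        x = 0.0
        for _ in range(50):
            g = (f(x + 1e-4) - f(x - 1e-4)) / 2e-4   # central-difference gradient
            x -= lr * g
        return f(x)
    grid = np.logspace(-3, 0, 10)
    return grid[np.argmin([final_value(lr) for lr in grid])]

problems = [random_problem() for _ in range(200)]
X = [landscape_features(f) for f in problems]
y = [tuned_step_size(f) for f in problems]
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Configure an unseen problem from its landscape features alone.
f_new = random_problem()
print("predicted step size:", model.predict([landscape_features(f_new)])[0])
```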
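
For point 3, the sketch below applies the classical two-objective MGDA update, in which the minimum-norm point of the convex hull of the per-objective gradients serves as a common descent direction. This is a textbook gradient-based method written in plain PyTorch, not LibMOON's API; the quadratic objectives are invented for the example.

```python
import torch

# Two conflicting quadratic objectives sharing parameters x.
x = torch.tensor([2.0, 2.0], requires_grad=True)
f1 = lambda v: ((v - torch.tensor([1.0, 0.0])) ** 2).sum()
f2 = lambda v: ((v - torch.tensor([0.0, 1.0])) ** 2).sum()

lr = 0.1
for step in range(100):
    g1 = torch.autograd.grad(f1(x), x)[0]
    g2 = torch.autograd.grad(f2(x), x)[0]
    # Two-objective MGDA: the minimum-norm point of the convex hull
    # {a*g1 + (1-a)*g2 : a in [0, 1]} is a direction that decreases
    # both objectives whenever one exists.
    a = torch.clamp(((g2 - g1) @ g2) / ((g1 - g2) @ (g1 - g2) + 1e-12), 0.0, 1.0)
    d = a * g1 + (1 - a) * g2
    with torch.no_grad():
        x -= lr * d                      # step along the common descent direction

print("Pareto-stationary x:", x.detach().numpy(),
      "| losses:", f1(x).item(), f2(x).item())
```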
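
For point 4, the sketch below trains a small hypernetwork that maps an upper-level decision and a preference weight to a lower-level Pareto-optimal solution, so the lower level need not be re-optimized for every new upper-level candidate. The objectives, architecture, and weighted scalarization are assumptions for illustration and do not reproduce the cited paper's method.

```python
import torch
import torch.nn as nn

def lower_losses(u, y):
    """Two assumed lower-level objectives, parameterized by upper-level u."""
    return ((y - u) ** 2).sum(dim=-1), ((y + u) ** 2).sum(dim=-1)

# Hypernetwork: (upper-level decision u, preference w) -> lower-level solution.
# Once trained, it replaces a from-scratch lower-level optimization per u.
net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    u = torch.rand(128, 2) * 2 - 1        # sampled upper-level decisions
    w = torch.rand(128, 1)                # sampled preference weights in [0, 1]
    y = net(torch.cat([u, w], dim=-1))
    f1, f2 = lower_losses(u, y)
    loss = (w.squeeze(-1) * f1 + (1 - w.squeeze(-1)) * f2).mean()  # scalarization
    opt.zero_grad()
    loss.backward()
    opt.step()

# For a new upper-level decision, sweeping w traces the predicted lower-level
# Pareto front with one forward pass per preference; no nested optimization.
u_new = torch.tensor([[0.5, -0.2]])
for w in (0.0, 0.5, 1.0):
    y = net(torch.cat([u_new, torch.tensor([[w]])], dim=-1))
    print(f"w={w}: objectives =", [f.item() for f in lower_losses(u_new, y)])
```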

Noteworthy Papers

  • FastBO: Introduces an adaptive fidelity identification strategy that extends any single-fidelity method to the multi-fidelity setting, highlighting its generality and applicability.
  • LibMOON: The first multiobjective optimization library to support gradient-based methods, providing fair benchmarks and an open-source implementation for the community.
  • Pareto Set Prediction Assisted Bilevel Multi-objective Optimization: Proposes a novel approach to reduce computational costs in bilevel multi-objective optimization by predicting the lower-level Pareto set directly.

Sources

FastBO: Fast HPO and NAS with Adaptive Fidelity Identification

Landscape-Aware Automated Algorithm Configuration using Multi-output Mixed Regression and Classification

LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch

Pareto Set Prediction Assisted Bilevel Multi-objective Optimization