Optimization techniques for large language models (LLMs) are advancing rapidly. Researchers are applying Gaussian processes, Bayesian optimization, and hyperparameter tuning to improve LLM performance and efficiency, addressing the challenges posed by the models' scale and complexity; reported results include a higher discovery rate of high-performing reactions and reduced computational overhead. Noteworthy papers include GOLLuM, which reframes LLM finetuning as Gaussian process marginal likelihood optimization, and Optuna vs Code Llama, which investigates whether LLMs are viable for hyperparameter optimization. These developments could make the optimization of such complex models substantially more efficient and effective.
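To make the Gaussian-process-driven optimization loop concrete, the following is a minimal sketch, not the GOLLuM method itself: a GP surrogate whose kernel hyperparameters are fit by maximizing the log marginal likelihood, combined with an expected-improvement acquisition to choose the next candidate. The objective `score_candidate` is a hypothetical stand-in for an expensive evaluation (e.g., running a reaction or a finetuning trial).

```python
# Minimal Bayesian optimization sketch with a GP surrogate (assumptions:
# a 2-D search space in [0, 1]^2 and a hypothetical objective score_candidate).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def score_candidate(x: np.ndarray) -> float:
    # Hypothetical expensive objective; replace with a real evaluation.
    return float(-np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 2))                 # initial design points
y = np.array([score_candidate(x) for x in X])

for _ in range(20):
    # Fitting the GP maximizes the log marginal likelihood
    # with respect to the kernel hyperparameters.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Expected improvement over a pool of random candidates.
    pool = rng.uniform(0.0, 1.0, size=(256, 2))
    mu, sigma = gp.predict(pool, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    # Evaluate the most promising candidate and update the data set.
    x_next = pool[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, score_candidate(x_next))

print("best score:", y.max(), "at", X[np.argmax(y)])
```

In this toy loop the surrogate is cheap to refit after every observation; the appeal of GP-based approaches in the LLM setting is that each real evaluation is expensive, so spending effort on the surrogate and acquisition step pays for itself.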