Large Language Models in Computational Research

The field of computational research is seeing significant advances from the integration of large language models (LLMs). A notable trend is the use of LLMs to automate complex tasks in computational fluid dynamics (CFD) and high-performance computing (HPC): researchers are investigating how well LLMs can generate efficient code, adjust simulation parameters, and solve intricate setup problems. The current state of the art, however, still requires expert supervision, and further development is needed before full automation is realistic; a minimal sketch of such an LLM-in-the-loop CFD workflow follows this paragraph.

Another area of interest is the application of LLMs to combinatorial optimization and multiphysics reasoning, where benchmark suites are being developed to evaluate their performance systematically. These benchmarks reveal both the strengths and the limitations of current approaches and point to promising directions for future research; a second sketch below illustrates how such a benchmark harness might score LLM-proposed solvers. Overall, the integration of LLMs into computational research holds great promise for advancing automation and solving complex problems.

Noteworthy papers include CO-Bench, which introduces a comprehensive benchmark suite for evaluating LLM agents on combinatorial optimization, and FEABench, which evaluates the ability of LLMs to simulate and solve physics, mathematics, and engineering problems using finite element analysis.
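To make the CFD-automation trend concrete, here is a minimal sketch of the kind of workflow systems like OpenFOAMGPT explore: an LLM drafts an OpenFOAM controlDict, a cheap sanity check filters obvious failures, and a human expert reviews the result before anything runs. The OpenAI Python client, model name, prompt, and helper names are illustrative assumptions rather than the papers' actual pipeline; any chat-completion backend (ChatGPT, Qwen, DeepSeek) could be substituted.

```python
from openai import OpenAI  # stand-in client; any chat-completion API would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompt; real systems supply far more case context.
PROMPT = """Write an OpenFOAM controlDict for an incompressible simpleFoam
case: endTime 500, deltaT 1, write results every 50 steps.
Return only the dictionary contents."""

def draft_control_dict() -> str:
    """Ask the model for a candidate controlDict (model name is an assumption)."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever model is being benchmarked
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content

def looks_plausible(text: str) -> bool:
    """Cheap keyword check; a real pipeline would parse and dry-run the case."""
    return all(key in text for key in ("endTime", "deltaT", "writeInterval"))

if __name__ == "__main__":
    candidate = draft_control_dict()
    if looks_plausible(candidate):
        print(candidate)  # expert review is still required before running
    else:
        print("Draft rejected; regenerate or escalate to a human.")
```

The explicit review gate mirrors the papers' finding that expert supervision remains necessary: generated dictionaries can be syntactically valid yet physically wrong.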

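The second sketch: a CO-Bench-style harness evaluates solver code proposed by an LLM agent on held-out problem instances under a wall-clock budget. Everything here is an illustrative simplification rather than the actual CO-Bench API; the nearest-neighbor TSP heuristic stands in for LLM-generated code.

```python
import math
import random
import time

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic, standing in for an LLM-proposed solver."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def evaluate(solver, instances, time_budget_s=5.0):
    """Validate and score a candidate solver on each instance under a budget."""
    scores = []
    for points in instances:
        start = time.perf_counter()
        tour = solver(points)
        elapsed = time.perf_counter() - start
        assert sorted(tour) == list(range(len(points))), "invalid tour"
        assert elapsed <= time_budget_s, "time budget exceeded"
        scores.append(tour_length(points, tour))
    return scores

if __name__ == "__main__":
    random.seed(0)
    instances = [[(random.random(), random.random()) for _ in range(100)]
                 for _ in range(3)]
    print(evaluate(nearest_neighbor_tour, instances))  # lower is better
```

A production harness would additionally run candidate code in a sandboxed process with memory and CPU limits, since LLM-generated solvers can crash, loop, or game the scorer.
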
Sources

A Status Quo Investigation of Large Language Models towards Cost-Effective CFD Automation with OpenFOAMGPT: ChatGPT vs. Qwen vs. Deepseek

LLM & HPC: Benchmarking DeepSeek's Performance in High-Performance Computing Tasks

CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization

FEABench: Evaluating Language Models on Multiphysics Reasoning Ability
