Advances in Large Language Model Evaluation and Applications

The field of large language models (LLMs) is evolving rapidly, with a focus on improving evaluation methods, applications, and reliability. Recent work highlights the value of comprehensive evaluation frameworks, including agent-based and LLM-as-judge approaches, for assessing model performance on tasks such as code generation, clinical diagnosis, and social simulation. Noteworthy papers in this area include CodeVisionary, which proposes an agent-based framework for evaluating LLMs in code generation, and Med-CoDE, which introduces a critique-based disagreement evaluation framework for medical LLMs, while the JETTS benchmark and work on multi-agent meta-judges study the reliability of LLM-as-judge evaluation itself. In addition, research on LLM-driven NPCs, cross-platform dialogue systems, and social simulation platforms such as BookWorld and SOTOPIA-S4 demonstrates the potential of LLMs for interactive applications and creative story generation.
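
Several of the listed papers (for example, the JETTS benchmark and the meta-judge framework) revolve around the LLM-as-judge pattern, in which one model scores another model's outputs against a rubric. The sketch below is illustrative only and is not taken from any of the papers above: the judge prompt, the `judge_candidate` and `rank_candidates` helpers, and the `query_judge_model` callable are all assumptions, and the stub judge merely stands in for a real model call.

```python
# Minimal sketch of an LLM-as-judge evaluation loop (illustrative assumptions only).
# `query_judge_model` is a hypothetical placeholder; in practice it would wrap a call
# to whatever LLM API or local model serves as the judge.

import json
from typing import Callable

JUDGE_PROMPT = """You are an impartial judge. Score the candidate answer
to the task on a 1-5 scale for correctness and completeness.
Respond with JSON: {{"score": <int>, "rationale": "<short reason>"}}

Task: {task}
Candidate answer: {answer}
"""


def judge_candidate(task: str, answer: str,
                    query_judge_model: Callable[[str], str]) -> dict:
    """Ask the judge model to score one candidate and parse its verdict."""
    raw = query_judge_model(JUDGE_PROMPT.format(task=task, answer=answer))
    try:
        verdict = json.loads(raw)
    except json.JSONDecodeError:
        # Judges sometimes return malformed output; record it rather than crash.
        verdict = {"score": None, "rationale": f"unparseable: {raw[:80]}"}
    return verdict


def rank_candidates(task: str, answers: list[str],
                    query_judge_model: Callable[[str], str]) -> list[tuple[str, dict]]:
    """Score every candidate independently and sort best-first."""
    scored = [(a, judge_candidate(task, a, query_judge_model)) for a in answers]
    return sorted(scored, key=lambda x: (x[1]["score"] is None, -(x[1]["score"] or 0)))


if __name__ == "__main__":
    # Stub judge so the sketch runs without any model access: it naively prefers
    # longer answers, standing in for a real LLM call.
    def stub_judge(prompt: str) -> str:
        answer = prompt.split("Candidate answer:", 1)[1].strip()
        return json.dumps({"score": min(5, 1 + len(answer) // 20),
                           "rationale": "stub: length heuristic"})

    ranking = rank_candidates(
        "Explain what an LLM-as-judge evaluator does.",
        ["It scores outputs.",
         "A judge model grades candidate outputs against a rubric and returns a verdict."],
        stub_judge,
    )
    for answer, verdict in ranking:
        print(verdict["score"], "-", answer)
```

Frameworks like those surveyed above typically extend this basic loop with multi-agent debate, critique generation, or meta-judging to reduce single-judge bias.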

Sources

CodeVisionary: An Agent-based Framework for Evaluating Large Language Models in Code Generation

LLM Sensitivity Evaluation Framework for Clinical Diagnosis

LLM-Driven NPCs: Cross-Platform Dialogue System for Games and Social Platforms

CPR: Leveraging LLMs for Topic and Phrase Suggestion to Facilitate Comprehensive Product Reviews

FAIRGAME: a Framework for AI Agents Bias Recognition using Game Theory

BookWorld: From Novels to Interactive Agent Societies for Creative Story Generation

EvalAgent: Discovering Implicit Evaluation Criteria from the Web

Evaluating Judges as Evaluators: The JETTS Benchmark of LLM-as-Judges as Test-Time Scaling Evaluators

Med-CoDE: Medical Critique based Disagreement Evaluation Framework

AGI Is Coming... Right After AI Learns to Play Wordle

SOTOPIA-S4: a user-friendly system for flexible, customizable, and large-scale social simulation

Leveraging LLMs as Meta-Judges: A Multi-Agent Framework for Evaluating LLM Judgments
