Advancements in Large Language Models

The field of large language models (LLMs) is evolving rapidly, with a focus on improving their performance, flexibility, and ability to handle diverse real-world scenarios. Recent developments center on enhancing the dialogue capabilities of LLMs, optimizing prompts for better model outputs, and adapting models to user preferences. Noteworthy papers include DiaTool-DPO, which achieves state-of-the-art performance in information gathering and tool-call rejection, and GREATERPROMPT, which provides a unified, customizable framework for prompt optimization that is accessible to non-expert users.
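DiaTool-DPO applies Direct Preference Optimization to multi-turn, tool-augmented dialogues. As a rough illustration of the family of methods involved, the sketch below shows the standard DPO objective over a preferred/dispreferred pair of trajectories; the function name, tensor values, and beta setting are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the standard DPO objective that preference-optimization
# methods such as DiaTool-DPO build on. Assumes per-trajectory log-probabilities
# have already been computed under the policy and a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss over (chosen, rejected) pairs."""
    # Log-ratio of policy to reference model for each trajectory.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the policy to widen the margin between chosen and rejected.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Dummy log-probabilities for a batch of two preference pairs (illustrative only).
loss = dpo_loss(torch.tensor([-12.3, -9.8]),
                torch.tensor([-15.1, -14.2]),
                torch.tensor([-13.0, -10.5]),
                torch.tensor([-14.8, -13.9]))
print(loss.item())
```

In a tool-augmented dialogue setting, the chosen and rejected items would correspond to full multi-turn trajectories, for example one that gathers missing information before calling a tool or correctly declines an infeasible call, versus one that does not.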

Sources

DiaTool-DPO: Multi-Turn Direct Preference Optimization for Tool-Augmented Large Language Models

GREATERPROMPT: A Unified, Customizable, and High-Performing Open-Source Toolkit for Prompt Optimization

ADAPT: Actively Discovering and Adapting to Preferences for any Task

DDPT: Diffusion-Driven Prompt Tuning for Large Language Model Code Generation

AutoPDL: Automatic Prompt Optimization for LLM Agents

Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors

ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models
