Advances in Human-AI Interaction and Large Language Model Capabilities

The field of human-AI interaction is evolving rapidly, with much current work focused on improving how large language models (LLMs) understand and respond to user queries. Recent research highlights the roles of prompt rewriting, metacognition, and uncertainty quantification in improving LLM performance. Studies show that rephrasing ineffective prompts can elicit better responses from conversational systems while preserving the user's original intent, and that neural-symbolic frameworks and attention-based methods strengthen LLMs' ability to model interactions and quantify uncertainty. Noteworthy papers include SUNAR, a novel approach to neighborhood-aware retrieval for complex question answering, and ECLAIR, a multi-agent framework for interactive disambiguation. Research on preference-based learning and retrieval-augmented generation also shows promise for conversational question answering. Overall, the field is moving toward more sophisticated, human-like interaction between users and LLMs.
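To make the uncertainty-quantification theme concrete, here is a minimal, illustrative sketch of one common idea behind semantic uncertainty: sample several answers to the same question, cluster equivalent answers, and measure the entropy of the cluster distribution. This is a toy version for intuition only; it uses exact-match clustering and is not the method of SUNAR or any specific paper above, which rely on richer semantic equivalence and retrieval machinery. The function name `semantic_uncertainty` is a hypothetical label chosen for this sketch.

```python
import math
from collections import Counter

def semantic_uncertainty(answers):
    """Entropy (in bits) over clusters of sampled answers.

    Toy illustration: answers are clustered by normalized exact match.
    Higher entropy means the model's samples disagree more, i.e. the
    model is less certain about its answer.
    """
    clusters = Counter(a.strip().lower() for a in answers)
    total = sum(clusters.values())
    return -sum((c / total) * math.log2(c / total) for c in clusters.values())

# Full agreement across samples -> zero uncertainty;
# an even split across two answers -> 1 bit of uncertainty.
confident = semantic_uncertainty(["Paris", "paris", "Paris"])
uncertain = semantic_uncertainty(["Paris", "Lyon", "Paris", "Lyon"])
```

In practice, systems in this line of work replace the exact-match clustering with a semantic-equivalence check (e.g. an entailment model), so that paraphrases of the same answer fall into one cluster before the entropy is computed.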

Sources

Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation

Metacognition in Content-Centric Computational Cognitive C4 Modeling

An Empirical Study of the Role of Incompleteness and Ambiguity in Interactions with Large Language Models

SUNAR: Semantic Uncertainty based Neighborhood Aware Retrieval for Complex QA

Language Model Uncertainty Quantification with Attention Chain

Agent-Initiated Interaction in Phone UI Automation

ECLAIR: Enhanced Clarification for Interactive Responses in an Enterprise AI Assistant

A Measure Based Generalizable Approach to Understandability

Preference-based Learning with Retrieval Augmented Generation for Conversational Question Answering

Firm or Fickle? Evaluating Large Language Models Consistency in Sequential Interactions

QuestBench: Can LLMs ask the right question to acquire information in reasoning tasks?
