Advances in Retrieval-Augmented Generation

Retrieval-augmented generation (RAG) is advancing rapidly, with recent work aimed at improving large language models (LLMs) by grounding them in external knowledge and strengthening their ability to reason and produce accurate answers. One notable direction is dynamic retrieval that adapts to the query and context, exemplified by Dynamic Alpha Tuning (DAT) for hybrid retrieval and the Dynamic Parametric Retrieval Augmented Generation (DyPRAG) framework for test-time knowledge enhancement. Other work improves RAG systems through multi-agent frameworks, automated decision rule optimization, and memory-aware retrieval mechanisms. There is also growing interest in probing the limitations of query performance prediction and in training utility-based retrievers. Noteworthy papers include PRAISE, a pipeline-based approach to conversational question answering, and MARO, a multi-agent framework with automated decision rule optimization for cross-domain misinformation detection. Overall, the field is converging on RAG techniques that leverage external knowledge more effectively and efficiently to improve LLM accuracy.
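To make the hybrid-retrieval idea behind approaches like DAT concrete, below is a minimal Python sketch of alpha-weighted fusion of dense and sparse retrieval scores. It illustrates only the general score-blending pattern, not the DAT paper's actual algorithm (which sets alpha dynamically per query); every function and variable name here is hypothetical.

```python
# Minimal sketch of alpha-weighted hybrid retrieval score fusion.
# Illustrative only: not the DAT algorithm itself; all names are hypothetical.

from typing import Dict, List, Tuple


def min_max_normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Scale scores to [0, 1] so dense and sparse scores are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc_id: 1.0 for doc_id in scores}
    return {doc_id: (s - lo) / (hi - lo) for doc_id, s in scores.items()}


def hybrid_rank(
    dense_scores: Dict[str, float],   # e.g. cosine similarities from an embedding index
    sparse_scores: Dict[str, float],  # e.g. BM25 scores from a lexical index
    alpha: float,                     # weight on dense scores; a DAT-style method would pick this per query
    top_k: int = 5,
) -> List[Tuple[str, float]]:
    """Blend normalized dense and sparse scores and return the top-k document ids."""
    dense = min_max_normalize(dense_scores)
    sparse = min_max_normalize(sparse_scores)
    fused = {
        doc_id: alpha * dense.get(doc_id, 0.0) + (1 - alpha) * sparse.get(doc_id, 0.0)
        for doc_id in set(dense) | set(sparse)
    }
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    # An alpha near 1.0 leans on the dense retriever; near 0.0 defers to lexical matching.
    dense = {"d1": 0.82, "d2": 0.40, "d3": 0.77}
    sparse = {"d1": 3.1, "d2": 7.9, "d4": 5.2}
    print(hybrid_rank(dense, sparse, alpha=0.6, top_k=3))
```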

Sources

Debate-Driven Multi-Agent LLMs for Phishing Email Detection

Preference-based Learning with Retrieval Augmented Generation for Conversational Question Answering

DAT: Dynamic Alpha Tuning for Hybrid Retrieval in Retrieval-Augmented Generation

A Multi-Agent Framework with Automated Decision Rule Optimization for Cross-Domain Misinformation Detection

Memory-Aware and Uncertainty-Guided Retrieval for Multi-Hop Question Answering

An Analysis of Decoding Methods for LLM-based Agents for Faithful Multi-Hop Question Answering

Better wit than wealth: Dynamic Parametric Retrieval Augmented Generation for Test-time Knowledge Enhancement

Combining Query Performance Predictors: A Reproducibility Study

Contradiction Detection in RAG Systems: Evaluating LLMs as Context Validators for Improved Information Consistency

Insight-RAG: Enhancing LLMs with Insight-Driven Augmentation

Training a Utility-based Retriever Through Shared Context Attribution for Retrieval-Augmented Language Models

Self-Routing RAG: Binding Selective Retrieval with Knowledge Verbalization

Uncovering the Limitations of Query Performance Prediction: Failures, Insights, and Implications for Selective Query Processing

Prompt-Reverse Inconsistency: LLM Self-Inconsistency Beyond Generative Randomness and Prompt Paraphrasing

Scaling Test-Time Inference with Policy-Optimized, Dynamic Retrieval-Augmented Generation via KV Caching and Decoding

CoRAG: Collaborative Retrieval-Augmented Generation

One Pic is All it Takes: Poisoning Visual Document Retrieval Augmented Generation with a Single Image

Adapting Large Language Models for Multi-Domain Retrieval-Augmented-Generation
