Mitigating Hallucinations in Large Language Models

The field of large language models is seeing significant progress on the problem of hallucinations, that is, the generation of non-factual or misleading content. Recent work spans both probing and mitigation: adversarial queries built on linguistic nuances expose factual vulnerabilities, while bounded input perturbations (for attributing model outputs) and noise-augmented fine-tuning aim to improve robustness. Collectively, these efforts seek to make large language models more factually accurate, reliable, and trustworthy. Noteworthy papers include The Illusionist's Prompt, which introduces a hallucination attack that weaves linguistic nuances into adversarial queries, and Noise-Augmented Fine-Tuning, which uses adaptive noise injection during training to harden models. Other notable works include TARAC, which mitigates hallucinations in large vision-language models through a temporal attention real-time accumulative connection, and HalluciNot, a system that detects hallucinations in model outputs by verifying them against context and common knowledge.
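
To make the noise-augmented fine-tuning idea more concrete, the sketch below shows one generic way adaptive noise injection could be wired into training: Gaussian noise whose scale tracks each token embedding's norm is added to the input embeddings while the model is in training mode. The `NoisyEmbedding` wrapper, the `noise_alpha` hyperparameter, and the scaling rule are illustrative assumptions, not the exact formulation used in the Noise-Augmented Fine-Tuning paper.

```python
import torch
import torch.nn as nn


class NoisyEmbedding(nn.Module):
    """Wrap an embedding layer and inject scaled Gaussian noise during training.

    The noise magnitude adapts to each token embedding's norm, so larger
    activations receive proportionally larger perturbations. This is a
    generic illustration of adaptive noise injection, not the cited paper's
    specific scheme.
    """

    def __init__(self, embedding: nn.Embedding, noise_alpha: float = 5.0):
        super().__init__()
        self.embedding = embedding
        self.noise_alpha = noise_alpha  # hypothetical strength hyperparameter

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        embeds = self.embedding(input_ids)  # (batch, seq_len, hidden)
        if self.training:
            seq_len, hidden = embeds.shape[-2], embeds.shape[-1]
            # Shrink the base scale with sqrt(seq_len * hidden) so the total
            # perturbation stays roughly comparable across sequence lengths.
            base_scale = self.noise_alpha / (seq_len * hidden) ** 0.5
            per_token_norm = embeds.norm(dim=-1, keepdim=True)  # adaptive part
            noise = torch.randn_like(embeds) * base_scale * per_token_norm
            embeds = embeds + noise
        return embeds


# Usage sketch (assuming a Hugging Face-style model object):
# model.set_input_embeddings(NoisyEmbedding(model.get_input_embeddings()))
# ...then fine-tune as usual; noise is only applied while model.training is True.
```

At inference time the wrapper behaves exactly like the original embedding layer, so the perturbation acts purely as a training-time regularizer.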

Sources

The Illusionist's Prompt: Exposing the Factual Vulnerabilities of Large Language Models with Linguistic Nuances

Noiser: Bounded Input Perturbations for Attributing Large Language Models

Noise Augmented Fine Tuning for Mitigating Hallucinations in Large Language Models

Hallucination Detection on a Budget: Efficient Bayesian Estimation of Semantic Entropy

A Unified Virtual Mixture-of-Experts Framework: Enhanced Inference and Hallucination Mitigation in Single-Model System

TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection

Hallucination Detection using Multi-View Attention Features

Feedback-Enhanced Hallucination-Resistant Vision-Language Model for Real-Time Scene Understanding

Hybrid Retrieval for Hallucination Mitigation in Large Language Models: A Comparative Analysis

Don't Let It Hallucinate: Premise Verification via Retrieval-Augmented Logical Reasoning

HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification
