Multilingual Advances in Language Models

Recent work in natural language processing continues to push the multilingual capabilities of language models. Current research addresses three related challenges: achieving attributability in multilingual table-to-text generation, analyzing geopolitical bias in large language models, and developing methods for multilingual preference alignment. Intermediate planning techniques, such as question-answer blueprints, show promise for improving attributability, but translating them to low-resource languages remains difficult. Geopolitical bias has emerged as a significant issue, with models often reflecting the perspectives of their developers; in response, researchers are building methods to evaluate and mitigate such bias. Noteworthy papers include:

  • Mapping Geopolitical Bias in 11 Large Language Models, which systematically analyzes geopolitical bias across prominent large language models.
  • CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment, which proposes a scalable and effective filtering method for multilingual preference alignment.
  • Subasa -- Adapting Language Models for Low-resourced Offensive Language Detection in Sinhala, which introduces fine-tuning strategies for offensive language detection in low-resource languages.
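The CONGRAD paper's exact procedure is not reproduced here, but the general idea behind gradient-conflict filtering can be sketched: compare each training example's gradient with a consensus direction (here, the mean gradient across examples) and discard examples whose gradient points against it. This is a minimal, hypothetical illustration; the function name, the use of the mean as the consensus, and the dot-product test are assumptions, not the paper's method.

```python
import numpy as np

def filter_conflicting(grads: np.ndarray) -> np.ndarray:
    """Return indices of gradients that do NOT conflict with the mean
    gradient, i.e. whose dot product with it is non-negative.

    Illustrative sketch only -- not the CONGRAD algorithm itself.
    """
    mean_grad = grads.mean(axis=0)      # consensus direction across examples
    scores = grads @ mean_grad          # alignment of each gradient with it
    return np.where(scores >= 0)[0]    # keep non-conflicting examples

# Toy example: three per-example gradients in a 2-D parameter space.
grads = np.array([
    [1.0, 0.5],    # aligned with the consensus direction
    [0.8, 0.4],    # aligned
    [-1.0, -0.6],  # conflicts: points opposite the others
])
keep = filter_conflicting(grads)  # → array([0, 1]): the conflicting example is dropped
```

In a multilingual preference-alignment setting, the same test could be applied across per-language gradients to filter examples whose updates would degrade other languages, which is the motivation the paper's title suggests.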

Sources

The Challenge of Achieving Attributability in Multilingual Table-to-Text Generation with Question-Answer Blueprints

Mapping Geopolitical Bias in 11 Large Language Models: A Bilingual, Dual-Framing Analysis of U.S.-China Tensions

CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment

Do Chinese models speak Chinese languages?

On the Consistency of Multilingual Context Utilization in Retrieval-Augmented Generation

Subasa -- Adapting Language Models for Low-resourced Offensive Language Detection in Sinhala

The Hidden Space of Safety: Understanding Preference-Tuned LLMs in Multilingual context
