The field of natural language processing is moving toward improving the multilingual capabilities of language models. Recent research has focused on achieving attributability in multilingual table-to-text generation, analyzing geopolitical bias in large language models, and developing methods for multilingual preference alignment. Intermediate planning techniques, such as question-answer blueprints, have shown promise for improving attributability, but they also highlight the difficulty of transferring such techniques to low-resource languages. Geopolitical bias has emerged as a significant issue in large language models, with models often reflecting the biases of their developers; in response, researchers are developing methods to evaluate and mitigate such bias. Noteworthy papers include:
- Mapping Geopolitical Bias in 11 Large Language Models, which systematically analyzes geopolitical bias across prominent large language models.
- CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment, which proposes a scalable and effective filtering method for multilingual preference alignment.
- Subasa -- Adapting Language Models for Low-resourced Offensive Language Detection in Sinhala, which introduces fine-tuning strategies for offensive language detection in low-resource languages.
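The idea behind conflicting-gradient filtering can be illustrated with a minimal sketch. Note that CONGRAD's actual selection criterion is not described here; this example simply assumes a generic rule in the same spirit: drop training samples whose per-sample gradient points against the aggregate (mean) gradient direction, as measured by cosine similarity. The function name and threshold are illustrative, not from the paper.

```python
import numpy as np

def filter_conflicting_gradients(grads, threshold=0.0):
    """Keep samples whose gradient does not conflict with the batch mean.

    grads: (n_samples, n_params) array of per-sample gradients.
    A sample is considered "conflicting" when the cosine similarity
    between its gradient and the mean gradient falls below `threshold`
    (an assumed criterion for illustration).
    Returns the indices of retained samples.
    """
    mean_grad = grads.mean(axis=0)
    mean_norm = np.linalg.norm(mean_grad) + 1e-12
    sample_norms = np.linalg.norm(grads, axis=1) + 1e-12
    sims = (grads @ mean_grad) / (sample_norms * mean_norm)
    return np.where(sims >= threshold)[0]

# Toy example: two roughly aligned gradients and one opposing gradient.
grads = np.array([
    [1.0, 0.5],
    [0.8, 0.6],
    [-1.0, -0.7],  # points against the mean direction, so it is filtered
])
kept = filter_conflicting_gradients(grads)
print(kept)  # indices of the non-conflicting samples
```

In a multilingual preference-alignment setting, the same idea would be applied to gradients from different languages' preference data, retaining only updates that do not pull the model in opposing directions.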