Advancements in Bias Mitigation and Knowledge Extraction in NLP

Recent developments in natural language processing (NLP) and large language models (LLMs) have focused heavily on addressing biases and improving output quality. A notable trend is the use of multi-agent frameworks and multi-objective approaches to mitigate social and political biases in LLMs without compromising performance, including causal interventions on bias-related content and structural debiasing architectures that target various types of dataset artifacts. There is also growing interest in improving the interpretability and fairness of LLMs through systematic measurement and analysis of bias across contexts and topics. Another area of advancement is the identification and processing of multiword expressions (MWEs) and the extraction of knowledge from social conversations, both crucial for improving machine translation and conversational agents. Finally, new datasets and benchmarks, such as those for survey item linking and forecaster consistency checks, are enabling more reliable and comprehensive evaluations of NLP systems.

Noteworthy Papers

  • Mitigating Social Bias in Large Language Models: Introduces a multi-agent framework that significantly reduces bias while maintaining accuracy.
  • Enriching Social Science Research via Survey Item Linking: Presents a novel approach to automatically link survey items, enhancing research comparability.
  • Multi-head attention debiasing and contrastive learning for mitigating Dataset Artifacts in Natural Language Inference: Offers a structural debiasing approach that improves handling of neutral relationships in NLI.
  • Unpacking Political Bias in Large Language Models: Systematically measures political biases in LLMs, revealing distinct response patterns across topics.
  • CoAM: Corpus of All-Type Multiword Expressions: Introduces a comprehensive dataset for MWE identification, enabling fine-grained error analysis.
  • Extracting triples from dialogues for conversational social agents: Addresses the challenge of extracting knowledge from social conversations, highlighting the complexity of conversational data.
  • Consistency Checks for Language Model Forecasters: Proposes a new consistency metric for evaluating LLM forecasters, providing an instantaneous benchmarking tool.
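To make the triple-extraction task concrete: the goal is to turn free-form dialogue turns into (subject, predicate, object) triples that a conversational agent can store. The paper's actual method is not reproduced here; the following is a minimal, hypothetical pattern-based sketch (the regex and function names are illustrative only) that also shows why conversational data is hard, since turns that don't fit a declarative pattern yield nothing.

```python
import re
from typing import List, Tuple

Triple = Tuple[str, str, str]

# Hypothetical pattern: matches simple declaratives like "X is/likes/has Y".
# Real dialogue (ellipsis, coreference, fillers) rarely fits such templates,
# which is exactly the difficulty the paper highlights.
PATTERN = re.compile(
    r"^(?P<subj>\w+) (?P<pred>is|likes|has) (?P<obj>.+?)\.?$", re.IGNORECASE
)

def extract_triples(turns: List[str]) -> List[Triple]:
    """Extract (subject, predicate, object) triples from dialogue turns."""
    triples = []
    for turn in turns:
        m = PATTERN.match(turn.strip())
        if m:
            triples.append((m["subj"].lower(), m["pred"].lower(), m["obj"].lower()))
    return triples

dialogue = ["Alice likes jazz.", "Hmm, interesting!", "Bob has a dog."]
print(extract_triples(dialogue))
# [('alice', 'likes', 'jazz'), ('bob', 'has', 'a dog')]
```

Note how the filler turn "Hmm, interesting!" produces no triple, illustrating how much of a social conversation carries no directly extractable knowledge.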
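The consistency-check idea can be illustrated with one classic example (not the paper's specific metric): a coherent forecaster's probabilities for an event and its complement must sum to one, so the size of the violation can be scored without waiting for the event to resolve. The function names and example forecasts below are hypothetical.

```python
from typing import Dict, Tuple

def complement_violation(p_event: float, p_complement: float) -> float:
    """Additive violation |P(A) + P(not A) - 1|; a coherent forecaster scores 0."""
    return abs(p_event + p_complement - 1.0)

def average_violation(forecasts: Dict[str, Tuple[float, float]]) -> float:
    """Mean complement violation over paired forecasts; lower is more consistent."""
    if not forecasts:
        return 0.0
    return sum(complement_violation(p, q) for p, q in forecasts.values()) / len(forecasts)

forecasts = {
    "rain_tomorrow": (0.7, 0.30),  # coherent: sums to 1.00
    "election_win":  (0.6, 0.55),  # incoherent: sums to 1.15
}
print(average_violation(forecasts))  # → 0.075
```

Because the score depends only on the forecasts themselves, it serves as an instantaneous benchmark, in contrast to resolution-based evaluation that must wait for outcomes.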

Sources

Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework

Enriching Social Science Research via Survey Item Linking

Multi-head attention debiasing and contrastive learning for mitigating Dataset Artifacts in Natural Language Inference

Unpacking Political Bias in Large Language Models: Insights Across Topic Polarization

CoAM: Corpus of All-Type Multiword Expressions

Extracting triples from dialogues for conversational social agents

Consistency Checks for Language Model Forecasters
