Advances in Mitigating Bias in Large Language Models

Research on large language models (LLMs) is increasingly focused on detecting and mitigating bias. Recent work documents pervasive bias across models and tasks; in clinical applications, such bias can contribute to misdiagnosis, inappropriate treatment, and a widening of health inequities. Proposed mitigations include building high-quality benchmarking datasets, cognitive (self-)debiasing methods, and analyses of stereotype-aware unfairness in LLM-based recommendation. Noteworthy papers include 'Bias in Large Language Models Across Clinical Applications: A Systematic Review', which finds pervasive bias across a range of LLMs and clinical tasks, and 'Cognitive Debiasing Large Language Models for Decision-Making', which proposes a self-debiasing method to improve the reliability of LLM decision-making. 'Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social Psychological Underpinnings' and 'Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations' also make significant contributions to this area.
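
To make the idea of self-debiasing concrete, below is a minimal sketch of an iterative critique-and-revise prompting loop. It assumes only a generic chat-completion callable (here a hypothetical `ask_llm` parameter) and is not the actual procedure from 'Cognitive Debiasing Large Language Models for Decision-Making', which may differ substantially.

```python
# Hypothetical sketch of a self-debiasing prompt loop: the model is asked to
# name cognitive biases in its own answer and then revise the answer.
# `ask_llm` is a placeholder for any chat-completion call, not a real API.
from typing import Callable

def self_debias(question: str, ask_llm: Callable[[str], str], rounds: int = 2) -> str:
    """Ask the model to inspect its answer for cognitive biases and revise it."""
    answer = ask_llm(question)
    for _ in range(rounds):
        # Step 1: elicit a critique naming possible biases in the current answer.
        critique = ask_llm(
            "List any cognitive biases (e.g., anchoring, framing, availability) "
            f"that may have influenced this answer.\n\nQ: {question}\nA: {answer}"
        )
        # Step 2: ask for a revised answer that addresses the identified biases.
        answer = ask_llm(
            "Revise the answer to remove the biases identified below.\n\n"
            f"Q: {question}\nPrevious answer: {answer}\nIdentified biases: {critique}"
        )
    return answer

if __name__ == "__main__":
    # Trivial stand-in model so the sketch runs end to end.
    stub_model = lambda prompt: "stub response"
    print(self_debias("Should the loan be approved?", stub_model))
```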

Sources

TheBlueScrubs-v1, a comprehensive curated medical dataset derived from the internet

Bias in Large Language Models Across Clinical Applications: A Systematic Review

Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social Psychological Underpinnings

Cognitive Debiasing Large Language Models for Decision-Making

AiReview: An Open Platform for Accelerating Systematic Reviews with LLMs

Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations

Investigating Popularity Bias Amplification in Recommender Systems Employed in the Entertainment Domain

Leveraging Large Language Models for Cost-Effective, Multilingual Depression Detection and Severity Assessment

Dr Web: a modern, query-based web data retrieval engine

Unequal Opportunities: Examining the Bias in Geographical Recommendations by Large Language Models

Why is Normalization Necessary for Linear Recommenders?

On the Effectiveness and Generalization of Race Representations for Debiasing High-Stakes Decisions
