Enhancing Fairness and Robustness in NLP and ML Models

Recent work in natural language processing and machine learning centers on detecting and mitigating bias and on improving model robustness at every stage of the pipeline, from data generation to evaluation. On the data side, researchers are asking whether LLM-elicited natural language inference datasets carry the same hypothesis-only artifacts as human-written ones, and are exposing gender disparities in machine translation quality estimation. For practical deployments, new conversational text-to-SQL benchmarks deliberately include ambiguous and unanswerable queries, which systems must recognize and handle rather than answer blindly. Fairness in vision-language models is drawing similar scrutiny, with new methodologies proposed to detect and mitigate bias more reliably. Finally, modeling future conversation turns is emerging as a way to teach large language models when and how to ask clarifying questions, improving user interaction and satisfaction. Together, these efforts make current systems more reliable and equitable for real-world use.
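
The hypothesis-only probe behind the first source is easy to illustrate. The sketch below is a minimal toy version of that standard diagnostic, not code from the paper; the six examples and the TF-IDF/logistic-regression setup are assumptions chosen for brevity. A classifier trained on hypotheses alone should sit at chance on a clean dataset, so accuracy well above chance signals annotation artifacts rather than genuine inference.

```python
# Minimal toy version of the standard hypothesis-only diagnostic for NLI:
# train a classifier that never sees the premise. If it beats chance,
# the labels are partly predictable from hypothesis wording alone,
# i.e., the dataset contains annotation artifacts.
# The six examples below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

hypotheses = [
    "A man is sleeping.",             # "sleeping" often cues contradiction
    "Nobody is outside.",             # negation words often cue contradiction
    "A person is outdoors.",          # generic paraphrases often cue entailment
    "Someone is doing something.",    # vague hypotheses often cue entailment
    "The dog is eating a sandwich.",  # specific added detail often cues neutral
    "A woman is very rich.",          # unstated attributes often cue neutral
]
labels = ["contradiction", "contradiction", "entailment",
          "entailment", "neutral", "neutral"]

X_train, X_test, y_train, y_test = train_test_split(
    hypotheses, labels, test_size=0.33, random_state=0)

vectorizer = TfidfVectorizer().fit(X_train)
clf = LogisticRegression(max_iter=1000).fit(
    vectorizer.transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("hypothesis-only accuracy:", accuracy_score(y_test, preds))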
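
The final source's idea of modeling future conversation turns can likewise be sketched at a high level. The code below is a hedged illustration of the general recipe, not the paper's actual method or prompts: `call_model` and `rate_answer` are hypothetical stand-ins for an LLM call and an answer-quality judge, and all prompts are invented.

```python
# Hedged sketch of modeling future conversation turns: for each candidate
# clarifying question, simulate the user's likely reply and the answer
# that would follow, then ask only if that beats answering immediately.
# `call_model` and `rate_answer` are hypothetical stand-ins, not real APIs.

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned string."""
    return "stub completion for: " + prompt

def rate_answer(query: str, answer: str) -> float:
    """Toy answer-quality judge: word overlap with the query."""
    return len(set(query.lower().split()) & set(answer.lower().split()))

def decide_turn(query: str, candidate_questions: list[str]) -> tuple[str, str]:
    # Baseline: answer the query immediately, with no clarification.
    direct = call_model(f"Answer the user's query: {query}")
    best, best_score = ("answer", direct), rate_answer(query, direct)
    for question in candidate_questions:
        # Simulated future turn: what would the user plausibly reply?
        reply = call_model(f"You asked '{query}'. Reply to: {question}")
        # Answer conditioned on the simulated clarification.
        answer = call_model(f"Query: {query}. Clarification: {reply}. Answer:")
        score = rate_answer(query, answer)
        if score > best_score:
            best, best_score = ("clarify", question), score
    return best

print(decide_turn("Book a table for tonight",
                  ["For how many people?", "Which restaurant?"]))
```

In a real system the stubs would be replaced by actual model calls and a learned judge; the simulated-reply step is what lets the policy compare "ask now" against "answer now" before committing to a turn.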

Sources

Hypothesis-only Biases in Large Language Model-Elicited Natural Language Inference

Watching the Watchers: Exposing Gender Disparities in Machine Translation Quality Estimation

PRACTIQ: A Practical Conversational Text-to-SQL dataset with Ambiguous and Unanswerable Queries

Mapping Bias in Vision Language Models: Signposts, Pitfalls, and the Road Ahead

Modeling Future Conversation Turns to Teach LLMs to Ask Clarifying Questions
