Advancing Fairness and Robustness in Machine Learning

Research on machine learning fairness and robustness has advanced considerably, particularly in mitigating bias and ensuring equitable performance across subpopulations. A notable trend is the development of methods that both mitigate bias and harden models against adversarial attacks, so that fairness is maintained across demographic groups. This is evident in techniques such as Fair Distillation, which leverages biased 'teacher' models to guide the training of a fair 'student' model, and Learning Fair Robustness via Domain Mixup, which uses mixup to balance robustness across classes. There is also growing emphasis on evaluating bias mitigation techniques under diverse conditions to enable fair comparison, as highlighted by the work on comparing bias mitigation algorithms. Together, these innovations push the field toward more equitable and robust machine learning models across a wide range of domains, including medical imaging and environmental health.
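The mixup operation mentioned above can be sketched briefly: each training example is convexly combined with a randomly paired example, with the interpolation weight drawn from a Beta distribution. The sketch below is a minimal, generic illustration of standard mixup in NumPy; the function name, array shapes, and `alpha` default are assumptions for illustration, not details taken from the Domain Mixup paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Generic mixup sketch (illustrative, not the paper's exact method).

    x: (n, d) feature batch; y: (n, k) one-hot label batch.
    Returns convex combinations of randomly paired examples.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)       # interpolation weight in [0, 1]
    perm = rng.permutation(len(x))     # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix, lam
```

Because labels are mixed with the same weight as features, each mixed label row remains a valid probability distribution, which is what lets a fairness- or robustness-oriented objective weight classes smoothly rather than discretely.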

Sources

SureMap: Simultaneous Mean Estimation for Single-Task and Multi-Task Disaggregated Evaluation

Combining Machine Learning Defenses without Conflicts

Towards a Fairer Non-negative Matrix Factorization

Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML

Feature Selection Approaches for Newborn Birthweight Prediction in Multiple Linear Regression Models

Fair Distillation: Teaching Fairness from Biased Teachers in Medical Imaging

Learning Fair Robustness via Domain Mixup
