Advancing Fairness, Interpretability, and Privacy in Machine Learning

Recent advances in this research area center on enhancing the fairness, interpretability, and robustness of machine learning models, particularly for image classification and automated decision making. A significant line of work mitigates biases within models, with approaches such as the Texture Association Value (TAV) for quantifying texture bias and the directional predictability amplification (DPA) metric for measuring directional bias amplification in balanced datasets. There is also growing emphasis on integrating fairness into probabilistic binary classification through methods like $\varepsilon_p$-Equalized ROC, which enforces fairness across protected groups regardless of the classification threshold. Privacy concerns are addressed through generalization techniques and differential privacy, with studies examining the interplay among privacy, utility, and fairness. The field is likewise advancing explainable AI, extracting PAC decision trees from black-box classifiers to provide theoretical guarantees of fidelity. Collectively, these developments aim to create more trustworthy and equitable AI systems that align with human values and societal needs. Notably, the application of artificial neural networks to boost human visual learning performance stands out as a pioneering effort to augment human capabilities through AI.
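To make the threshold-independence idea concrete: a fairness criterion like $\varepsilon_p$-Equalized ROC can be loosely understood as requiring the per-group ROC behavior to stay close at every operating threshold, not just at one deployed cutoff. The sketch below is illustrative only, assuming the exact criterion from the FROC paper is out of scope; the function names and the choice of the true-positive-rate gap as the measured quantity are hypothetical simplifications.

```python
def roc_points(scores, labels, thresholds):
    """(FPR, TPR) of the thresholded classifier at each threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg if neg else 0.0, tp / pos if pos else 0.0))
    return pts

def max_tpr_gap(scores_a, labels_a, scores_b, labels_b, thresholds):
    """Largest true-positive-rate gap between two protected groups,
    taken over all candidate thresholds (illustrative fairness proxy)."""
    pts_a = roc_points(scores_a, labels_a, thresholds)
    pts_b = roc_points(scores_b, labels_b, thresholds)
    return max(abs(ta - tb) for (_, ta), (_, tb) in zip(pts_a, pts_b))
```

For example, with toy scores for two groups and a single threshold of 0.5, a gap of 0.5 would indicate that one group's positives are recovered twice as often as the other's at that cutoff; a threshold-independent criterion asks that this gap stay below a tolerance $\varepsilon$ for every threshold.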

Sources

L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection And Enhancement

Extracting PAC Decision Trees from Black Box Binary Classifiers: The Gender Bias Study Case on BERT-based Language Models

Err on the Side of Texture: Texture Bias on Real Data

Linear Programming based Approximation to Individually Fair k-Clustering with Outliers

Making Bias Amplification in Balanced Datasets Directional and Interpretable

The Impact of Generalization Techniques on the Interplay Among Privacy, Utility, and Fairness in Image Classification

Fairness Shields: Safeguarding against Biased Decision Makers

FROC: Building Fair ROC from a Trained Classifier
