Advances in Fairness and Robustness in AI

The field of AI is moving toward resolving the inherent tension between robustness and fairness in machine learning models. Researchers are exploring approaches to reconcile the two, particularly under data corruption and demographic disparities. One notable direction is the integration of fairness-oriented strategies into existing robust learning algorithms, with the aim of equalizing performance across demographic groups. Another focus is the development of uncertainty quantification methods to evaluate predictive fairness in high-stakes applications such as healthcare. There is also growing emphasis on the intellectual property and ethical questions raised by generative AI models. Noteworthy papers include FairSAM, which proposes a framework for fair and robust image classification on corrupted data through sharpness-aware minimization; a conformal uncertainty quantification study evaluating the predictive fairness of a foundation AI model for skin lesion classification across patient demographics; and Putting GenAI on Notice, which argues that website terms of service prohibiting scraping are legally enforceable when a bot scrapes pages that include those terms.
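To make the sharpness-aware direction concrete, here is a minimal sketch of the generic SAM update (ascend to a nearby sharp point, then descend using the gradient computed there) applied to plain logistic regression on synthetic data. This illustrates only the base technique named in the FairSAM title; the paper's fairness-oriented modifications are not reproduced here, and all data and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the average logistic loss
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def sam_step(w, X, y, lr=0.1, rho=0.05):
    # Generic SAM update (illustrative hyperparameters):
    g = grad(w, X, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascend toward the sharp neighbor
    g_sharp = grad(w + eps, X, y)                # gradient at the perturbed weights
    return w - lr * g_sharp                      # descend with the "sharp" gradient

# Synthetic linearly separable problem (no bias term needed)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

w = np.zeros(5)
for _ in range(500):
    w = sam_step(w, X, y)

acc = ((X @ w > 0) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The only difference from ordinary gradient descent is that the descent gradient is evaluated at the adversarially perturbed weights, which biases optimization toward flat minima.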
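The uncertainty quantification direction can likewise be sketched with split conformal prediction: calibrate a score threshold on held-out data, build prediction sets on test data, and compare empirical coverage across demographic groups. This is a generic illustration on synthetic softmax outputs, not the cited paper's method or data; the group labels and model probabilities below are fabricated for the example.

```python
import numpy as np

def conformal_qhat(cal_probs, cal_labels, alpha=0.1):
    """Finite-sample-adjusted quantile of nonconformity scores 1 - p(true class)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_sets(test_probs, qhat):
    """A class y enters the set when its score 1 - p(y) is at most qhat."""
    return test_probs >= 1.0 - qhat

rng = np.random.default_rng(0)
n_cal, n_test, n_classes = 500, 500, 3

def make(n):
    # Synthetic "model": informative softmax probabilities plus labels
    logits = rng.normal(size=(n, n_classes))
    labels = rng.integers(0, n_classes, size=n)
    logits[np.arange(n), labels] += 1.5
    probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    return probs, labels

cal_probs, cal_labels = make(n_cal)
test_probs, test_labels = make(n_test)
groups = rng.integers(0, 2, size=n_test)  # synthetic demographic attribute

qhat = conformal_qhat(cal_probs, cal_labels, alpha=0.1)
sets = prediction_sets(test_probs, qhat)
covered = sets[np.arange(n_test), test_labels]

for g in (0, 1):
    print(f"group {g}: coverage {covered[groups == g].mean():.2f}")
```

Marginal coverage is guaranteed at roughly 1 - alpha, but the per-group printout is the fairness-relevant quantity: a gap between groups signals unequal predictive reliability even when overall coverage looks fine.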

Sources

FairSAM: Fair Classification on Corrupted Data Through Sharpness-Aware Minimization

Conformal uncertainty quantification to evaluate predictive fairness of foundation AI model for skin lesion classes across patient demographics

Imbalanced malware classification: an approach based on dynamic classifier selection

Putting GenAI on Notice: GenAI Exceptionalism and Contract Law

Unfair Learning: GenAI Exceptionalism and Copyright Law

Who Owns the Output? Bridging Law and Technology in LLMs Attribution

The Author Is Sovereign: A Manifesto for Ethical Copyright in the Age of AI
