Recent advances in fairness for machine learning have primarily focused on mitigating biases tied to sensitive attributes such as race, gender, and socioeconomic status. Researchers are increasingly adopting causal modeling frameworks to disentangle the influence of these attributes from clinically or operationally relevant features, so that predictions do not inherit the underlying biases. A notable trend is the integration of adversarial perturbation techniques with novel fairness criteria that suppress bias-inducing information while preserving predictive accuracy. Auxiliary variables and exogenous causal reasoning have also been explored as routes to counterfactual fairness, ensuring that model predictions remain consistent when the sensitive attribute is counterfactually altered.

In predictive process monitoring, the emphasis has shifted towards achieving group fairness through independence, with new metrics and composite loss functions proposed to balance fairness against predictive performance. Studies of how healthcare access disparities affect model performance have likewise highlighted the need for equitable data collection and algorithmic design, so that predictions remain reliable and fair across all patient groups. Overall, the field is progressing towards more nuanced and robust fairness measures, with a strong emphasis on causal inference and equitable data practices.
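To make the independence criterion concrete, here is a minimal sketch of such a composite loss, assuming a PyTorch setting with a binary label and a binary group attribute; `composite_loss` and `fairness_weight` are illustrative names, not the formulation of any surveyed paper.

```python
# A minimal sketch of a composite loss trading off predictive accuracy
# against group fairness via independence (demographic parity).
# Assumption: both groups are present in each batch; otherwise the
# per-group means below would be undefined (NaN).
import torch
import torch.nn.functional as F

def composite_loss(logits, labels, group, fairness_weight=1.0):
    """Binary cross-entropy plus a demographic-parity penalty.

    The penalty is the absolute gap between the mean predicted
    positive rate of group 0 and group 1; driving it to zero pushes
    predictions toward independence from the group attribute.
    `labels` is a float tensor with the same shape as `logits`.
    """
    pred_loss = F.binary_cross_entropy_with_logits(logits, labels)
    probs = torch.sigmoid(logits)
    rate_g0 = probs[group == 0].mean()   # positive rate, group 0
    rate_g1 = probs[group == 1].mean()   # positive rate, group 1
    fairness_penalty = (rate_g0 - rate_g1).abs()
    return pred_loss + fairness_weight * fairness_penalty
```

The scalar `fairness_weight` is the knob these composite objectives expose: larger values enforce independence more strictly at the cost of raw predictive performance.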
Noteworthy papers include one that introduces a causal modeling framework with a novel fairness criterion to mitigate bias in medical image analysis, and another that leverages exogenous causal reasoning to achieve counterfactual fairness by incorporating auxiliary variables.
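As an illustration of the counterfactual-fairness idea, the sketch below uses an assumed toy linear structural causal model, X = alpha * A + U, in which abduction of the exogenous noise U is exact; the cited work instead relies on learned causal models and auxiliary variables, and every name here (`counterfactual_gap`, `predict_fn`, `alpha`) is hypothetical.

```python
# A toy counterfactual-consistency check under an assumed linear SCM
# X = alpha * A + U, where A is a binary sensitive attribute and U is
# exogenous noise. Abduction is exact here: U = X - alpha * A.
import numpy as np

def counterfactual_gap(predict_fn, X, A, alpha):
    """Mean |factual prediction - counterfactual prediction| after
    flipping A while holding the exogenous noise U fixed."""
    U = X - alpha * A        # abduction: recover exogenous noise
    A_cf = 1 - A             # action: intervene on the attribute
    X_cf = alpha * A_cf + U  # prediction: regenerate the feature
    return np.abs(predict_fn(X, A) - predict_fn(X_cf, A_cf)).mean()

# Usage: under this SCM, a model that depends only on U is
# counterfactually fair (gap ~ 0), while one that reads X directly
# leaks A and shows a gap of about alpha.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=1000).astype(float)
alpha = 2.0
X = alpha * A + rng.normal(size=1000)
fair_model = lambda x, a: x - alpha * a   # depends only on U
unfair_model = lambda x, a: x             # leaks A through X
print(counterfactual_gap(fair_model, X, A, alpha))    # ~ 0.0
print(counterfactual_gap(unfair_model, X, A, alpha))  # ~ 2.0
```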