Advancements in Graph Neural Networks

Research on Graph Neural Networks (GNNs) is increasingly focused on fairness, robustness, and reliability. Methods for mitigating bias in GNNs include data sparsification, feature modification, and synthetic data augmentation. Work on certified robustness is refining randomized smoothing so that models retain high clean accuracy while also achieving certifiably robust accuracy. In graph anomaly detection, new frameworks integrate statistical and conformal risk control to provide guaranteed bounds on error rates. Together, these directions point toward more equitable and reliable graph-based AI systems. Noteworthy papers include:

  • A paper that proposes a framework toward more deployable certified robustness for GNNs, significantly improving both clean accuracy and certifiably robust accuracy (a minimal randomized-smoothing sketch follows this list).
  • A paper that introduces a framework for conformal risk control in supervised graph anomaly detection, providing theoretically guaranteed bounds on false negative and false positive rates (see the threshold-calibration sketch below).
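
To make the randomized-smoothing idea concrete, the sketch below shows a smoothed graph classifier that takes a majority vote over randomly perturbed copies of the input graph. It is a minimal illustration of the general technique, not the AuditVotes method itself: the base_classifier interface, the edge-deletion noise model, and the flip_prob and num_samples parameters are assumptions made for the example.

    import numpy as np

    def smoothed_predict(base_classifier, features, adjacency,
                         num_samples=1000, flip_prob=0.01, rng=None):
        # Majority-vote prediction of a randomly smoothed graph classifier.
        # base_classifier(features, adjacency) -> integer class label (assumed interface).
        # Noise model: delete each edge independently with probability flip_prob
        # (symmetry of undirected graphs is ignored for brevity).
        rng = np.random.default_rng() if rng is None else rng
        votes = {}
        for _ in range(num_samples):
            keep = rng.random(adjacency.shape) >= flip_prob
            noisy_adjacency = adjacency * keep
            label = base_classifier(features, noisy_adjacency)
            votes[label] = votes.get(label, 0) + 1
        # The smoothed classifier outputs the most frequent class; a lower confidence
        # bound on its vote share (e.g. Clopper-Pearson) then yields a certified radius.
        return max(votes, key=votes.get)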

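The conformal risk control idea can likewise be sketched as a simple threshold-calibration step: on a held-out labeled calibration set, pick the largest detection threshold whose adjusted empirical false negative rate stays below a target level, which bounds the expected test FNR under exchangeability. This is a minimal sketch of generic conformal risk control, not the CRC-SGAD framework; the score and label format and the FNR-only focus are assumptions for the example.

    import numpy as np

    def calibrate_fnr_threshold(cal_scores, cal_labels, alpha=0.1):
        # Conformal risk control for the false negative rate of a detector that
        # flags a node as anomalous when its score >= threshold.
        # cal_scores: anomaly scores on a labeled calibration set (higher = more anomalous).
        # cal_labels: 1 for true anomalies, 0 for normal nodes.
        anomaly_scores = np.sort(cal_scores[cal_labels == 1])
        n = len(anomaly_scores)
        best = -np.inf
        # Candidate thresholds: every observed anomaly score, plus -inf as a fallback.
        for lam in np.concatenate(([-np.inf], anomaly_scores)):
            miss_rate = np.mean(anomaly_scores < lam) if n > 0 else 0.0
            adjusted = (n * miss_rate + 1.0) / (n + 1)  # CRC correction with max loss B = 1
            if adjusted <= alpha:
                best = max(best, lam)
        # The largest threshold whose adjusted empirical FNR is <= alpha bounds
        # the expected test FNR under exchangeability.
        return best

At test time, nodes with scores at or above the returned threshold would be flagged as anomalous.
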
Sources

Comparing Methods for Bias Mitigation in Graph Neural Networks

AuditVotes: A Framework Towards More Deployable Certified Robustness for Graph Neural Networks

CRC-SGAD: Conformal Risk Control for Supervised Graph Anomaly Detection

Bridging the Theoretical Gap in Randomized Smoothing
