Graph Neural Network Security

Report on Current Developments in Graph Neural Network Security

General Direction of the Field

Recent advances in Graph Neural Network (GNN) security have focused predominantly on two threats: backdoor attacks and membership inference attacks. These developments matter because GNNs are increasingly deployed in critical real-world applications, which demands robust security measures to ensure their reliability and integrity.

Backdoor Attack Mitigation: The field is shifting toward model-agnostic, privacy-preserving defenses against backdoor attacks. These defenses aim to protect GNNs from malicious triggers that manipulate predictions, without requiring access to the model's internal architecture or retraining. The emphasis is on identifying and neutralizing backdoor triggers while preserving performance on legitimate tasks. This is particularly important when third-party models are employed: business owners can shield their services from backdoor threats without compromising privacy or performance.
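
One way to realize such black-box shielding is to vote over predictions on randomly subsampled inputs. The sketch below is illustrative only: `model_predict` is a hypothetical black-box API, and the subsample-and-vote scheme is one plausible strategy in this spirit, not the exact algorithm of GraphProt or Defense-as-a-Service.

```python
# Illustrative sketch of black-box backdoor shielding by subgraph voting.
# Assumption: `model_predict(nodes, edges)` is a hypothetical black-box
# API returning a class label for a graph given as a node list and an
# edge list; the voting scheme is a plausible strategy, not the exact
# algorithm of any cited defense.
import random
from collections import Counter

def random_subgraph(nodes, edges, keep_ratio=0.8):
    """Keep a random subset of nodes and the edges they induce."""
    kept = set(random.sample(nodes, max(1, int(len(nodes) * keep_ratio))))
    return sorted(kept), [(u, v) for u, v in edges if u in kept and v in kept]

def shielded_predict(model_predict, nodes, edges, n_votes=15):
    """Majority vote over predictions on random subgraphs.

    Intuition: a localized trigger subgraph survives in only some
    subsamples, so the clean label tends to dominate the vote.
    """
    votes = Counter(
        model_predict(*random_subgraph(nodes, edges)) for _ in range(n_votes)
    )
    return votes.most_common(1)[0][0]
```

Because this style of defense only queries the model, it fits the third-party deployment setting described above.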

Membership Inference Attack Defense: Another critical focus is defending against membership inference attacks, which aim to determine whether a specific data sample was part of a model's training set. Recent work has highlighted the vulnerability of over-parameterized models to such attacks and has introduced adaptive sparsity tuning to mitigate the risk. The goal is to balance model utility against privacy by selectively penalizing parameters that contribute most to privacy leakage, keeping the model robust against privacy attacks while maintaining overall performance.
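
To make the threat concrete, the simplest membership inference attack thresholds the per-sample loss: over-parameterized models fit training points tightly, so an unusually low loss is weak evidence that the sample was seen during training. The sketch below assumes a hypothetical `loss_fn` helper and a pre-calibrated threshold; practical attacks (e.g., shadow-model attacks) are more elaborate.

```python
# Minimal loss-threshold membership inference attack (illustrative).
# Assumptions: `loss_fn(model, x, y)` is a hypothetical helper returning
# the per-sample loss; `threshold` would be calibrated on held-out data.
def infer_membership(loss_fn, model, x, y, threshold=0.5):
    """Guess 'member' when the per-sample loss is unusually low."""
    return loss_fn(model, x, y) < threshold
```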

Noteworthy Innovations

  • GCleaner: Introduces a novel backdoor mitigation method for GNNs, effectively reducing the backdoor attack success rate while preserving model performance.
  • GraphProt: Proposes a model-agnostic defense against backdoor attacks, leveraging subgraph information to mitigate trigger effects without requiring model access.
  • PAST (Privacy-aware Sparsity Tuning): Achieves state-of-the-art results in balancing privacy and utility by adaptively tuning sparsity in parameters that pose high privacy risks (see the sketch after this list).
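
The following is a minimal sketch of privacy-aware sparsity tuning, assuming a per-parameter risk score (e.g., a gradient-magnitude proxy for memorization) and an adaptive weighted-L1 penalty; PAST's exact risk criterion and penalty form may differ.

```python
# Hedged sketch of privacy-aware sparsity tuning in PyTorch.
# Assumptions: `risk_scores` maps parameter name -> per-weight risk
# estimate (a hypothetical proxy such as gradient magnitude on the
# training set); the weighted-L1 form may differ from PAST's.
import torch

def privacy_aware_sparsity_penalty(model, risk_scores, lam=1e-4):
    """Adaptive L1 penalty: weights with higher estimated privacy
    risk receive a stronger push toward zero."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (risk_scores[name] * p.abs()).sum()
    return lam * penalty

# In a training step (sketch):
#   loss = task_loss + privacy_aware_sparsity_penalty(model, risk_scores)
#   loss.backward(); optimizer.step()
```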

These advancements collectively push the boundaries of GNN security, offering robust responses to emerging threats and paving the way for more secure and reliable GNN applications.

Sources

"No Matter What You Do!": Mitigating Backdoor Attacks in Graph Neural Networks

LinkThief: Combining Generalized Structure Knowledge with Node Similarity for Link Stealing Attack against GNN

Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models

Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning

Provable Privacy Attacks on Trained Shallow Neural Networks
