Neural Network Security and Privacy: Advancing Adversarial Robustness and Data Protection

Recent work in neural network security and privacy has made significant progress in understanding and countering adversarial threats. The research community is increasingly focused on model inversion attacks, in which sensitive training data is reconstructed from trained models, underscoring a fundamental vulnerability of neural networks. Cryptanalytic extraction methods have been extended to non-fully connected deep neural networks, achieving high-fidelity model replication, and neural networks themselves are being explored as components of cryptographic schemes, with the potential to yield dynamic and computationally efficient encryption.

Side-channel attacks are advancing as well: practical implementations now reconstruct sensitive data from physical devices such as 3D printers with high accuracy. The introduction of multimodal backdoor-learning toolkits and benchmarks is streamlining the evaluation of backdoor defense methods and pushing the field toward more systematic, standardized research practices. Meanwhile, critical examinations of malware prevention in Linux distributions are exposing the shortcomings of current open-source tools, and theoretical analysis combined with practical experiments is revealing how training graphs can leak from graph neural networks, opening new directions in data privacy.

Overall, the field is moving toward more robust and comprehensive strategies for securing neural networks and protecting sensitive data, with particular emphasis on adversarial robustness and privacy-preserving techniques.
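To make the model inversion threat concrete, below is a minimal sketch of the classic gradient-based inversion idea: given white-box access to a trained classifier, an attacker optimizes an input until the model assigns high confidence to a chosen class, recovering a class-representative reconstruction. This is an illustrative assumption-laden example, not the method of any specific paper listed here; the input shape, hyperparameters, and PyTorch usage are all hypothetical choices.

```python
# Minimal sketch of gradient-based model inversion (illustrative only).
# Assumes white-box access to a trained PyTorch classifier `model` whose
# input shape and value range are known; all hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def invert_class(model, target_class, shape=(1, 1, 28, 28), steps=500, lr=0.1):
    model.eval()
    x = torch.zeros(shape, requires_grad=True)  # start from a blank input
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Push the model toward the target class; a small L2 penalty acts as
        # a crude prior that keeps the reconstruction in a plausible range.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss = loss + 1e-3 * x.pow(2).sum()
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep pixel values in the valid image range
    return x.detach()
```

Even this bare-bones loop can recover recognizable class prototypes from overfit models, which is why the survey work cited below catalogs both stronger attack variants and countermeasures.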

Sources

Model Inversion Attacks: A Survey of Approaches and Countermeasures

A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks

Transformers -- Messages in Disguise

Practitioner Paper: Decoding Intellectual Property: Acoustic and Magnetic Side-channel Attack on a 3D Printer

BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation

A Study of Malware Prevention in Linux Distributions

Stealing Training Graphs from Graph Neural Networks

Countering Backdoor Attacks in Image Recognition: A Survey and Evaluation of Mitigation Strategies

Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization

libcll: an Extendable Python Toolkit for Complementary-Label Learning

Combinational Backdoor Attack against Customized Text-to-Image Models

Trojan Cleansing with Neural Collapse

Bounding-box Watermarking: Defense against Model Extraction Attacks on Object Detectors

AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection

Whack-a-Chip: The Futility of Hardware-Centric Export Controls
