Balancing Privacy and Robustness in Machine Learning Models

Recent work in this area concentrates on making machine learning models both robust to adversarial threats and protective of private data. A central trend is techniques that balance privacy protection against model utility, addressing adversarial robustness and data leakage together. Key contributions include defenses that use data poisoning to blunt model inversion attacks, frameworks that optimize over a latent space for adversarial purification (sketched below), and systems that detect insider threats with high precision via deep contextual anomaly detection. There is also a shift toward domain-agnostic reverse engineering of black-box model attributes and toward query-driven cardinality estimation that stays robust under out-of-distribution workloads. Collectively, these results advance model security and underline the value of adaptive, retraining-free defenses. The work on adversarial purification via latent-space optimization and the high-precision insider threat detection system stand out for their innovation and practical applicability.
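
Of the techniques named above, latent-space purification is the most self-contained to illustrate. The sketch below shows the general idea only: treat a pretrained generator as a proxy for the data manifold, optimize a latent code so the reconstruction matches the (possibly adversarial) input, and classify the reconstruction instead of the raw input. ToyDecoder, purify, and every hyperparameter here are illustrative placeholders, not the cited paper's architecture or its consistency-aware objective.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained decoder standing in for a generative model whose
# range approximates the data manifold. A real defense would load trained
# weights; this toy network only makes the sketch runnable.
class ToyDecoder(nn.Module):
    def __init__(self, latent_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def purify(x, decoder, latent_dim=64, steps=200, lr=0.05, prior_weight=1e-3):
    """Project a (possibly adversarial) input onto the decoder's output
    manifold by optimizing over the latent code, then return the
    reconstruction as the purified input."""
    # Freeze the generator: only the latent code is optimized at inference.
    for p in decoder.parameters():
        p.requires_grad_(False)
    z = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = decoder(z)
        # Reconstruction term pulls G(z) toward x; the quadratic prior term
        # keeps z in a high-density region of the latent space.
        loss = ((recon - x) ** 2).sum(dim=1).mean() \
             + prior_weight * (z ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()

if __name__ == "__main__":
    decoder = ToyDecoder().eval()
    x_adv = torch.rand(8, 784)        # stand-in for adversarial images
    x_pure = purify(x_adv, decoder)   # classify x_pure instead of x_adv
    print(x_pure.shape)               # torch.Size([8, 784])
```

Because only the latent code is optimized at inference time, and the classifier and generator stay frozen, this kind of defense is retraining-free in the sense used above.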

Sources

DeMem: Privacy-Enhanced Robust Adversarial Learning via De-Memorization

DREAM: Domain-agnostic Reverse Engineering Attributes of Black-box Model

CardOOD: Robust Query-driven Cardinality Estimation under Out-of-Distribution

Adversarial Transferability in Deep Denoising Models: Theoretical Insights and Robustness Enhancement via Out-of-Distribution Typical Set Sampling

Facade: High-Precision Insider Threat Detection Using Deep Contextual Anomaly Detection

Defending Against Neural Network Model Inversion Attacks via Data Poisoning

Adversarial Purification by Consistency-aware Latent Space Optimization on Data Manifolds

Training Data Reconstruction: Privacy due to Uncertainty?

Deep Learning Model Security: Threats and Defenses

On the Generation and Removal of Speaker Adversarial Perturbation for Voice-Privacy Protection

A Semi Black-Box Adversarial Bit-Flip Attack with Limited DNN Model Information
