Granular Privacy Guarantees in Machine Learning Inference

Recent developments in privacy-preserving machine learning show a shift toward more nuanced, granular privacy guarantees, particularly at the inference stage: protecting user data not only during training, but also when users interact with a deployed model. One line of work introduces new privacy notions such as Inference Privacy (IP), which offers rigorous guarantees for user data during inference, in contrast to traditional Local Differential Privacy (LDP). Within this setting, input and output perturbation mechanisms let users customize their own privacy-utility trade-offs.

A second line of work assesses and mitigates privacy risks within the intermediate layers of deep learning models, moving beyond traditional output-focused assessments. Approaches based on Degrees of Freedom (DoF) and the rank of the Jacobian matrix have been proposed to measure privacy risk systematically, layer by layer.

Symbolic methods for computing information-theoretic measures of leakage in probabilistic programs are also gaining traction, providing a more complete picture of information flow in data-intensive systems. Finally, advances in privacy auditing are narrowing the gap between theoretical and empirical privacy guarantees, with new adversarial sample-based approaches offering tighter audits when only the final trained model is released.
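
To make the input-perturbation idea concrete, here is a minimal sketch of a user-side randomizer built on the standard Gaussian mechanism; the function name and noise calibration are illustrative assumptions, not the specific mechanisms of the cited paper.

```python
import numpy as np

def gaussian_input_perturbation(x, sensitivity, epsilon, delta, rng=None):
    """Add Gaussian noise to a feature vector before it leaves the user's device.

    Noise is calibrated with the classic analytic bound
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon (valid for epsilon < 1);
    a smaller epsilon means more noise, i.e. more privacy and less utility.
    """
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
    return x + rng.normal(0.0, sigma, size=x.shape)

# Each user picks their own operating point on the privacy-utility curve.
x = np.array([0.2, -1.3, 0.7])
noisy_x = gaussian_input_perturbation(x, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```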
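
The layer-wise risk measurement can be sketched in a similar spirit: below, the numerical rank of the Jacobian of an intermediate representation with respect to the input serves as a rough proxy for how many input directions the layer still exposes. This is a simplified reading of the DoF/Jacobian analysis described above, written in PyTorch with an illustrative toy model.

```python
import torch

def intermediate_jacobian_rank(head, x):
    """Numerical rank of d(head(x))/dx, where `head` maps the input to an
    intermediate representation. A higher rank suggests the layer preserves
    more directions of the input, i.e. more recoverable detail leaks."""
    jac = torch.autograd.functional.jacobian(lambda inp: head(inp).flatten(), x)
    return torch.linalg.matrix_rank(jac.reshape(jac.shape[0], -1)).item()

# Toy model: probe the representation after the first activation.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4)
)
head = model[:2]  # input -> first hidden activation
print(intermediate_jacobian_rank(head, torch.randn(8)))
```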
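
Quantitative information flow can be illustrated on a finite program by exact enumeration (the cited work computes such measures symbolically; enumeration is used here only to keep the sketch self-contained). The example measures Shannon leakage, the mutual information between a secret and the program's observable output.

```python
import math
from collections import defaultdict

def shannon_leakage(secret_dist, program):
    """Mutual information I(S; O) between a secret S and the output O of a
    probabilistic program, computed by exact enumeration. `program(s)`
    returns the output distribution {output: probability} for secret s."""
    joint = defaultdict(float)
    for s, p_s in secret_dist.items():
        for o, p_o in program(s).items():
            joint[(s, o)] += p_s * p_o
    marg_s, marg_o = defaultdict(float), defaultdict(float)
    for (s, o), p in joint.items():
        marg_s[s] += p
        marg_o[o] += p
    return sum(p * math.log2(p / (marg_s[s] * marg_o[o]))
               for (s, o), p in joint.items() if p > 0)

# Toy program: report the parity of a 2-bit secret, flipped with prob 0.1.
def program(s):
    return {s % 2: 0.9, 1 - s % 2: 0.1}

uniform = {s: 0.25 for s in range(4)}
print(f"{shannon_leakage(uniform, program):.3f} bits leaked")  # ~0.531 of 2 bits
```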
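
Finally, the hypothesis-testing view behind tighter auditing can be sketched as follows: if a membership test on an adversarially chosen canary separates models trained with and without it, differential privacy bounds the achievable true/false positive ratio, which yields an empirical lower bound on epsilon. The scores below are synthetic stand-ins; a real audit trains many models and uses confidence intervals (e.g. Clopper-Pearson) rather than point estimates, and this sketch does not reproduce the cited paper's specific method.

```python
import numpy as np

def empirical_epsilon(scores_in, scores_out, threshold):
    """Lower-bound epsilon from a thresholded membership test on a canary.

    An (epsilon, 0)-DP training algorithm forces TPR <= exp(epsilon) * FPR
    and (1 - FPR) <= exp(epsilon) * (1 - TPR), so the observed rates give
    epsilon >= max(ln(TPR / FPR), ln((1 - FPR) / (1 - TPR))).
    """
    tpr = float(np.mean(scores_in >= threshold))   # canary present, flagged
    fpr = float(np.mean(scores_out >= threshold))  # canary absent, flagged
    if tpr == 0.0 or fpr == 1.0:
        return 0.0                                 # test carries no evidence
    if fpr == 0.0 or tpr == 1.0:
        return float("inf")                        # perfect separation
    return max(np.log(tpr / fpr), np.log((1 - fpr) / (1 - tpr)))

# Synthetic stand-in scores from models trained with / without the canary.
rng = np.random.default_rng(0)
scores_in = rng.normal(1.0, 0.5, size=100)
scores_out = rng.normal(0.0, 0.5, size=100)
print(f"empirical epsilon >= {empirical_epsilon(scores_in, scores_out, 0.5):.2f}")
```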

Noteworthy papers include one that introduces Inference Privacy and provides a systematic framework for protecting user data during inference, and another that measures privacy risk in deep computer vision models using DoF and Jacobian sensitivity analysis.

Sources

Inference Privacy: Properties and Mechanisms

Intermediate Outputs Are More Sensitive Than You Think

Symbolic Quantitative Information Flow for Probabilistic Programs

Adversarial Sample-Based Approach for Tighter Privacy Auditing in Final Model-Only Scenarios
