The field of privacy-preserving machine learning is evolving rapidly, with a focus on balancing privacy protection against data utility. Recent work has centered on the vulnerabilities of federated learning, notably gradient inversion attacks, which reconstruct private training data from the gradients clients share, and poisoning attacks, which corrupt the global model through malicious updates. Researchers are exploring new defense mechanisms, such as learnable data perturbation and generative adversarial networks, to protect sensitive data and filter out malicious contributions. These advances could significantly improve the security and reliability of federated learning systems, enabling wider adoption in healthcare and other sensitive domains. Noteworthy papers include:
- Defending Against Gradient Inversion Attacks for Biomedical Images via Learnable Data Perturbation, which proposes a generalizable defense for healthcare data.
- Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense Framework, which offers a scalable, adaptive defense against poisoning in federated learning.
- TS-Inverse: A Gradient Inversion Attack Tailored for Federated Time Series Forecasting Models, which shows that time series data can be reconstructed more effectively with an inversion attack tailored to forecasting models.
- Generator Cost Coefficients Inference Attack via Exploitation of Locational Marginal Prices in Smart Grid, which reveals vulnerabilities in smart grid systems and proposes a method for inferring generator cost functions.
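To make the gradient-leakage threat concrete, the toy sketch below shows why sharing raw gradients is risky: for a single linear layer with a bias trained on one example, the client's input can be recovered analytically from the gradients alone, and adding noise to the shared gradients degrades the reconstruction. This is a minimal illustration of the attack class, not the method of any paper above; the plain Gaussian noise stands in for the learnable perturbations those defenses actually optimize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret client data: one training example (x, y).
x = rng.normal(size=5)
y = 1.0

# Model: linear layer with bias, squared-error loss L = (w.x + b - y)^2.
w = rng.normal(size=5)
b = 0.0

# In federated learning the client shares these gradients with the server.
residual = (w @ x + b) - y
grad_w = 2 * residual * x   # dL/dw
grad_b = 2 * residual       # dL/db

# Attack: grad_w / grad_b = (2r * x) / (2r) = x, so the server recovers
# the private input exactly from the shared gradients.
x_reconstructed = grad_w / grad_b

# Defense sketch: perturbing the shared gradient breaks exact recovery.
# (Plain Gaussian noise here; a learnable perturbation would be tuned to
# preserve model utility while blocking inversion.)
x_noisy = (grad_w + rng.normal(scale=0.5, size=5)) / grad_b

print(np.allclose(x_reconstructed, x))  # exact recovery without defense
print(np.allclose(x_noisy, x))          # recovery fails under perturbation
```

Deeper models require iterative gradient-matching rather than this closed form, but the underlying leakage is the same, which is why the defenses above target the shared gradients themselves.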