Adversarial Robustness and Privacy in Machine Learning

The field of machine learning is increasingly focused on critical security and privacy challenges. Researchers are developing methods to improve adversarial robustness, including new training frameworks and mechanisms that keep class prototypes stable under attack. There is also growing attention to identifying and mitigating privacy vulnerabilities, such as attribute inference attacks and disparate privacy risks across subgroups of the training data. Noteworthy papers in this area include:

  • A Study on Adversarial Robustness of Discriminative Prototypical Learning, which proposes Adversarial Deep Positive-Negative Prototypes (Adv-DPNP), a training framework that integrates discriminative prototype-based learning with adversarial training.
  • Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks and Defenses, which introduces the disparity inference attack and a targeted variant of the attribute inference attack, both of which identify and exploit especially vulnerable subsets of the training data.
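To make the prototype-robustness idea concrete, here is a minimal, illustrative sketch (not the Adv-DPNP method itself) of a nearest-prototype classifier whose prototypes are re-estimated on FGSM-style perturbed inputs. The data, the attack strength `eps`, and the single-round update are all assumptions chosen for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated 2-D Gaussian classes (an assumption for illustration).
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(50, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.3, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def nearest_prototype(X, protos):
    """Predict the class whose prototype is closest in L2 distance."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

def fgsm_attack(X, y, protos, eps=0.5):
    """FGSM-style perturbation: step in the sign of the gradient of the
    squared distance to the true-class prototype, pushing each input
    away from its own prototype. d/dx ||x - p_y||^2 = 2 (x - p_y)."""
    grad = 2.0 * (X - protos[y])
    return X + eps * np.sign(grad)

# Standard prototypes: per-class means of clean data.
protos = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# One round of adversarial training: re-estimate prototypes on
# adversarially perturbed inputs so they remain stable under attack.
X_adv = fgsm_attack(X, y, protos, eps=0.5)
protos_adv = np.stack([X_adv[y == c].mean(axis=0) for c in (0, 1)])

clean_acc = (nearest_prototype(X, protos_adv) == y).mean()
adv_acc = (nearest_prototype(X_adv, protos_adv) == y).mean()
```

In a real framework the prototypes and a feature extractor would be optimized jointly over many epochs; this sketch only shows the core loop of attacking inputs relative to class prototypes and fitting prototypes on the perturbed data.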

Sources

SLACK: Attacking LiDAR-based SLAM with Adversarial Point Injections

A Study on Adversarial Robustness of Discriminative Prototypical Learning

Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks and Defenses

Clustering and novel class recognition: evaluating bioacoustic deep learning feature extractors
