Federated Learning Security

Report on Current Developments in Federated Learning Security

General Direction of the Field

The field of federated learning (FL) is evolving rapidly, particularly in the domain of security and robustness against adversarial attacks. Recent research focuses on protecting FL systems from backdoor attacks, in which an adversary embeds a malicious trigger into the model during training. The field is moving towards more sophisticated and stealthy attack strategies, as well as robust defense mechanisms that remain effective under non-independent and identically distributed (non-i.i.d.) data conditions.
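
To make the threat concrete, the sketch below shows how a single malicious client could poison its local training data with a pixel-patch trigger before participating in FL training. The 3x3 patch, target label, and poison rate are illustrative assumptions and do not correspond to any specific attack discussed in this report.

    # Illustrative sketch of trigger-based data poisoning by a malicious FL client.
    # The 3x3 pixel patch, target label, and poison rate are hypothetical choices,
    # not taken from any specific paper covered in this report.
    import numpy as np

    def poison_batch(images, labels, target_label=0, poison_rate=0.2, seed=0):
        """Stamp a small white patch (the trigger) onto a fraction of the batch
        and relabel those samples with the attacker's target class."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_rate)
        idx = rng.choice(len(images), size=n_poison, replace=False)
        images[idx, -3:, -3:] = 1.0   # white patch in the bottom-right corner
        labels[idx] = target_label
        return images, labels

    # Toy usage: a client poisons part of its local data before local training.
    x = np.random.rand(32, 28, 28).astype(np.float32)   # fake grayscale images
    y = np.random.randint(0, 10, size=32)
    x_poisoned, y_poisoned = poison_batch(x, y)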

One key development is backdoor attacks that are more difficult to detect and mitigate. These attacks leverage techniques such as steganography and neuron-based triggers to create backdoors that are both effective and covert. In parallel, researchers are exploring defense mechanisms that can identify and neutralize these backdoors without relying on the availability of clean data. Such defenses often incorporate layered aggregation, optimal transport-based model fusion, and participant-wise anomaly detection to enhance the robustness of FL systems.
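
As a rough illustration of participant-wise anomaly detection, the sketch below scores each client update by its distance to the coordinate-wise median and drops outliers before averaging. The distance measure and MAD-based threshold are generic robust-statistics choices, not the specific defenses surveyed in this report.

    # Minimal sketch of participant-wise anomaly detection during aggregation.
    # The distance-to-median score and MAD-based threshold are generic
    # robust-statistics choices, not the specific defenses named above.
    import numpy as np

    def filter_and_aggregate(client_updates, z_thresh=2.5):
        """client_updates: list of 1-D arrays (flattened model deltas).
        Drops clients whose update is unusually far from the coordinate-wise
        median, then averages the remaining updates."""
        updates = np.stack(client_updates)                  # (num_clients, dim)
        median = np.median(updates, axis=0)
        dists = np.linalg.norm(updates - median, axis=1)    # per-client score
        mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
        scores = 0.6745 * (dists - np.median(dists)) / mad  # robust z-scores
        keep = scores < z_thresh                            # drop suspicious clients
        return updates[keep].mean(axis=0), keep

    # Toy usage: nine benign clients plus one heavily scaled (suspicious) update.
    rng = np.random.default_rng(1)
    benign = [rng.normal(0, 0.01, 1000) for _ in range(9)]
    malicious = rng.normal(0, 0.01, 1000) * 50
    aggregate, kept = filter_and_aggregate(benign + [malicious])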

Another important trend is the adaptation of defense strategies to other forms of federated learning, such as vertical federated learning (VFL), in which participants hold disjoint feature subsets of the same samples. This adaptation is crucial because backdoor attacks in VFL can exploit this feature-partitioned setting, rendering traditional defenses ineffective.
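
For readers unfamiliar with the setting, the toy sketch below shows the data layout that VFL defenses must accommodate: each party holds a disjoint feature slice of the same samples, applies a local bottom model, and sends a partial representation to an active party that owns the labels. The party split and the linear bottom/top models are purely illustrative assumptions.

    # Toy illustration of the vertical FL data layout: two feature-holding parties
    # and one active party that owns the labels. Party names, the split point, and
    # the linear "bottom/top models" are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, split = 100, 10
    X = rng.normal(size=(n_samples, 20))                 # full feature matrix (conceptual)
    X_party_a, X_party_b = X[:, :split], X[:, split:]    # vertical (feature-wise) partition

    # Each party applies its local bottom model to its own feature slice.
    W_a = rng.normal(size=(split, 4))
    W_b = rng.normal(size=(20 - split, 4))
    emb_a = X_party_a @ W_a                              # partial embedding sent to the active party
    emb_b = X_party_b @ W_b

    # The active party's top model combines the partial embeddings into predictions.
    W_top = rng.normal(size=(4, 2))
    predictions = ((emb_a + emb_b) @ W_top).argmax(axis=1)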

Overall, the field is progressing towards a more comprehensive understanding of the vulnerabilities in FL and the development of multi-faceted defense strategies that can address these challenges in a variety of scenarios.

Noteworthy Papers

  • SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning: Introduces a novel backdoor attack that leverages steganography and a new gradient updating mechanism, making it more robust and difficult to detect.

  • Celtibero: Robust Layered Aggregation for Federated Learning: Proposes a layered aggregation defense that significantly enhances robustness against poisoning attacks, outperforming existing methods under non-i.i.d. conditions.

  • Fusing Pruned and Backdoored Models: Optimal Transport-based Data-free Backdoor Mitigation: Presents a data-free defense method using optimal transport-based model fusion, successfully defending against multiple backdoor attacks across benchmark datasets; a simplified, generic sketch of alignment-based model fusion appears after this list.

  • VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification: Introduces the first backdoor defense specifically designed for vertical federated learning, demonstrating effective mitigation of backdoor attacks in VFL scenarios.
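
As a rough illustration of what model fusion by neuron alignment involves, the sketch below matches the hidden neurons of two same-shaped weight matrices with a hard assignment (a simplified stand-in for an optimal-transport coupling) and then averages the aligned weights. It is not the method of the paper summarized above.

    # Highly simplified sketch of fusing two models by aligning their hidden
    # neurons before averaging. A hard assignment (Hungarian matching) stands in
    # for the optimal-transport coupling; this is not the method of the paper
    # summarized above.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def fuse_layers(W_ref, W_other):
        """W_ref, W_other: (hidden, inputs) weight matrices of the same shape.
        Permutes the rows (neurons) of W_other to best match W_ref, then averages."""
        # cost[i, j] = distance between neuron i of W_ref and neuron j of W_other
        cost = np.linalg.norm(W_ref[:, None, :] - W_other[None, :, :], axis=2)
        _, cols = linear_sum_assignment(cost)
        return 0.5 * (W_ref + W_other[cols])

    # Toy usage: the second model's neurons are a noisy permutation of the first's.
    rng = np.random.default_rng(3)
    W1 = rng.normal(size=(8, 16))
    W2 = W1[rng.permutation(8)] + rng.normal(scale=0.01, size=(8, 16))
    fused = fuse_layers(W1, W2)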

Sources

SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning

Sample-Independent Federated Learning Backdoor Attack

Celtibero: Robust Layered Aggregation for Federated Learning

Fusing Pruned and Backdoored Models: Optimal Transport-based Data-free Backdoor Mitigation

VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification