Research in Federated Learning (FL) is increasingly focused on securing models against adversarial threats. Recent work has exposed the vulnerability of FL systems to temporal adversarial attacks, malicious unlearning attacks, and backdoor attacks, particularly in non-IID environments, and has motivated defenses aimed at preserving the integrity and reliability of FL models under such threats. In parallel, personalized federated learning (PFL) and vertical federated learning (VFL) are opening new avenues for handling data heterogeneity and privacy, including novel approaches to unlearning and to defense.
- Temporal Analysis of Adversarial Attacks in Federated Learning: Examines how adversarial attacks unfold across training rounds and degrade FL models, arguing for defenses that account for this temporal dimension.
- FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks: Introduces an attack that abuses the federated unlearning process itself, underscoring the need for unlearning mechanisms that are resilient to malicious requests.
- FedCLEAN: Byzantine Defense by Clustering Errors of Activation Maps in Non-IID Federated Learning Environments: Proposes a defense that clusters per-client activation-map errors to isolate Byzantine updates, and demonstrates its robustness against Byzantine attackers in non-IID settings (see the first sketch after this list).
- Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning: Shows that personalization can make PFL largely immune to conventional backdoor attacks, then proposes an attack that uses naturally occurring data features as triggers (illustrated in the second sketch below).
- Unlearning Clients, Features and Samples in Vertical Federated Learning: Explores unlearning in VFL, introducing efficient methods for unlearning clients, features, and samples without compromising model performance (see the final sketch below).
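
To make the FedCLEAN entry concrete, the following is a minimal sketch of clustering-based Byzantine filtering. The construction of the per-client error vectors, the use of k-means with two clusters, and the majority-cluster rule are illustrative assumptions; the paper's actual activation-map error extraction and clustering procedure are not reproduced here.

```python
# Minimal sketch: filter suspected-Byzantine clients by clustering their
# error vectors and aggregating only the majority cluster. Hypothetical;
# not FedCLEAN's actual algorithm.
import numpy as np
from sklearn.cluster import KMeans

def filter_byzantine_updates(updates, error_vectors):
    """updates: list of flattened per-client model updates.
    error_vectors: per-client feature vectors, e.g. errors computed from
    activation maps on server-side probe inputs (an assumption here)."""
    X = np.stack(error_vectors)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    # Majority-cluster heuristic: assume benign clients outnumber attackers.
    benign = max((0, 1), key=lambda c: int(np.sum(labels == c)))
    kept = [u for u, lab in zip(updates, labels) if lab == benign]
    return np.mean(kept, axis=0)  # aggregate only the presumed-benign updates
```

Note the standing assumption: benign clients must outnumber Byzantine ones in each round, or the majority-cluster rule selects the attackers instead.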
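The Bad-PFL entry turns on triggers built from natural data features rather than stamped-on patches. The sketch below illustrates that general idea with a simple blending rule over float-valued images; `natural_feature`, the blend coefficient `alpha`, and the label flip are illustrative assumptions, not the paper's trigger-generation method.

```python
# Hypothetical illustration of a "natural feature" backdoor trigger:
# blend a feature pattern that already occurs in benign data into a
# fraction of training images, flipping their labels to the target class.
import numpy as np

def poison_batch(images, labels, natural_feature, target_label,
                 poison_frac=0.1, alpha=0.15, seed=0):
    """images: float array (n, H, W, C); natural_feature: one (H, W, C) image."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                     replace=False)
    # Low-alpha blending keeps the trigger visually inconspicuous while
    # remaining a feature the model can learn to associate with the target.
    images[idx] = (1 - alpha) * images[idx] + alpha * natural_feature
    labels[idx] = target_label
    return images, labels
```

The intuition for natural-feature triggers is that defenses keyed to artificial patch patterns have less to latch onto when the trigger already belongs to the benign data distribution.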
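Finally, for the VFL unlearning entry, the sketch below shows the simplest form of client-level unlearning: a server fusion head that keeps one weight block per client and deletes the departing client's block. The additive fusion architecture and the `AdditiveFusionHead` class are hypothetical scaffolding; the paper's efficient procedures for unlearning features and samples are more involved.

```python
# Hypothetical VFL server head: each client's encoder sends embeddings,
# fused additively through a per-client weight block. Unlearning a client
# amounts to deleting its block (a fine-tuning pass on retained data
# would typically follow in practice).
import numpy as np

class AdditiveFusionHead:
    def __init__(self, embed_dims, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight block per participating client, keyed by client id.
        self.blocks = {cid: rng.normal(0.0, 0.01, size=(n_classes, dim))
                       for cid, dim in embed_dims.items()}

    def predict_logits(self, embeddings):
        # embeddings: {client_id: (batch, dim)} from each party's encoder.
        return sum(emb @ self.blocks[cid].T for cid, emb in embeddings.items())

    def unlearn_client(self, cid):
        # Drop the departing client's contribution outright.
        del self.blocks[cid]
```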