Recent developments in distributed and federated learning have focused on enhancing privacy, security, and efficiency in the face of non-convex optimization, non-IID data distributions, and adversarial threats. A notable trend is the integration of privacy-preserving techniques, such as differential privacy mechanisms, into distributed optimization algorithms to protect sensitive data without compromising model utility. Innovations in this area include algorithms that guarantee differential privacy over time-varying networks and lossless privacy-preserving aggregation methods that maintain model accuracy while guarding against data leakage.
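The basic pattern behind differentially private gradient sharing can be sketched as follows. This is a generic Gaussian-mechanism illustration, not the algorithm of any paper in this survey; the function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative assumptions.

```python
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient to a bounded L2 norm, then add Gaussian noise.

    Minimal sketch of the Gaussian mechanism commonly used to make
    shared gradients differentially private: clipping bounds the
    sensitivity, and the noise scale is proportional to that bound.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])                 # ||g|| = 5 exceeds clip_norm
noisy = dp_gradient(g, rng=np.random.default_rng(0))
```

In a distributed setting each agent would apply this step locally before transmitting its gradient, so that no neighbor observes the raw update.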
Another critical area of progress is defense against adversarial attacks, particularly in decentralized federated learning environments. Novel defense mechanisms, such as gradient purification and resilient peer-to-peer aggregation, have been proposed to mitigate poisoning attacks and preserve the integrity of the learning process. Rather than simply discarding suspect contributions, these methods aim to retain the benign components of potentially malicious updates while improving overall model accuracy.
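The robust-aggregation idea can be illustrated with a coordinate-wise trimmed mean, a standard Byzantine-resilient baseline; this is a generic sketch, not the GPD or adaptive peer-to-peer schemes cited below.

```python
import numpy as np

def trimmed_mean(updates, trim_k=1):
    """Coordinate-wise trimmed mean over peer model updates.

    At each coordinate the trim_k largest and trim_k smallest values
    are discarded before averaging, bounding the influence any single
    poisoned update can exert on the aggregate.
    """
    stacked = np.sort(np.stack(updates), axis=0)   # sort per coordinate
    kept = stacked[trim_k: len(updates) - trim_k]  # drop the extremes
    return kept.mean(axis=0)

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = np.array([100.0, -100.0])               # one malicious peer
agg = trimmed_mean(honest + [poisoned], trim_k=1)  # → close to [1.05, 0.95]
```

With plain averaging the poisoned update would dominate the aggregate; here it is trimmed away at every coordinate.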
Furthermore, the field has seen new strategies for addressing backdoor attacks in split learning and vertical federated learning. Defenses such as SafeSplit, alongside the study of cooperative decentralized backdoor attacks, reflect the ongoing effort to secure distributed learning frameworks against sophisticated threats and to ensure the robustness and reliability of collaborative learning systems.
In the realm of distributed aggregative optimization, ensuring the truthfulness of agents in a fully decentralized setting has become a pivotal concern. Recent work has introduced algorithms that guarantee truthfulness and convergence performance, addressing the issue of deceptive information sharing among agents.
Noteworthy Papers:
- Privacy-Preserving Distributed Online Mirror Descent for Nonconvex Optimization: Introduces a novel algorithm that ensures differential privacy and sublinear regret growth for nonconvex optimization over time-varying networks.
- Lossless Privacy-Preserving Aggregation for Decentralized Federated Learning: Proposes LPPA, a method that enhances gradient protection without sacrificing model accuracy, significantly outperforming traditional noise addition techniques.
- Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning: Develops GPD, a defense mechanism that mitigates poisoning attacks while retaining accuracy benefits from malicious gradients.
- Resilient Peer-to-peer Learning based on Adaptive Aggregation: Introduces a resilient aggregation technique for peer-to-peer learning, demonstrating improved accuracy against various attack models.
- SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning: Presents the first defense mechanism specifically designed for split learning, effectively mitigating client-side backdoor attacks.
- Differentially Private Gradient-Tracking-Based Distributed Stochastic Optimization over Directed Graphs: Proposes a new algorithm that ensures differential privacy and achieves polynomial or exponential convergence rates for distributed stochastic optimization.
- Ensuring Truthfulness in Distributed Aggregative Optimization: Introduces a novel algorithm that ensures truthfulness and convergence in distributed aggregative optimization, addressing the challenge of deceptive information sharing.
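The "lossless" aggregation idea mentioned above can be illustrated with a classic pairwise-masking sketch. This is a generic secure-aggregation pattern, not the actual LPPA protocol; the pairwise masks and the `masked_updates` helper are illustrative assumptions. Each pair of clients shares a random mask that one adds and the other subtracts, so individual updates stay hidden while the masks cancel exactly in the sum.

```python
import numpy as np

def masked_updates(updates, rng=None):
    """Hide each client's update with pairwise-cancelling random masks.

    Client i adds mask m_ij for every j > i and subtracts m_ji for
    every j < i. No single masked vector reveals the underlying
    update, but the masks cancel when all vectors are summed, so the
    aggregate is recovered without accuracy loss.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(updates)
    masks = {(i, j): rng.normal(size=updates[0].shape)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = u.copy()
        for j in range(n):
            if i < j:
                m += masks[(i, j)]
            elif j < i:
                m -= masks[(j, i)]
        masked.append(m)
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
total = sum(masked_updates(updates))  # masks cancel: sum ≈ [9, 12]
```

Unlike plain noise addition, which trades accuracy for privacy, this construction leaves the aggregate unperturbed, which is the property lossless schemes aim for.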