Advancements in Federated Learning and Differential Privacy

The field of federated learning and differential privacy is evolving rapidly, with a focus on methods that protect data privacy and security in distributed learning settings. Recent research has explored the intersection of federated learning and prompt learning, producing frameworks that evaluate and improve the performance of federated prompt learning algorithms. There is also growing interest in the intelligent distribution of privacy budgets in differentially private text rewriting, which has led to toolkits that allocate a document's privacy budget across its constituent tokens.

Fairness in federated learning has likewise gained significant attention, with proposed frameworks that investigate the minimum accuracy lost when enforcing global and local fairness in multi-class settings. Researchers have further introduced secure cluster-weighted client aggregation and anonymous adaptive clustering to address data heterogeneity and privacy concerns. Other notable advances include differentially private stochastic gradient descent with dynamic clipping, privacy-preserving decentralized stochastic learning algorithms, and methods for detecting backdoor attacks in federated learning.

Noteworthy papers include FLIP, which introduces a comprehensive framework for evaluating federated prompt learning algorithms, and Spend Your Budget Wisely, which proposes an intelligent distribution of the privacy budget in differentially private text rewriting. Overall, these advances demonstrate the field's commitment to robust, privacy-preserving distributed learning.
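To make the "dynamic clipping" idea concrete, here is a minimal sketch of one DP-SGD step in which the clipping threshold is derived from the observed per-sample gradient norms rather than fixed in advance. This is an illustrative heuristic (median norm as the threshold), not the specific estimation procedure of the DC-SGD paper; the function name and signature are hypothetical.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, noise_multiplier, lr, rng):
    """One DP-SGD step with a data-dependent ("dynamic") clipping threshold:
    set the clip norm to the median per-sample gradient norm, clip each
    gradient to it, average, and add Gaussian noise scaled to the threshold."""
    norms = [np.linalg.norm(g) for g in per_sample_grads]
    clip_norm = float(np.median(norms))  # illustrative dynamic threshold
    clipped = [g * min(1.0, clip_norm / max(n, 1e-12))
               for g, n in zip(per_sample_grads, norms)]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * noisy_grad
```

Note that making the threshold depend on the private gradients itself consumes privacy budget in a rigorous treatment; the sketch omits that accounting for brevity.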
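The token-level budget distribution mentioned above can be illustrated with a small sketch: split a document-level privacy budget across tokens in proportion to per-token importance scores, so that by sequential composition the token-level epsilons sum back to the total. This is only a sketch of the general idea, not the Spend Your Budget Wisely toolkit's actual allocation scheme; `allocate_budget` and the scoring are hypothetical.

```python
def allocate_budget(token_scores, total_epsilon):
    """Distribute a total privacy budget over tokens proportionally to
    their importance scores. The returned per-token epsilons sum to
    total_epsilon (sequential composition across tokens)."""
    total_score = sum(token_scores)
    return [total_epsilon * s / total_score for s in token_scores]

# e.g. a token judged three times as sensitive receives three times the budget
eps = allocate_budget([1.0, 3.0], 4.0)
```

Tokens given a larger share of the budget are perturbed less, so the allocation directly trades utility on "important" tokens against stronger noise elsewhere.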

Sources

FLIP: Towards Comprehensive and Reliable Evaluation of Federated Prompt Learning

Spend Your Budget Wisely: Towards an Intelligent Distribution of the Privacy Budget in Differentially Private Text Rewriting

The Cost of Local and Global Fairness in Federated Learning

Enhancing Federated Learning Through Secure Cluster-Weighted Client Aggregation

DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation

FedCAPrivacy: Privacy-Preserving Heterogeneous Federated Learning with Anonymous Adaptive Clustering

Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks

Buffer is All You Need: Defending Federated Learning against Backdoor Attacks under Non-iids via Buffering

PDSL: Privacy-Preserved Decentralized Stochastic Learning with Heterogeneous Data Distribution

Privacy Preservation for Statistical Input in Dynamical Systems

Initial State Privacy of Nonlinear Systems on Riemannian Manifolds

Sample-Optimal Private Regression in Polynomial Time

Backdoor Detection through Replicated Execution of Outsourced Training

Federated Learning for Cross-Domain Data Privacy: A Distributed Approach to Secure Collaboration

Forward Learning with Differential Privacy

Global Intervention and Distillation for Federated Out-of-Distribution Generalization

Exploring Personalized Federated Learning Architectures for Violence Detection in Surveillance Videos

Explainable post-training bias mitigation with distribution-based fairness metrics

On Model Protection in Federated Learning against Eavesdropping Attacks

Secure Generalization through Stochastic Bidirectional Parameter Updates Using Dual-Gradient Mechanism
