Advancements in Federated Learning and Differential Privacy

The field of federated learning and differential privacy is evolving rapidly, with a focus on methods that protect data privacy and security in distributed learning settings. Recent work explores the intersection of federated learning and prompt learning, producing frameworks for evaluating and improving federated prompt learning algorithms. There is also growing interest in the intelligent distribution of privacy budgets for differentially private text rewriting, which has led to toolkits that allocate a document's privacy budget across its constituent tokens. Fairness in federated learning has drawn significant attention as well, with frameworks that investigate the minimum accuracy lost when enforcing global and local fairness in multi-class settings. Researchers have further proposed secure cluster-weighted client aggregation and anonymous adaptive clustering to address data heterogeneity and privacy concerns, alongside differentially private SGD with dynamic clipping, privacy-preserving decentralized stochastic learning, and methods for detecting backdoor attacks in federated learning.

Noteworthy papers include FLIP, which introduces a comprehensive framework for evaluating federated prompt learning algorithms, and Spend Your Budget Wisely, which proposes an intelligent distribution of the privacy budget in differentially private text rewriting. Overall, these advances underscore the field's push toward robust, privacy-preserving distributed learning.
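Two of the techniques above lend themselves to short illustrations. First, a minimal sketch of a differentially private SGD step with a dynamically chosen clipping threshold: DC-SGD adapts the threshold by estimating the gradient norm distribution, and the sketch below stands in for that estimator with a simple running quantile over observed per-example gradient norms. The function name, the quantile rule, and the hyperparameters are illustrative assumptions rather than the paper's algorithm, and a real implementation would also need to account for the privacy cost of estimating the threshold itself.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, norm_history,
                quantile=0.9, noise_multiplier=1.0, lr=0.1):
    """One illustrative DP-SGD update; per_example_grads has shape (batch, dim)."""
    norms = np.linalg.norm(per_example_grads, axis=1)
    norm_history.extend(norms.tolist())

    # Dynamic clipping threshold: a quantile of the norms seen so far
    # (a stand-in for DC-SGD's gradient-norm distribution estimate).
    clip = float(np.quantile(np.asarray(norm_history), quantile))

    # Clip each per-example gradient to L2 norm <= clip.
    scale = np.minimum(1.0, clip / (norms + 1e-12))
    clipped = per_example_grads * scale[:, None]

    # Sum the clipped gradients, add Gaussian noise calibrated to the
    # clipping threshold, then average and take a gradient step.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        0.0, noise_multiplier * clip, size=params.shape)
    return params - lr * noisy_sum / per_example_grads.shape[0], norm_history

# Toy usage with random gradients.
rng = np.random.default_rng(0)
params, history = np.zeros(5), []
for _ in range(3):
    params, history = dp_sgd_step(params, rng.normal(size=(32, 5)), history)
```

Second, a sketch of splitting a document-level privacy budget across tokens, in the spirit of intelligent budget distribution for differentially private text rewriting. The scoring function and the proportional split are assumptions made purely for illustration, not the toolkit's actual allocation strategy; the only property relied on is that, by sequential composition, per-token budgets summing to the document-level epsilon keep the rewritten document within that budget.

```python
def allocate_budget(tokens, total_epsilon, score_fn):
    """Split total_epsilon across tokens proportionally to score_fn (illustrative rule)."""
    scores = [max(score_fn(tok), 1e-6) for tok in tokens]
    total = sum(scores)
    # By sequential composition, per-token budgets that sum to total_epsilon
    # keep the whole rewritten document within the document-level budget.
    return {idx: total_epsilon * s / total for idx, s in enumerate(scores)}

# Toy scorer: token length as a crude, purely illustrative weighting.
budgets = allocate_budget(["the", "patient", "visited", "Copenhagen"],
                          total_epsilon=10.0, score_fn=len)
print(budgets)
```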
Sources
Spend Your Budget Wisely: Towards an Intelligent Distribution of the Privacy Budget in Differentially Private Text Rewriting
DC-SGD: Differentially Private SGD with Dynamic Clipping through Gradient Norm Distribution Estimation
FedCAPrivacy: Privacy-Preserving Heterogeneous Federated Learning with Anonymous Adaptive Clustering
Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks
Buffer is All You Need: Defending Federated Learning against Backdoor Attacks under Non-iids via Buffering