Enhancing Privacy, Robustness, and Efficiency in Federated Learning

Recent developments in federated learning (FL) research show a significant shift toward enhancing privacy, robustness, and efficiency in distributed machine learning. A notable trend is the integration of explainability and fairness into FL frameworks, addressing the critical need for transparency and equitable treatment of all clients, especially those with poor data quality. Innovations such as dynamic and explainable defense mechanisms against adversarial attacks are moving the field toward more trustworthy artificial intelligence. The incorporation of generative AI and explainable AI mechanisms into personalized FL frameworks is likewise improving the adaptability and interpretability of models. The field is also seeing advances in reducing communication costs through novel data-distillation techniques and in protecting model integrity through gradient stand-in methods that guard against deep leakage. Furthermore, self-contained, compute-optimized implementations of the federated Newton Learn (FedNL) algorithm family are bridging the gap between theoretical advances and practical deployment. These developments collectively underscore the maturing of FL methodology, with growing emphasis on privacy, efficiency, and robustness in real-world scenarios.

Noteworthy papers include 'RAB$^2$-DEF: Dynamic and explainable defense against adversarial attacks in Federated Learning to fair poor clients,' which introduces a defense mechanism that is dynamic, explainable, and fair to clients with low-quality data, and 'GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning,' which proposes a framework that integrates generative AI and explainable AI to address label scarcity and non-IID data challenges in FL.

Sources

RAB$^2$-DEF: Dynamic and explainable defense against adversarial attacks in Federated Learning to fair poor clients

Opacity Enforcement by Edit Functions Under Incomparable Observations

GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning

DistDD: Distributed Data Distillation Aggregation through Gradient Matching

Gradients Stand-in for Defending Deep Leakage in Federated Learning

Unlocking FedNL: Self-Contained Compute-Optimized Implementation

The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses

Federated Learning in Practice: Reflections and Projections

Evaluating Federated Kolmogorov-Arnold Networks on Non-IID Data

A few-shot Label Unlearning in Vertical Federated Learning

Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning

FedCCRL: Federated Domain Generalization with Cross-Client Representation Learning

Backdoor Attack on Vertical Federated Graph Neural Network Learning

WPFed: Web-based Personalized Federation for Decentralized Systems

FOOGD: Federated Collaboration for Both Out-of-distribution Generalization and Detection

Why Go Full? Elevating Federated Learning Through Partial Network Updates

Age-of-Gradient Updates for Federated Learning over Random Access Channels

Federated Temporal Graph Clustering

TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic

Disentangling data distribution for Federated Learning

Decline Now: A Combinatorial Model for Algorithmic Collective Action

Federated Learning and Free-riding in a Competitive Market

FedGTST: Boosting Global Transferability of Federated Models via Statistics Tuning

FedCAP: Robust Federated Learning via Customized Aggregation and Personalization

Federated scientific machine learning for approximating functions and solving differential equations with data heterogeneity

Cyber Attacks Prevention Towards Prosumer-based EV Charging Stations: An Edge-assisted Federated Prototype Knowledge Distillation Approach

Towards Better Performance in Incomplete LDL: Addressing Data Imbalance

Towards Satellite Non-IID Imagery: A Spectral Clustering-Assisted Federated Learning Approach
