
Optimizing Federated Learning for Non-IID Data with Hierarchical Knowledge Distillation

EasyChair Preprint 15814

13 pages · Date: February 11, 2025

Abstract

Federated Learning (FL) has emerged as a promising solution for decentralized model training, but its effectiveness is significantly hindered by Non-Independent and Identically Distributed (Non-IID) data across clients. Traditional aggregation techniques struggle to maintain model stability and convergence under such heterogeneous conditions. This paper introduces Cross-Modal Gradient Synchronization (CMGS), a novel optimization approach designed to enhance FL on Non-IID data by aligning and synchronizing gradient updates across diverse data modalities. The proposed method leverages gradient alignment mechanisms, adaptive weighting strategies, and consensus-based synchronization to mitigate the impact of data heterogeneity. Experimental evaluations on benchmark datasets demonstrate that CMGS achieves superior model accuracy, faster convergence, and improved robustness compared to conventional FL techniques such as FedAvg and FedProx. Additionally, the approach is computationally efficient and scalable, making it well-suited for real-world applications. Future research directions include extending CMGS to large-scale FL networks, improving energy efficiency, and enhancing security measures.
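The abstract names three ingredients of CMGS: gradient alignment, adaptive weighting, and consensus-based synchronization. A minimal sketch of how such a server-side aggregation step might look is given below, assuming flattened per-client gradients; the function names, the cosine-similarity alignment score, and the dataset-size base weights are illustrative assumptions (FedAvg-style conventions), not the paper's actual implementation.

# Hypothetical sketch of a CMGS-style aggregation step, based only on the
# abstract's description: align client gradients with a consensus direction,
# reweight them adaptively, and return the synchronized update.
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened gradient vectors.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def consensus_aggregate(client_grads, client_sizes):
    # client_grads : list of 1-D numpy arrays (one flattened gradient per client)
    # client_sizes : list of local dataset sizes (FedAvg-style base weights)
    grads = [np.asarray(g, dtype=np.float64) for g in client_grads]
    base = np.asarray(client_sizes, dtype=np.float64)
    base = base / base.sum()

    # Consensus direction: plain size-weighted mean (the FedAvg update).
    consensus = sum(w * g for w, g in zip(base, grads))

    # Adaptive weights: down-weight clients whose gradients point away from
    # the consensus direction (a simple proxy for Non-IID divergence).
    align = np.array([max(cosine_similarity(g, consensus), 0.0) for g in grads])
    weights = base * (1e-6 + align)
    weights = weights / weights.sum()

    return sum(w * g for w, g in zip(weights, grads))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.normal(size=10) for _ in range(5)]
    sizes = [100, 80, 120, 60, 90]
    print(consensus_aggregate(grads, sizes).round(3))

In this sketch the alignment score only rescales the standard size-based weights, so with homogeneous (IID) gradients it reduces to FedAvg; how CMGS actually computes and synchronizes the weights across modalities is specified in the paper itself.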

Keyphrases: Cross-Modal Gradient Synchronization, Federated Learning, Gradient Alignment, Model Convergence, Non-IID Data, adaptive aggregation, data heterogeneity

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the following workaround produces the correct reference:
@booklet{EasyChair:15814,
  author    = {Lorenzaj Harris},
  title     = {Optimizing Federated Learning for Non-IID Data with Hierarchical Knowledge Distillation},
  howpublished = {EasyChair Preprint 15814},
  year      = {EasyChair, 2025}}