ShanghaiTech University Knowledge Management System
UNDERSTANDING CONVERGENCE AND GENERALIZATION IN FEDERATED LEARNING THROUGH FEATURE LEARNING THEORY
2024
Proceedings Title | 12TH INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS, ICLR 2024 |
Abstract | Federated Learning (FL) has attracted significant attention as an efficient privacy-preserving approach to distributed learning across multiple clients. Despite extensive empirical research and practical applications, a systematic theoretical understanding of the convergence and generalization properties of FL remains limited. This work aims to establish a unified theoretical foundation for understanding FL through feature learning theory. We focus on a scenario where each client employs a two-layer convolutional neural network (CNN) for local training on its own data. Many existing works analyze the convergence of Federated Averaging (FedAvg) under lazy training with linearizing assumptions in weight space. In contrast, our approach tracks the trajectory of signal learning and noise memorization in FL, eliminating the need for these assumptions. We further show that FedAvg can achieve near-zero test error by effectively increasing the signal-to-noise ratio (SNR) in feature learning, while local training without communication incurs a large constant test error. This finding highlights the benefits of communication for generalization in FL. Moreover, our theoretical results suggest that a weighted FedAvg method, based on the similarity of input features across clients, can effectively tackle data heterogeneity issues in FL. Experimental results on both synthetic and real-world datasets verify our theoretical conclusions and emphasize the effectiveness of the weighted FedAvg approach. © 2024 12th International Conference on Learning Representations, ICLR 2024. All rights reserved. |
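The abstract contrasts standard FedAvg (size-weighted model averaging) with a weighted variant driven by cross-client feature similarity. The following is a minimal illustrative sketch of both aggregation rules, not the paper's exact method: the `similarity_weighted_avg` function and its cosine-similarity weighting against a reference feature are assumptions for illustration only.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: average client models, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # aggregation coefficients sum to 1
    return sum(c * w for c, w in zip(coeffs, client_weights))

def similarity_weighted_avg(client_weights, client_features, reference_feature):
    """Hypothetical similarity-weighted FedAvg: weight each client's model by the
    cosine similarity between its mean input feature and a reference feature.
    (Illustrative stand-in for the paper's feature-similarity weighting.)"""
    ref = np.asarray(reference_feature, dtype=float)
    sims = np.array([
        np.dot(f, ref) / (np.linalg.norm(f) * np.linalg.norm(ref))
        for f in client_features
    ])
    coeffs = sims / sims.sum()  # normalize similarities into aggregation weights
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Toy usage: three clients holding scalar "models" of 1.0, 2.0, 3.0
w = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
print(fedavg(w, [10, 20, 30]))  # size-weighted mean -> [2.333...]
```

Clients with features closer to the reference direction receive larger aggregation weights, which is one simple way to down-weight heterogeneous (dissimilar) clients.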
Keywords | Convolutional neural networks; Machine learning; Multilayer neural networks; Network layers; Privacy-preserving techniques; Convergence properties; Distributed learning; Empirical research; Feature learning; Generalisation; Learning theory; Local training; Multiple clients; Privacy preserving; Test errors |
Conference Name | 12th International Conference on Learning Representations, ICLR 2024 |
Conference Location | Hybrid, Vienna, Austria |
Conference Dates | May 7, 2024 - May 11, 2024 |
Indexed By | EI |
Language | English |
Publisher | International Conference on Learning Representations, ICLR |
EI Accession Number | 20243216835459 |
EI Main Heading | Signal to noise ratio |
EI Classification Codes | 716 Telecommunication; Radar, Radio and Television; 716.1 Information Theory and Signal Processing; 718 Telephone Systems and Related Technologies; Line Communications; 723 Computer Software, Data Handling and Applications; 723.2 Data Processing and Image Processing; 723.4 Artificial Intelligence |
Original Document Type | Conference article (CA) |
Document Type | Conference paper |
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/411258 |
Collections | School of Information Science and Technology_PI Research Groups_Shi Ye Group; School of Information Science and Technology_Master's Students |
Corresponding Author | Shi, Ye |
Author Affiliations | 1. RIKEN AIP, Japan 2. ShanghaiTech University, China 3. The University of Tokyo, RIKEN AIP, Japan |
Corresponding Author Affiliation | ShanghaiTech University |
Recommended Citation (GB/T 7714) | Huang Wei, Shi Ye, Cai Zhongyi, et al. UNDERSTANDING CONVERGENCE AND GENERALIZATION IN FEDERATED LEARNING THROUGH FEATURE LEARNING THEORY[C]: International Conference on Learning Representations, ICLR, 2024. |
Files in This Item | No files are associated with this item. |