
Graduate Student: Pei-Fei Chen (陳姵霏)
Thesis Title: Optimizing Training Delay of Federated Learning with Reinforcement Learning-based Pruning (聯邦學習訓練延遲最佳化:基於強化學習之模型剪枝策略)
Advisor: Yuan-Cheng Lai (賴源正)
Committee Members: Yuan-Cheng Lai (賴源正), Chuan-Kai Yang (楊傳凱), Yen-Hung Chen (陳彥宏)
Degree: Master
Department: Department of Information Management, School of Management
Year of Publication: 2024
Academic Year of Graduation: 112
Language: English
Pages: 47
Keywords: Federated learning, Reinforcement learning, Model pruning, Transmission optimization
Views: 114; Downloads: 0


    To address efficiency issues in federated learning (FL), particularly the latency problems faced by heterogeneous clients, this study examines the main sources of these issues: asynchronous training latency, caused by differences in clients' bandwidth environments leading to inconsistent upload times, and non-independent and identically distributed (Non-IID) data, where uneven data distribution among clients hampers model convergence. Previous solutions, including algorithm improvements, model distillation, and model pruning techniques, are limited when facing bandwidth variations. This study proposes a new method called bandwidth-based federated learning differential weight pruning (FedDW). FedDW optimizes transmission by considering network bandwidth variations and model weight characteristics. It focuses on retaining important, non-converged weights, which are crucial for improving model accuracy. Using reinforcement learning (RL), FedDW prunes weights and makes timely decisions based on current network conditions and model performance, determining which weights should be uploaded. This reduces transmission volume and time while ensuring continuous accuracy improvement. Experimental results demonstrate that compared to methods requiring the upload of full models, such as FedAvg and FedProx, FedDW not only saves time but also surpasses the SynFlow method, which uses pruning techniques, in accuracy. With the same number of exchanges, it can reduce time by 8.49% to 15.39%. In environments characterized by low bandwidth and Non-IID data, FedDW is capable of reducing transmission and waiting times while maintaining accuracy. These findings affirm that FedDW is well-suited for diverse network and data conditions, effectively mitigating latency issues in FL.
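The differential-weight idea summarized above (upload only the important, non-converged weights) can be sketched in a few lines. This is a minimal illustration under assumed simplifications, not the thesis's actual algorithm: real model weights are tensors rather than scalars, and the convergence threshold would be tuned or chosen adaptively. A client compares each weight against its value from the previous round and uploads only those still changing, i.e. the non-converged ones.

```python
def select_nonconverged(prev_weights, curr_weights, threshold=1e-3):
    """Return a sparse update containing only weights still changing.

    Weights whose round-to-round change is below `threshold` are treated
    as converged and pruned from the upload, shrinking the transmitted
    model and hence the upload time on a slow link.
    """
    update = {}
    for name, w in curr_weights.items():
        if abs(w - prev_weights[name]) >= threshold:
            update[name] = w
    return update


# Hypothetical two-round snapshot of three scalar weights:
prev = {"w1": 0.50, "w2": -0.20, "w3": 0.10}
curr = {"w1": 0.50, "w2": -0.35, "w3": 0.1005}

# Only w2 moved by more than the threshold, so only w2 is uploaded.
sparse = select_nonconverged(prev, curr)
```

The server would then merge each sparse update into its copy of the global model, so converged weights simply keep their last aggregated value.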

    Abstract (Chinese) I
    Abstract II
    Acknowledgements III
    Contents IV
    List of Tables VI
    List of Figures VII
    Chapter 1 Introduction 1
    Chapter 2 Literature Review 5
      2.1 Enhancing Efficiency 5
        2.1.1 Weight Pruning 5
        2.1.2 Partial Model 7
      2.2 Reinforcement Learning 8
      2.3 Review of Literature on Efficiency Improvements in Federated Learning 11
    Chapter 3 Problem Definition 16
      3.1 System Model Framework 16
      3.2 Problem Definition 18
    Chapter 4 Proposed Method: FedDW 19
      4.1 Overview 19
      4.2 Phase 1. Deep Q-Networks Driven Model Sizing 22
      4.3 Phase 2. Non-converging Weights Pruning 24
    Chapter 5 Evaluation 27
      5.1 Simulation Environment 27
      5.2 Impact of Federated Weight Density 28
      5.3 Impact of Network Bandwidth 33
      5.4 Impact of the Number of Clients 35
      5.5 Impact of Non-IID Data 39
    Chapter 6 Conclusion and Future Work 42
    References 44
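The outline's "Phase 1. Deep Q-Networks Driven Model Sizing" pairs an RL agent with the current network state to decide how much of the model to upload. As a rough illustration only, the sketch below uses a tabular Q-learning stand-in for the DQN, with a hypothetical discretization: states bin bandwidth and accuracy, and actions are candidate upload densities (fractions of weights kept). The thesis's actual state, action, and reward design may differ.

```python
import random

# Candidate upload densities: the fraction of model weights kept for upload.
ACTIONS = [0.25, 0.5, 0.75, 1.0]


def choose_density(q_table, state, epsilon=0.1):
    """Epsilon-greedy selection of an upload density for the given state.

    `q_table` maps (state, action) -> estimated return; unseen pairs
    default to 0. A full DQN would replace the table with a network.
    """
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))


def update_q(q_table, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning (Bellman) update after observing a reward,
    e.g. a reward that trades off round latency against accuracy gain."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

With a reward that penalizes upload time and rewards accuracy improvement, the agent learns to pick sparser uploads when bandwidth is low and denser ones when the link is fast.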

    [1] C. Zhang, Y. Xie, H. Bai, B. Yu, W. Li, and Y. Gao, “A Survey on Federated Learning,” Knowledge-Based Systems, vol. 216, 2021.
    [2] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in International Conference on Artificial Intelligence and Statistics, 2017, vol. 54, pp. 1273-1282.
    [3] J. Verbraeken, M. Wolting, J. Katzy, J. Kloppenburg, T. Verbelen, and J. S. Rellermeyer, “A Survey on Distributed Machine Learning,” ACM Computing Surveys, vol. 53, pp. 1-33, 2020.
    [4] C. Thapa, P. C. M. Arachchige, S. Camtepe, and L. Sun, “SplitFed: When Federated Learning Meets Split Learning,” in AAAI Conference on Artificial Intelligence, 2022, vol. 36, pp. 8485-8493.
    [5] J. Konečný, B. McMahan, and D. Ramage, “Federated Optimization: Distributed Optimization beyond the Datacenter,” arXiv preprint arXiv:1511.03575, 2015.
    [6] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated Optimization in Heterogeneous Networks,” Machine Learning and Systems, vol. 2, pp. 429-450, 2020.
    [7] H. Dai and Y. Hong, “Research on Model Optimization Technology of Federated Learning,” in International Conference on Big Data Analytics (ICBDA), 2023, pp. 107-112.
    [8] W. Y. B. Lim et al., “Federated Learning in Mobile Edge Networks: A Comprehensive Survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031-2063, 2020.
    [9] Z. Wang, H. Xu, J. Liu, H. Huang, C. Qiao, and Y. Zhao, “Resource-Efficient Federated Learning with Hierarchical Aggregation in Edge Computing,” in IEEE Conference on Computer Communications, 2021, pp. 1-10.
    [10] K. M. Ahmed, A. Imteaj, and M. H. Amini, “Federated Deep Learning for Heterogeneous Edge Computing,” in IEEE International Conference on Machine Learning and Applications (ICMLA), 2021, pp. 1146-1152.
    [11] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated Learning with Non-IID Data,” arXiv preprint arXiv:1806.00582, 2018.
    [12] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, “On the Convergence of FedAvg on Non-IID Data,” in International Conference on Learning Representations (ICLR), 2019.
    [13] E. Jeong, S. Oh, H. Kim, J. Park, M. Bennis, and S.-L. Kim, “Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data,” 2018.
    [14] H. Zhang, J. Bosch, H. H. Olsson, and A. C. Koppisetty, “AF-DNDF: Asynchronous Federated Learning of Deep Neural Decision Forests,” in Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2021, pp. 308-315.
    [15] H. Tanaka, D. Kunin, D. L. Yamins, and S. Ganguli, “Pruning Neural Networks without Any Data by Iteratively Conserving Synaptic Flow,” Advances in Neural Information Processing Systems, vol. 33, pp. 6377-6389, 2020.
    [16] L. Gowtham, B. Annappa, and D. N. Sachin, “FedPruNet: Federated Learning Using Pruning Neural Network,” in IEEE Region 10 Symposium (TENSYMP), 2022, pp. 1-6.
    [17] Z. Jiang, Y. Xu, H. Xu, Z. Wang, C. Qiao, and Y. Zhao, “FedMP: Federated Learning through Adaptive Model Pruning in Heterogeneous Edge Computing,” in IEEE International Conference on Data Engineering (ICDE), 2022, pp. 767-779.
    [18] Y. Jiang et al., “Model Pruning Enables Efficient Federated Learning on Edge Devices,” IEEE Transactions on Neural Networks and Learning Systems, 2022.
    [19] M. Denil, B. Shakibi, L. Dinh, M. A. Ranzato, and N. De Freitas, “Predicting Parameters in Deep Learning,” Advances in Neural Information Processing Systems, vol. 26, 2013.
    [20] M. Zhu and S. Gupta, “To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression,” arXiv preprint arXiv:1710.01878, 2017.
    [21] S. Lee, A. K. Sahu, C. He, and S. Avestimehr, “Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits,” Neurocomputing, vol. 556, 2023.
    [22] L. P. Kaelbling, M. L. Littman, and A. W. Moore, “Reinforcement Learning: A Survey,” Journal of Artificial Intelligence Research, vol. 4, pp. 237-285, 1996.
    [23] S. S. Mousavi, M. Schukat, and E. Howley, “Deep Reinforcement Learning: An Overview,” in SAI Intelligent Systems Conference (IntelliSys), vol. 2, Springer, 2018, pp. 426-440.
    [24] A. Kumar, A. Zhou, G. Tucker, and S. Levine, “Conservative Q-Learning for Offline Reinforcement Learning,” Advances in Neural Information Processing Systems, vol. 33, pp. 1179-1191, 2020.
    [25] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine, “Reinforcement Learning with Deep Energy-based Policies,” in International Conference on Machine Learning, 2017.

    Full-text release date: 2034/08/01 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)