
Graduate Student: Zhi-Xin Peng (彭志鑫)
Thesis Title: Training Delay Optimization of Federated Learning in the Hierarchical Networks (於階層式網路中聯盟式學習之訓練延遲最佳化)
Advisor: Yuan-Cheng Lai (賴源正)
Committee Members: Yuan-Cheng Lai (賴源正), Bor-Shen Lin (林伯慎), Yen-Hung Chen (陳彥宏)
Degree: Master
Department: Department of Information Management, College of Management
Publication Year: 2023
Graduation Academic Year: 112 (ROC calendar)
Language: English
Pages: 34
Keywords: Federated Learning, Hierarchical Network, Training Delay Optimization, Workload Allocation
Access count: 457 views, 0 downloads
Abstract:

Traditional centralized machine learning faces the risk of data privacy leakage, and hence Federated Learning (FL) has been proposed to mitigate this problem. Existing FL studies have focused on improving FL accuracy and communication efficiency; however, deploying FL in a real network architecture requires considering its performance in common hierarchical networks. In FL, the global model is aggregated from local models that network nodes train on their own datasets, and multiple iterations are performed to obtain the final global model. Consequently, for a given accuracy, there is a trade-off between the workload spent training local models and the number of iterations, and different combinations of workload and iteration count yield different FL total training delays. In this thesis, we therefore investigate the optimization of the FL total training delay in a hierarchical network by tuning the local model workload and the number of iterations.
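
To make the workload-versus-iterations trade-off concrete, here is a minimal delay model in our own illustrative notation (the symbols below are assumptions for exposition, not the thesis's): if each of the K global iterations trains over a local workload of w samples and then uploads the local model, the total training delay decomposes as

    T_{\mathrm{total}}(w, K) \;=\; K \left( \frac{c\,w}{f} \;+\; \frac{s}{B} \right),

where c is the CPU cycles required per sample, f the node's computing capacity, s the local model size, and B the uplink bandwidth. A fixed accuracy target couples K to w (more local work per round typically reduces the number of rounds required), which is exactly the trade-off that determines the optimal workload.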

To address this problem, this thesis proposes a workload allocation method that minimizes the FL total training delay, called Training Delay Optimization (TDO). TDO first calculates the computing delay and the transmission delay from known resource parameters, then derives the relationship among accuracy, workload, and number of iterations, and combines these equations to express the total training delay as a function of the workload. Finally, since this function is convex, gradient descent is used as the convex optimization method to obtain the workload allocation that minimizes the FL total training delay. The experimental results show that the total training delay inevitably increases when the accuracy, model size, or workload increases, whereas it is effectively reduced when the computing capacity or link bandwidth increases. Over the workload range from 1 to 100, the optimal allocation of workload and iteration count obtained by TDO improves the total training delay by 76.10% and 40.86% relative to the two endpoints of the range, respectively.
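
Continuing the illustration above, the following Python sketch shows how a convex total-delay function of this shape can be minimized with projected gradient descent over the workload range 1 to 100 used in the experiments. It is not the thesis's implementation: the coupling K(w) = ALPHA/w + BETA and every constant below are hypothetical placeholders, chosen only so that the optimum falls inside the range.

    # Hedged sketch: minimize a convex FL total-training-delay function by
    # projected gradient descent. The delay model and all constants are
    # illustrative assumptions, not the thesis's actual parameters.

    C = 1e6        # assumed CPU cycles needed per data sample
    F = 1e9        # assumed node computing capacity (cycles/second)
    S = 4e5        # assumed local model size (bits)
    B = 1e7        # assumed uplink bandwidth (bits/second)
    ALPHA = 200.0  # assumed coupling: K(w) = ALPHA / w + BETA global
    BETA = 5.0     # iterations reach the accuracy target at workload w

    def total_delay(w: float) -> float:
        """Total delay T(w) = K(w) * (computing delay + transmission delay)."""
        k = ALPHA / w + BETA
        return k * (C * w / F + S / B)

    def grad(w: float) -> float:
        """Analytic derivative of T(w); T is convex for w > 0."""
        return -ALPHA * S / (B * w * w) + BETA * C / F

    def optimize(w0: float = 50.0, lr: float = 1000.0, steps: int = 2000,
                 lo: float = 1.0, hi: float = 100.0) -> float:
        """Projected gradient descent over the feasible workload range."""
        w = w0
        for _ in range(steps):
            w -= lr * grad(w)        # step size tuned to this toy scale
            w = min(max(w, lo), hi)  # project back into [lo, hi]
        return w

    if __name__ == "__main__":
        w_star = optimize()
        print(f"optimal workload  : {w_star:.2f} samples/round")
        print(f"delay at optimum  : {total_delay(w_star):.3f} s")
        print(f"delay at endpoints: {total_delay(1.0):.3f} s (w=1), "
              f"{total_delay(100.0):.3f} s (w=100)")

With these placeholder numbers the minimum lands at w ≈ 40, and the optimized delay beats both endpoints of the range, qualitatively mirroring the endpoint comparison reported in the abstract.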

Table of Contents:
Abstract (Chinese) I
Abstract II
Acknowledgements III
Contents IV
List of Figures VI
List of Tables VII
1 Introduction 1
2 Background 4
  2.1 Federated Learning 4
  2.2 Related Work 6
3 System Model and Problem Formulation 8
  3.1 System Model 8
  3.2 Performance of FL 9
  3.3 Problem Statement 12
4 Training Delay Optimization 13
  4.1 The TDO concept 13
  4.2 Calculation of the total training delay 14
    4.2.1 The accuracy 14
    4.2.2 The total training delay 14
  4.3 Proof of convexity 16
  4.4 TDO algorithm 18
5 Evaluation 20
  5.1 Scenarios and parameters 20
  5.2 The effect of global model accuracy 20
  5.3 The effect of local model workload 22
  5.4 The effect of computing capacity in T1 23
  5.5 The effect of uplink bandwidth capacity in T1 25
  5.6 The effect of local model size 26
6 Conclusions 29
References 30

Full-text release date: 2033/09/28 (campus network)
Full-text release date: 2033/09/28 (off-campus network)
Full-text release date: 2033/09/28 (National Central Library: Taiwan NDLTD system)