
Graduate Student: Hui-Chu Yang (楊惠竹)
Thesis Title: Multi-Resource Network Slicing with Deep Reinforcement Learning for Optimal QoS Satisfaction Ratio (以增強式學習達成最佳服務品質滿足率之多資源網路切片)
Advisor: Yuan-Cheng Lai (賴源正)
Committee Members: Chuan-Kai Yang (楊傳凱), Yen-Hung Chen (陳彥宏)
Degree: Master
Department: Department of Information Management, College of Management
Publication Year: 2023
Graduation Academic Year: 111 (ROC calendar)
Language: English
Number of Pages: 36
Chinese Keywords: 網路切片 (Network Slicing), 多資源分配 (Multi-resource Allocation), 增強式學習 (Reinforcement Learning), QoS滿足率 (QoS Satisfaction Ratio)
Keywords: Network Slicing, Multi-resource Allocation, Deep Reinforcement Learning, QoS Satisfaction
    5G networks provide low-latency, high-rate services, which shorten transmission time and give users a more efficient experience. Under the 5G network architecture, both communication and computation resources are provided to satisfy a variety of service requirements, so proper multi-resource allocation is a key factor in improving users' quality of service (QoS), and network slicing allocates resource amounts according to each service's requirements. Many related works apply machine learning to optimal resource allocation; in this thesis we focus on the reinforcement learning approach and consider the impact of different parameter settings and simulation environment configurations. Existing studies pursue a variety of objectives for the optimal allocation. Taking the delay distribution of each service into account, we propose a multi-resource slicing method that uses reinforcement learning to achieve the best QoS satisfaction ratio, called Deep Reinforcement Learning-based Network Slicing with Maximum QoS Satisfaction (DRL-MQS). The idea of this method is to compute the delay distribution of each service to obtain its QoS satisfaction ratio, and to use reinforcement learning to search for the resource allocation that achieves the best QoS satisfaction ratio. The results show that, in the default environment, DRL-MQS improves the QoS satisfaction ratio by 10.54% compared with a method that targets resource utilization, and that DRL-MQS still maintains a better QoS satisfaction ratio when the packet arrival rate increases.
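    To make the notion of a "QoS satisfaction ratio" concrete, the sketch below shows one common way such a ratio can be read off a delay distribution; the notation (D_s, F_s, d_s^max, w_s, C_r, A) is an assumption made for illustration and is not taken from the thesis itself.

\[
  \mathrm{SAT}_s(a) \;=\; \Pr\bigl[D_s(a) \le d_s^{\max}\bigr] \;=\; F_s\bigl(d_s^{\max};\,a\bigr),
  \qquad
  \max_{a \in \mathcal{A}} \;\sum_{s} w_s\,\mathrm{SAT}_s(a)
  \quad \text{s.t.} \quad \sum_{s} a_{s,r} \le C_r \;\;\forall r,
\]

    where D_s(a) is the random delay experienced by service s under allocation a, F_s its cumulative distribution, d_s^max the delay budget of service s, w_s an optional slice weight, and C_r the capacity of resource type r.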


    5G networks provide services with low latency, high data rates, and large bandwidth, which shorten transmission time and give users a more efficient experience. Under the 5G network architecture, both communication and computation resources are provided to meet the various requirements of services. Proper resource allocation is therefore an important factor in improving users' quality of service (QoS), and network slicing allocates resources according to the requirements of each service. Many related works apply machine learning to optimal resource allocation. In this thesis, we focus on the deep reinforcement learning (DRL) approach and consider the impact of different RL parameter settings and simulation environment configurations. We propose Deep Reinforcement Learning-based Network Slicing with Maximum QoS Satisfaction (DRL-MQS) for multi-resource allocation. The idea of this method is to calculate the delay distribution of each service to obtain its QoS satisfaction ratio, and to use DRL to find the resource allocation with the optimal QoS satisfaction ratio. The results show that, in the default environment, DRL-MQS improves the QoS satisfaction ratio by 10.54% compared with an approach that targets resource utilization. Moreover, DRL-MQS still performs better when the packet arrival rate increases.
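    As a rough companion to the description above, the following is a minimal, self-contained sketch of the "reward = QoS satisfaction ratio" idea. It is not the DRL-MQS algorithm: the deep RL agent is replaced by a single-state tabular learner over a small discrete allocation space, and the slice parameters, M/M/1-style delay model, and capacities are invented for illustration only.

# Hypothetical sketch, not the thesis's DRL-MQS algorithm: a single-state
# tabular learner that searches over discrete multi-resource allocations,
# using the QoS satisfaction ratio (fraction of simulated packet delays that
# meet each slice's delay budget) as the reward. All parameters are invented.
import itertools
import random

SLICES = [                              # assumed slice parameters
    {"lam": 40.0, "budget_ms": 5.0},    # latency-sensitive slice
    {"lam": 20.0, "budget_ms": 20.0},   # delay-tolerant slice
]
CAP = {"comm": 6, "comp": 6}            # capacity units per resource type

def qos_satisfaction(alloc, samples=300):
    """Average, over slices, of the fraction of sampled delays within budget."""
    ratios = []
    for s, (comm, comp) in zip(SLICES, alloc):
        mu = 100.0 * min(comm, comp)        # bottleneck resource sets service rate
        rate = max(mu - s["lam"], 1e-3)     # M/M/1-style sojourn-time rate
        ok = sum(random.expovariate(rate) * 1000.0 <= s["budget_ms"]
                 for _ in range(samples))
        ratios.append(ok / samples)
    return sum(ratios) / len(ratios)

# Action = one (comm, comp) pair per slice, feasible w.r.t. both capacities.
pairs = list(itertools.product(range(1, CAP["comm"] + 1), range(1, CAP["comp"] + 1)))
actions = [a for a in itertools.product(pairs, repeat=len(SLICES))
           if sum(p[0] for p in a) <= CAP["comm"]
           and sum(p[1] for p in a) <= CAP["comp"]]

q = {a: 0.0 for a in actions}   # single-state Q-table (stand-in for a deep Q-network)
eps, alpha = 0.3, 0.1
for _ in range(5000):
    a = random.choice(actions) if random.random() < eps else max(q, key=q.get)
    r = qos_satisfaction(a)     # reward = QoS satisfaction ratio
    q[a] += alpha * (r - q[a])  # incremental value update

best = max(q, key=q.get)
print(f"best allocation {best}, estimated QoS satisfaction {q[best]:.3f}")

    A full implementation along the lines of the abstract would replace the table with a neural Q-function (or an actor-critic policy) whose state encodes per-slice traffic, but the reward shape, the fraction of packets meeting each slice's delay budget, would stay the same.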

    Abstract (Chinese)
    Abstract (English)
    List of Figures
    List of Tables
    Chapter 1 Introduction
    Chapter 2 Background
        2.1 Deep reinforcement learning
        2.2 Related work
    Chapter 3 System model and problem formulation
        3.1 System model
        3.2 Problem statement
    Chapter 4 Approach (DRL-MQS)
        4.1 The DRL-MQS concept
        4.2 The DRL-MQS algorithm
            4.2.1 Overview
            4.2.2 The module of the DRL-MQS agent
    Chapter 5 Evaluation
        5.1 Scenarios and parameters
        5.2 The effects of packet arrival rate for three resources
        5.3 The effects of packet arrival rate for four resources
        5.4 The effects of training window
        5.5 The effects of service rate between UE and edge
    Chapter 6 Conclusion and future work
    References


    Full-text release date: 2033/03/29 (campus network)
    Full-text release date: 2033/03/29 (off-campus network)
    Full-text release date: 2033/03/29 (National Central Library: Taiwan thesis and dissertation system)