
Graduate Student: 黃昭穎 (Chao-Yinh Huang)
Thesis Title: 於具行動邊緣運算網路中基於深度強化學習之考量差別服務快取及延遲限制的合作式工作卸載
(Deep Reinforcement Learning based Cooperative Task Offloading Considering Prioritized Service Caching and Delay Constraints in the MEC-Enabled Network)
Advisor: 馮輝文 (Huei-Wen Ferng)
Committee Members: 林嘉慶 (Jia-Chin Lin), 張宏慶 (Hung-Chin Jang), 范欽雄 (Ging-Xiong Fan)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2022
Graduation Academic Year: 110
Language: Chinese
Number of Pages: 43
Keywords: Internet of Things, Task Offloading, Mobile Edge Computing, Service Caching, Reinforcement Learning

Mobile Edge Computing (MEC) provides users with powerful computing capability and data storage space, and combining it with a task offloading mechanism can effectively improve the user experience, while the placement of caches further reduces unnecessary transmission time. Moreover, in real environments an edge server often has to serve multiple users simultaneously, so under limited resources some tasks with stringent delay requirements cannot be completed in time, degrading the user experience. This thesis therefore proposes a cooperative offloading method that treats tasks differentially according to their delay requirements, computation loads, and the popularity of the requested services among user devices. By evaluating the characteristics and demands of services, the method adjusts the cache placement and the execution order of offloaded tasks, aided by a reinforcement learning algorithm, to raise the proportion of offloaded tasks completed within their required time. In addition, the waiting time of offloading requests is also considered, preventing excessive delay for some offloaded tasks. Finally, simulations under different cache sizes and different numbers of services confirm that the proposed method effectively increases the proportion of offloaded tasks completed within the required time, and that, compared with other deep-reinforcement-learning-based methods, it also performs better in terms of latency and cache hit rate.


Mobile edge computing (MEC) provides users with powerful computing capacity and larger storage space. Combined with the mechanism of task offloading, MEC can efficiently improve the user experience and reduce unnecessary transmission time. In a real environment, edge servers usually have to serve many users at a time, so some tasks with stringent delay requirements may not be completed in time because of the limited resources of edge servers, resulting in a worse user experience. Therefore, we propose a cooperative offloading method along with cache placement that treats tasks differentially according to the delay requirement, computation load, and popularity of services. We adjust the cache placement and execution order of tasks by considering the characteristics and requirements of services, with the help of deep reinforcement learning, to increase the ratio of offloaded tasks completed before their deadlines. In addition, we take the task waiting time into consideration to avoid long latency for some offloading requests. Finally, our simulation results confirm that the proposed method efficiently increases the ratio of offloaded tasks completed before their deadlines, both under different cache sizes and under different numbers of services. Compared with closely related methods in the literature, the proposed method outperforms them in terms of latency and cache hit rate.
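According to the table of contents, the learning core of the thesis is Double DQN (Section 3.4.1). As a rough illustration of the idea that distinguishes Double DQN from vanilla DQN — selecting the next action with the online network but evaluating it with the target network to curb Q-value overestimation — here is a minimal NumPy sketch. The function name, batch layout, and toy numbers are illustrative assumptions, not the thesis's actual state/action design:

```python
import numpy as np

def double_dqn_targets(rewards, q_online_next, q_target_next, gamma=0.99, done=None):
    """Double DQN targets: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    rewards:        (batch,) immediate rewards
    q_online_next:  (batch, n_actions) online-network Q-values at next states
    q_target_next:  (batch, n_actions) target-network Q-values at next states
    done:           (batch,) 1.0 where the episode ended (no bootstrap term)
    """
    if done is None:
        done = np.zeros_like(rewards)
    # Action selection by the online network ...
    best_actions = np.argmax(q_online_next, axis=1)
    # ... but evaluation by the target network, which reduces overestimation.
    bootstrap = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - done) * bootstrap

# Toy batch of 2 transitions with 3 actions each (numbers are made up).
r = np.array([1.0, 0.0])
q_on = np.array([[0.1, 0.9, 0.3], [0.5, 0.2, 0.4]])
q_tg = np.array([[0.2, 0.7, 0.6], [0.3, 0.1, 0.8]])
y = double_dqn_targets(r, q_on, q_tg, gamma=0.9)  # -> [1.63, 0.27]
```

In a full agent, the targets y would be regressed against the online network's Q-values for the taken actions; the target network is refreshed from the online network only periodically.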

Advisor's Recommendation Letter
Committee Approval Certificate
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
  1.1 Research Background
  1.2 Overview of Mobile Computing Technologies
    1.2.1 Mobile Cloud Computing
    1.2.2 Mobile Edge Computing
  1.3 Task Offloading
  1.4 Service Caching
  1.5 Research Motivation
  1.6 Thesis Organization
Chapter 2: Related Work
  2.1 Energy Efficiency Optimization
  2.2 Delay Optimization Algorithms
  2.3 Offloading Combined with Deep Reinforcement Learning
    2.3.1 DDPG-Based Offloading Mechanisms
    2.3.2 PPO-Based Offloading Mechanisms
Chapter 3: Method Design and Workflow
  3.1 Problem Description
  3.2 System Model
    3.2.1 System Architecture
    3.2.2 System Constraints
  3.3 Method Design
    3.3.1 Design of Prioritized Service Caching
    3.3.2 Priority Queue
    3.3.3 Delay Calculation
  3.4 Adopted Deep Reinforcement Learning
    3.4.1 Double DQN
    3.4.2 State Space and Action Space
    3.4.3 Reward Function and Loss Function
  3.5 Method Workflow
Chapter 4: Simulation Results
  4.1 Simulation Environment and Parameter Settings
  4.2 Result Comparison and Discussion
    4.2.1 Impact of Cache Size
    4.2.2 Impact of the Number of Services
    4.2.3 Fairness of the Waiting Queue
Chapter 5: Conclusion
References
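Section 4.2.3 evaluates the fairness of the waiting queue, and the bibliography cites Jain's classic fairness measure: for allocations x_1, ..., x_n it is J = (Σ x_i)² / (n · Σ x_i²), ranging from 1/n (one user monopolizes the resource) to 1 (perfect equality). A minimal sketch, with an illustrative function name:

```python
def jain_fairness(values):
    """Jain's fairness index over a list of per-user quantities
    (e.g. waiting times or allocated resources). Returns a value in
    [1/n, 1]; 1 means perfectly equal treatment."""
    n = len(values)
    total = sum(values)
    sum_sq = sum(v * v for v in values)
    return (total * total) / (n * sum_sq) if sum_sq > 0 else 1.0

# Equal waiting times are perfectly fair:
jain_fairness([2.0, 2.0, 2.0])  # -> 1.0
# One user getting everything yields the minimum, 1/n:
jain_fairness([1.0, 0.0, 0.0])  # -> 1/3
```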

[1] Q. Luo, S. Hu, C. Li, G. Li, and W. Shi, “Resource scheduling in edge computing: A survey,” IEEE Communications Surveys & Tutorials, vol. 23, no. 4, pp. 2131–2165, Aug. 2021.
[2] D. Georgakopoulos, P. P. Jayaraman, M. Fazia, M. Villari, and R. Ranjan, “Internet of things and edge cloud computing roadmap for manufacturing,” IEEE Cloud Computing, vol. 3, no. 4, pp. 66–73, Sep. 2016.
[3] K. Peng, V. Leung, X. Xu, L. Zheng, J. Wang, and Q. Huang, “A survey on mobile edge computing: Focusing on service adoption and provision,” Wireless Communications and Mobile Computing, vol. 2018, Oct. 2018.
[4] S. Barbarossa, S. Sardellitti, and P. Di Lorenzo, “Communicating while computing: Distributed mobile cloud computing over 5G heterogeneous networks,” IEEE Signal Processing Magazine, vol. 31, no. 6, pp. 45–55, Oct. 2014.
[5] G. Zhang, S. Zhang, W. Zhang, Z. Shen, and L. Wang, “Joint service caching, computation offloading and resource allocation in mobile edge computing systems,” IEEE Transactions on Wireless Communications, vol. 20, no. 8, pp. 5288–5300, Mar. 2021.
[6] J. Xu, L. Chen, and P. Zhou, “Joint service caching and task offloading for mobile edge computing in dense networks,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, pp. 207–215, Apr. 2018.
[7] X. Yang, Z. Fei, J. Zheng, N. Zhang, and A. Anpalagan, “Joint multi-user computation offloading and data caching for hybrid mobile cloud/edge computing,” IEEE Transactions on Vehicular Technology, vol. 68, no. 11, pp. 11018–11030, Sep. 2019.
[8] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, “Mobile edge computing—A key technology towards 5G,” ETSI white paper, vol. 11, no. 11, pp. 1–16, Sep. 2015.
[9] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, Aug. 2017.
[10] P. A. Apostolopoulos, E. E. Tsiropoulou, and S. Papavassiliou, “Cognitive data offloading in mobile edge computing for Internet of things,” IEEE Access, vol. 8, pp. 55736–55749, Mar. 2020.
[11] Y. Hao, M. Chen, L. Hu, M. S. Hossain, and A. Ghoneim, “Energy efficient task caching and offloading for mobile edge computing,” IEEE Access, vol. 6, pp. 11365–11373, Mar. 2018.
[12] J. Zhang, Y. Shen, Y. Wang, X. Zhang, and J. Wang, “Dual timescale resource allocation for collaborative service caching and computation offloading in IoT systems,” IEEE Transactions on Industrial Informatics, pp. 1–11, Jun. 2022.
[13] S.-W. Ko, S. J. Kim, H. Jung, and S. W. Choi, “Computation offloading and service caching for mobile edge computing under personalized service preference,” IEEE Transactions on Wireless Communications, vol. 21, no. 8, pp. 6568–6583, Feb. 2022.
[14] X. Xia, F. Chen, Q. He, J. Grundy, M. Abdelrazek, and H. Jin, “Online collaborative data caching in edge computing,” IEEE Transactions on Parallel and Distributed Systems, vol. 32, no. 2, pp. 281–294, Aug. 2020.
[15] S. Bi, L. Huang, and Y.-J. A. Zhang, “Joint optimization of service caching placement and computation offloading in mobile edge computing systems,” IEEE Transactions on Wireless Communications, vol. 19, no. 7, pp. 4947–4963, Apr. 2020.
[16] W. Zhang, Z. Zhang, S. Zeadally, H.-C. Chao, and V. C. Leung, “MASM: A multiple-algorithm service model for energy-delay optimization in edge artificial intelligence,” IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4216–4224, Feb. 2019.
[17] B. Cao, L. Zhang, Y. Li, D. Feng, and W. Cao, “Intelligent offloading in multi-access edge computing: A state-of-the-art review and framework,” IEEE Communications Magazine, vol. 57, no. 3, pp. 56–62, Mar. 2019.
[18] H. Zhao, Y. Wang, and R. Sun, “Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems,” in 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), pp. 232–237, Jun. 2018.
[19] S. Nath and J. Wu, “Deep reinforcement learning for dynamic computation offloading and resource allocation in cache-assisted mobile edge computing systems,” Intelligent and Converged Networks, vol. 1, no. 2, pp. 181–198, Sep. 2020.
[20] Y. Liu, S. Wang, M. S. Obaidat, X. Li, and P. Vijayakumar, “Service chain caching and workload scheduling in mobile edge computing,” IEEE Systems Journal, vol. 16, no. 3, pp. 4389–4400, Nov. 2021.
[21] D. Ren, X. Gui, and K. Zhang, “Adaptive request scheduling and service caching for MEC-assisted IoT networks: An online learning approach,” IEEE Internet of Things Journal, vol. 9, no. 18, pp. 17372–17386, Mar. 2022.
[22] W. Feng, H. Liu, Y. Yao, D. Cao, and M. Zhao, “Latency-aware offloading for mobile edge computing networks,” IEEE Communications Letters, vol. 25, no. 8, pp. 2673–2677, Apr. 2021.
[23] Y. Zhu, Y. Hu, and A. Schmeink, “Delay minimization offloading for interdependent tasks in energy-aware cooperative MEC networks,” in 2019 IEEE Wireless Communications and Networking Conference (WCNC), pp. 1–6, Apr. 2019.
[24] Y. Chen, Y. Sun, B. Yang, and T. Taleb, “Joint caching and computing service placement for edge-enabled IoT based on deep reinforcement learning,” IEEE Internet of Things Journal, vol. 9, no. 19, pp. 19501–19514, Apr. 2022.
[25] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808, Oct. 2015.
[26] J. Yan, S. Bi, L. Duan, and Y.-J. A. Zhang, “Pricing-driven service caching and task offloading in mobile edge computing,” IEEE Transactions on Wireless Communications, vol. 20, no. 7, pp. 4495–4512, Feb. 2021.
[27] B. Jang, M. Kim, G. Harerimana, and J. W. Kim, “Q-learning algorithms: A comprehensive classification and applications,” IEEE Access, vol. 7, pp. 133653–133667, Sep. 2019.
[28] T. N. Sainath, O. Vinyals, A. Senior, and H. Sak, “Convolutional, long short-term memory, fully connected deep neural networks,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4580–4584, Apr. 2015.
[29] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015.
[30] H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, Mar. 2016.
[31] E. Liang, R. Liaw, R. Nishihara, P. Moritz, R. Fox, J. Gonzalez, K. Goldberg, and I. Stoica, “Ray RLlib: A composable and scalable reinforcement learning library,” arXiv preprint arXiv:1712.09381, vol. 85, Dec. 2017.
[32] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015. Software available from Tensorflow.org.
[33] C. Zhong, M. C. Gursoy, and S. Velipasalar, “Deep reinforcement learning-based edge caching in wireless networks,” IEEE Transactions on Cognitive Communications and Networking, vol. 6, no. 1, pp. 48–61, Jan. 2020.
[34] R. K. Jain, D.-M. W. Chiu, W. R. Hawe, et al., “A quantitative measure of fairness and discrimination,” Eastern Research Laboratory, Digital Equipment Corporation, Hudson, MA, vol. 21, Sep. 1984.

Full text available from 2024/09/29 (campus network)
Full text available from 2024/09/29 (off-campus network)
Full text available from 2024/09/29 (National Central Library: Taiwan NDLTD system)