
Author: 張芸甄
Yun-Chen Chang
Thesis Title: 在Device-Edge-Cloud運算環境中基於強化學習之卸載決策方法
Reinforcement Learning-based Offloading in a Device-Edge-Cloud Computing Environment
Advisor: 賴源正
Yuan-Cheng Lai
Committee: 賴源正
Yuan-Cheng Lai
Bor-Shen Lin
Yen-Hung Chen
Degree: Master's
Department: 管理學院 - 資訊管理系
College of Management - Department of Information Management
Thesis Publication Year: 2022
Graduation Academic Year: 110
Language: Chinese
Pages: 44
Keywords (in Chinese): 任務卸載、強化學習、QoS違反機率
Keywords (in other languages): Task Offloading, Reinforcement Learning, QoS Violation Probability
Reference times: Clicks: 214, Downloads: 0

Today's mobile applications demand ever more computing resources, yet a device's own computing resources are limited, so some tasks can be offloaded to the cloud or the edge; finding an optimal offloading strategy is therefore important. Most current research bases the offloading decision on the queue lengths of all network nodes. In a real environment, however, a device cannot know the status of other nodes: that information must be transmitted from those nodes, and the propagation delay prevents the device from obtaining it in real time. Moreover, most studies execute every task, even though some tasks cannot be completed within the delay constraint and still consume substantial computing resources. To solve these problems, we propose a reinforcement-learning-based offloading method, Reinforcement Learning-based Offloading with Outstanding Tasks (RLOO), which trains its model with a Deep Q Network (DQN). RLOO takes as its state the workloads of the outstanding tasks on the device itself and those offloaded to the edge and cloud, so the device can infer the current status of each service path from its own information and decide whether to offload a task. Beyond the offloading decision, RLOO can also drop tasks, saving computing resources and reducing the probability of timeout failure. RLOO assigns a reward according to whether a task is dropped and according to the task's service time, and updates its policy from the obtained rewards to find the optimal offloading strategy that minimizes the QoS violation probability. The results show that, when tasks may be dropped, the QoS violation probability of the method that considers only the device's queue length (DevQ) is 67.91% higher than that of the method that considers the queue lengths of all nodes (AllQ), whereas RLOO is only 21.53% higher than AllQ; moreover, RLOO with task dropping achieves a 36.79% lower QoS violation probability than RLOO without dropping.

Today's mobile applications require ever more computing resources, but a device's own computing resources are limited, so it may offload some tasks to the cloud or the edge. It is therefore important to find an optimal offloading strategy. Most current research makes offloading decisions based on the queue lengths of all network nodes, including devices, edges, and the cloud. In a realistic environment, however, a device cannot know the status of other nodes: that information must be transmitted from the other nodes, and the propagation delay prevents the device from obtaining it in real time. Moreover, most studies handle all tasks, even though some tasks cannot be completed within the delay constraint and still consume considerable computing resources. To solve these problems, we propose Reinforcement Learning-based Offloading with Outstanding Tasks (RLOO), which adopts the Deep Q Network (DQN). RLOO uses the workloads of the outstanding tasks on the device, edge, and cloud as the state, so the device can infer the current status of each service path from its own information and decide whether to offload a task. RLOO also supports task dropping to save computing resources and reduce the probability of task timeout. It assigns rewards according to whether a task is dropped and to the task's service time, and updates its policy from the obtained rewards to find the optimal offloading strategy, which minimizes the QoS violation probability. The results show that, when tasks can be dropped, the method considering only the device's queue length (DevQ) performs 67.91% worse than the method considering the queue lengths of all nodes (AllQ), while RLOO performs only 21.53% worse than AllQ. Moreover, RLOO with dropping performs 36.79% better than RLOO without dropping.
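The abstract describes RLOO's reinforcement-learning loop: a state built from the workloads of outstanding tasks on each service path, an action that either places a task (device, edge, cloud) or drops it, and a reward shaped by whether the task was dropped and whether its service time met the delay constraint. The sketch below illustrates that state/action/reward structure with a tabular Q-learning agent standing in for the thesis's DQN; all numeric parameters (learning rate, penalties, deadline) are hypothetical illustrations, not values from the thesis.

```python
import random
from collections import defaultdict

# Tabular Q-learning stand-in for RLOO's DQN (simplification: the thesis
# trains a neural network; a lookup table is used here for clarity).
# State: discretized workloads of outstanding tasks on the device, edge,
# and cloud paths -- information the device can track locally.
# Actions: execute locally, offload to edge, offload to cloud, or drop.
ACTIONS = ("local", "edge", "cloud", "drop")

# Hypothetical hyperparameters (not taken from the thesis).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
DROP_PENALTY = -5.0   # fixed penalty for dropping a task
DEADLINE = 4.0        # delay constraint; exceeding it violates QoS

def reward(action, service_time):
    """Reward in the spirit of RLOO: dropping earns a fixed penalty;
    otherwise the reward depends on whether the service time met the
    delay constraint."""
    if action == "drop":
        return DROP_PENALTY
    return 1.0 if service_time <= DEADLINE else -10.0

class QAgent:
    def __init__(self):
        self.q = defaultdict(float)   # (state, action) -> estimated value

    def act(self, state):
        # Epsilon-greedy choice over the four offloading actions.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, r, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = r + GAMMA * best_next
        self.q[(state, action)] += ALPHA * (td_target - self.q[(state, action)])
```

Under this shaping, tasks that would blow the deadline earn a worse reward than dropping them outright, which is why a policy trained on it learns to drop hopeless tasks early, mirroring the 36.79% improvement the thesis reports for RLOO with dropping.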

Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Table of Contents IV
List of Tables VI
List of Figures VII
Chapter 1 Introduction 1
Chapter 2 Background 4
  2.1 Reinforcement Learning 4
  2.2 Related Work on RL-based Offloading 5
Chapter 3 System and Problem Statement 9
  3.1 System Model 9
  3.2 Problem Statement 10
Chapter 4 Methodology 12
  4.1 The RLOO Method 12
  4.2 Deep Q Network 14
  4.3 Algorithm 15
Chapter 5 Experiments and Analysis 18
  5.1 Simulation Environment and Parameters 18
  5.2 Task Arrival Rate 20
  5.3 Workload 23
  5.4 Distance between Edge and Cloud 25
  5.5 Delay Tolerance 28
Chapter 6 Conclusion and Future Work 30
References 31


Full text public date 2025/09/22 (Intranet public)
Full text public date 2032/09/22 (Internet public)
Full text public date 2032/09/22 (National library)