
Author: He-Ru Chen
Title: Adaptive Context-aware Offloading and Energy-aware Governing for Achieving Energy-efficient Augmented Reality on Mobile Devices
Advisor: Ya-Shu Chen
Committee: Chin-Hsien Wu, Jen-Wei Hsieh
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2021
Graduation Academic Year: 109
Language: English
Pages: 45
Keywords: Mobile Augmented Reality, Energy Efficiency, Object Detection, Scheduling
Views: 1061; Downloads: 0


    Augmented reality (AR) applications integrate the virtual and real worlds to provide more intelligent interaction for users, thanks to advances in deep learning. Implementing AR applications on mobile devices is difficult because the high computational complexity of deep learning algorithms results in significant energy consumption on battery-constrained devices. Although server-assisted mobile augmented reality (MAR) systems leverage computation offloading, the unpredictable transmission delay caused by low network bandwidth degrades the accuracy of object detection and thus lowers the quality of experience for users. To minimize energy consumption while maintaining object-detection accuracy, we propose a framework comprising context-aware offloading and energy-aware governing for MAR systems. The context-aware offloader, consisting of a detection trigger and an offloading planner, maintains object-detection accuracy under varying network bandwidth and changing scenes. The energy-aware governor, consisting of a performance profiler, a core configurator, and a thread dispatcher, performs self-learning energy modeling, run-time DVFS/DPM, and thread management to achieve an energy-efficient MAR system. The proposed framework was implemented on a real platform with real workloads, and its energy consumption is significantly lower than that of state-of-the-art approaches.
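The offloading decision sketched in the abstract can be illustrated with a minimal example. This is a hypothetical sketch of the planner's core trade-off, not the thesis's actual algorithm: all names (`Estimates`, `transfer_ms`, `plan_offload`) and the cost values are illustrative assumptions. The idea is that remote detection is only worthwhile when the frame's transmission delay plus server compute time still meets the frame deadline; otherwise the system falls back to on-device inference.

```python
from dataclasses import dataclass

@dataclass
class Estimates:
    """Illustrative per-frame cost estimates (hypothetical values)."""
    local_latency_ms: float    # on-device inference latency
    remote_compute_ms: float   # server-side inference latency
    frame_bytes: int           # size of the encoded frame to upload

def transfer_ms(frame_bytes: int, bandwidth_mbps: float) -> float:
    # Transmission delay for one frame at the measured uplink bandwidth:
    # bytes -> bits, then divide by bits-per-millisecond (1 Mbps = 1000 bits/ms).
    return frame_bytes * 8 / (bandwidth_mbps * 1000)

def plan_offload(est: Estimates, bandwidth_mbps: float, deadline_ms: float) -> str:
    """Pick an execution target that meets the frame deadline, preferring
    the server (which hosts the heavier, more accurate model) when the
    network is fast enough, and degrading to local inference otherwise."""
    remote_latency = transfer_ms(est.frame_bytes, bandwidth_mbps) + est.remote_compute_ms
    if remote_latency <= deadline_ms:
        return "remote"
    return "local"

# Example: a 100 KB frame with a 66 ms deadline (~15 FPS).
est = Estimates(local_latency_ms=120.0, remote_compute_ms=30.0, frame_bytes=100_000)
print(plan_offload(est, bandwidth_mbps=50.0, deadline_ms=66.0))  # fast uplink -> "remote"
print(plan_offload(est, bandwidth_mbps=20.0, deadline_ms=66.0))  # slow uplink -> "local"
```

At 50 Mbps the upload takes 16 ms, so 16 + 30 = 46 ms fits the deadline; at 20 Mbps it takes 40 ms and 40 + 30 = 70 ms misses it. The real framework additionally weighs energy cost and scene dynamics (the detection trigger), which this sketch omits.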

    1 Introduction
    2 Related Work
      2.1 Computation Offloading
      2.2 Energy-saving Management
    3 System Model
      3.1 MAR Application
      3.2 System Architecture
    4 Approach
      4.1 Overview
      4.2 Context-aware Engine
        4.2.1 Detection Trigger
        4.2.2 Offloading Planner
      4.3 Energy-aware Governor
        4.3.1 Performance Profiler
        4.3.2 Core Configurator
        4.3.3 Thread Dispatcher
    5 Experimental Evaluation
      5.1 Experimental Setup
      5.2 Experimental Results
    6 Conclusion
    References


    Full text available from 2026/09/16 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)