
Author: Kuan-Lun Chen (陳冠綸)
Thesis Title: Development of an Autonomous Mobile Robot Navigation System with the Perception of Ground 2D Code and Depth Image (結合二維標籤與深度影像資訊之移動機器人自主導航系統開發)
Advisor: Chung-Hsien Kuo (郭重顯)
Committee Members: Han-Pang Huang (黃漢邦), Yi-Hung Liu (劉益宏), Meng-Kun Liu (劉孟昆)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2020
Academic Year of Graduation: 108
Language: Chinese
Pages: 93
Chinese Keywords: 虛擬雷達 (Virtual LiDAR), 高速影像辨識 (High-Speed Image Recognition), 室內導航 (Indoor Navigation)
Foreign Keywords: Virtual LiDAR, High-speed Camera, Indoor Navigation
  • Knowing the scene around the robot is an essential part of autonomous navigation. Two-dimensional and three-dimensional LiDAR sensors are currently the mainstream sensors for navigation systems, but their cost is high. This thesis therefore proposes an indoor navigation system that combines 2D-code image positioning with depth-image information, using computer vision to achieve indoor localization and to acquire 3D scene information. 2D tags are designed and attached to the floor of the environment; through real-time image recognition and positioning, the robot's pose is obtained and used to correct the accumulated error of the wheel odometry. In addition, to acquire scene information effectively, a virtual LiDAR system is designed that converts 3D spatial information into a 2D layout carrying scene-height information. To handle the camera's blind spots, the thesis also incorporates the concept of an occupancy grid map so that scene information around the robot is retained.
    For navigation, the Dynamic Window Approach (DWA) performs path planning, with odometry and 2D-code information serving as the correction reference during path following; the coordinates of obstacles relative to the robot are obtained from the virtual LiDAR.
    For implementation, a two-wheel mobile platform is built, equipped with an RGB-D camera and an industrial high-speed camera, and autonomous navigation is tested in different indoor scenes. The results show that the proposed method achieves autonomous navigation with successful obstacle avoidance while substantially reducing the platform's sensor cost.
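    The virtual-LiDAR idea described above — collapsing a 3D depth-camera point cloud into a 2D laser-like scan that still carries per-beam height — can be sketched as follows. This is a minimal illustration under assumed conventions (camera frame with x forward, y left, z up, in metres; a nearest-return-per-beam rule); the thesis' actual binning, coordinate frames, and occupancy-map fusion are more elaborate.

```python
import math

def virtual_lidar_scan(points, n_beams=180, fov=math.pi, max_range=5.0):
    """Collapse 3D points (x forward, y left, z up) into a 2D virtual
    laser scan, recording the obstacle height seen on each beam.
    Hypothetical sketch -- not the thesis' exact implementation."""
    ranges = [max_range] * n_beams   # unknown beams read as max range
    heights = [0.0] * n_beams
    for x, y, z in points:
        r = math.hypot(x, y)         # ground-plane distance to the point
        if r == 0.0 or r > max_range:
            continue
        bearing = math.atan2(y, x)   # angle in [-fov/2, +fov/2]
        if abs(bearing) > fov / 2:
            continue
        beam = int((bearing + fov / 2) / fov * (n_beams - 1))
        if r < ranges[beam]:         # nearest return wins the beam
            ranges[beam] = r
            heights[beam] = z
    return ranges, heights
```

    Keeping only the nearest return per beam mimics a planar laser scanner, while the stored height lets the planner distinguish, say, a low step from a tall obstacle on the same bearing.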


    Obtaining the scene information around the robot is an important part of autonomous navigation. Two-dimensional and three-dimensional LiDAR sensors are the mainstream choice on the current market; however, they are costly. Hence, an indoor navigation system combining 2D-code image positioning with depth-image information is proposed in this study, using computer vision technology to accomplish indoor navigation and acquire 3D scene information. 2D code tags are fixed on the ground, and a real-time image recognition and positioning system resolves the current position of the robot, reducing the influence of accumulated odometry error. To obtain scene information effectively, this study designs a virtual LiDAR system that converts 3D information into a 2D layout with height information; to solve the camera's blind-spot problem, it further integrates the concept of an occupancy grid map to retain the scene information around the robot. In the navigation stage, the Dynamic Window Approach executes path planning, with odometry and 2D-code information applied as the correction reference for path following, while the virtual LiDAR provides the coordinates of obstacles relative to the robot.
    In practice, this study builds a two-wheel mobile platform with an RGB-D camera and an industrial high-speed camera. Self-navigation experiments under different scenarios show that the proposed approach effectively achieves autonomous navigation and obstacle avoidance while significantly reducing the sensor cost of the mobile platform.
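    The Dynamic Window Approach used for path planning can be sketched in a few lines: sample admissible (v, w) velocity pairs, forward-simulate each over a short horizon, discard colliding trajectories, and score the rest by goal heading, obstacle clearance, and speed. The sampling resolutions, weights, and collision margin below are illustrative assumptions, not the thesis' tuned parameters.

```python
import math

def dwa_step(pose, goal, obstacles, v_max=0.5, w_max=1.5, dt=0.1, horizon=1.0):
    """One DWA step for a unicycle robot: return the best (v, w) pair.
    Weights and sample counts are illustrative only."""
    best, best_score = (0.0, 0.0), -math.inf
    steps = int(horizon / dt)
    for i in range(5):                        # sampled linear velocities
        v = v_max * i / 4
        for j in range(9):                    # sampled angular velocities
            w = -w_max + 2 * w_max * j / 8
            x, y, th = pose
            clearance = math.inf
            for _ in range(steps):            # forward-simulate the rollout
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(ox - x, oy - y))
            if clearance < 0.2:               # trajectory would collide
                continue
            heading = -abs(math.atan2(goal[1] - y, goal[0] - x) - th)
            score = 2.0 * heading + 1.0 * min(clearance, 1.0) + 0.5 * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best
```

    In the system described above, the obstacle list would come from the virtual-LiDAR scan, and the pose from odometry corrected by the ground 2D codes.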

    Advisor's Consent Form i
    Committee Approval Certificate ii
    Acknowledgements iii
    Abstract (Chinese) iv
    Abstract v
    Contents vi
    List of Tables viii
    List of Figures ix
    Nomenclature xiii
    Chapter 1 Introduction 1
      1.1 Background and Motivation 1
      1.2 Objectives 3
      1.3 Literature Review 4
        1.3.1 Obstacle Detection 4
        1.3.2 2D Tag Navigation and Vision-Based Navigation 5
        1.3.3 Autonomous Obstacle-Avoidance Navigation 7
      1.4 Thesis Organization 9
    Chapter 2 Experimental Platform and Control Design 10
      2.1 System Architecture 10
      2.2 Hardware Architecture 11
      2.3 Two-Wheel Odometry 14
      2.4 Display Interface 15
    Chapter 3 Image Positioning and Scene Construction 17
      3.1 2D Tag Recognition and Positioning System 17
        3.1.1 Image Preprocessing and Recognition 19
        3.1.2 Image Decoding and Positioning 26
      3.2 Obstacle Recognition System 30
        3.2.1 Depth-Map Processing and Point-Cloud Segmentation 31
        3.2.2 Virtual LiDAR 37
    Chapter 4 Autonomous Robot Navigation and Obstacle Avoidance 41
      4.1 Dynamic Window Approach for Obstacle Avoidance 41
      4.2 Obstacle Avoidance in Special Scenarios and Optimization 46
    Chapter 5 Analysis of Experimental Results 51
      5.1 2D Tag Recognition and Positioning Experiments 51
      5.2 Obstacle Detection Experiments 57
      5.3 Obstacle-Avoidance Simulations and Experiments 60
    Chapter 6 Conclusions and Future Work 75
      6.1 Conclusions 75
      6.2 Future Research Directions 75
    References 76


    Full-text release date: 2025/06/04 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: NDLTD Taiwan)