
Author: Yu-cheng Chang (張雨錚)
Thesis title: RGB-D and Wide-Angle Visual Sensor Integration and Fusion in Mobile Robot Navigation System
Advisor: Min-Fan Ricky Lee (李敏凡)
Committee members: Shih-Hsuan Chiu (邱士軒), Chih-Yung Cheng (鄭智湧)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Year of publication: 2012
Academic year of graduation: 100 (ROC calendar, 2011-2012)
Language: English
Number of pages: 100
Keywords: visual servo control, localization, mapping, sensor integration, navigation, wide-angle camera, RGB-D camera, Kinect, visual odometry, RANSAC, 3D modeling, depth map, path integration
Chinese abstract (translated): Since visual servo systems are already widely used in real homecare settings, how to build a navigation system for a homecare robot out of basic, commercially available imaging equipment and have it complete its missions has become a challenging topic. A global visual sensor is common in homecare applications, but furniture and other objects occlude its view, so a navigation system that relies on panoramic vision alone is not sufficiently robust or stable. Adding local vision to the homecare system is therefore necessary, and how to integrate the global and local maps to control the robot is likewise worth studying.
This thesis proposes an indoor visual control system for a wheeled robot (Pioneer 3-DX). The system uses a global vision system and a local vision system to steer the robot around obstacles and to the goal points. The global wide-angle vision system provides whole-scene information to the controller, and Dijkstra's algorithm is used to find the shortest path. The local vision system is an RGB-D camera, the Microsoft Kinect sensor, which resolves occlusions in the global view, builds a 3D model of the environment, and performs self-localization by visual odometry. The hidden goal points it discovers are used in map and path integration so that every goal point is reached.
The system completes its assigned missions effectively with only two cameras. The Kinect vision system supplies local information and localizes the robot when the global system cannot, so the navigation no longer depends on the global system alone and integrates the global and local paths. Experimental results show that the system can be extended to homecare applications.


Abstract: Visual servo systems have been widely applied in real homecare systems. How to navigate a robot and complete its missions in a real indoor homecare environment using basic vision devices has become a challenging problem. A global ceiling-mounted visual sensor is commonly used in homecare applications, but furniture and other objects occlude its view, so a navigation system that relies on global vision alone is not robust and stable enough. Adding a local vision system is therefore necessary, and how to integrate the global and local maps to control the robot is also an issue worth investigating.
This thesis proposes an indoor visual control system for a wheeled mobile platform (Pioneer 3-DX) that combines a global eye-to-hand system and a local eye-in-hand system to navigate the mobile robot around obstacles and to the goal. The global eye-to-hand wide-angle ceiling camera, called the global vision system here, provides global information to the controller, and Dijkstra's algorithm is used to compute the shortest path. The local eye-in-hand RGB-D camera (a Microsoft Kinect, called the local vision system) is responsible for resolving the occlusion problem, rebuilding a 3D model of the environment, and performing localization by visual odometry. Hidden target positions found by this search are then used in map and path integration so that all goals are reached.
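The shortest-path step mentioned above can be illustrated with a minimal Python sketch of Dijkstra's algorithm [8]. The waypoint graph, node names, and edge costs below are hypothetical stand-ins for a graph extracted from the ceiling-camera map, not data from the thesis.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph {node: [(neighbor, cost), ...]};
    returns (total_cost, path), or (inf, []) if the goal is unreachable."""
    frontier = [(0.0, start, [start])]  # min-heap ordered by accumulated cost
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical waypoint graph (edge costs in meters)
graph = {
    "start": [("A", 1.0), ("B", 2.5)],
    "A": [("goal", 3.0)],
    "B": [("goal", 1.0)],
}
print(dijkstra(graph, "start", "goal"))  # -> (3.5, ['start', 'B', 'goal'])
```

In the navigation system described here, the nodes would come from the free space mapped by the global wide-angle camera, with edge costs given by traversable distances.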
This system completes the mission effectively with just two camera devices. The Kinect vision system provides local information and localizes the robot pose when the global ceiling system is unavailable, so the navigation system no longer has to rely on the global system alone and integrates the global and local paths together. The results show that the system can be extended and applied to the homecare setting.
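The local localization step, visual odometry from matched RGB-D features, amounts to estimating a rigid transform between two 3D point sets in the presence of outliers. The sketch below combines RANSAC [14] with an SVD-based least-squares alignment, used here in place of Horn's quaternion closed form [22] for the same absolute-orientation problem; the 3xN input layout, iteration count, and 2 cm inlier threshold are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with Q ≈ R @ P + t for matched 3xN point sets
    (Kabsch/SVD solution of the absolute-orientation problem)."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T  # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def ransac_pose(P, Q, iters=200, thresh=0.02, seed=0):
    """RANSAC over minimal 3-point samples; refits on the largest inlier set.
    thresh is the inlier distance in meters (assumed Kinect depth units)."""
    rng = np.random.default_rng(seed)
    n, best = P.shape[1], None
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        R, t = rigid_transform(P[:, idx], Q[:, idx])
        err = np.linalg.norm(R @ P + t - Q, axis=0)  # per-point residuals
        inliers = err < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(P[:, best], Q[:, best])
```

Chaining the per-frame transforms gives the incremental camera path, which can then be mapped into the global coordinate frame of the ceiling camera as the thesis outlines.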

Table of contents:
ABSTRACT
Chinese Abstract (中文摘要)
List of Figures
List of Tables
Chapter 1 - Introduction
  1.1 Background and Motivation
  1.2 Literature Review
    1.2.1 Global vision system
    1.2.2 Local target searching system
    1.2.3 Simultaneous Localization and Mapping
  1.3 Purpose
  1.4 Contribution
  1.5 Organization
Chapter 2 - Analysis
  2.1 Mobile Robot
  2.2 Visual servo device
    2.2.1 Global visual servo system
    2.2.2 Local visual servo system
  2.3 Autonomous Visual Navigation
    2.3.1 Camera Calibration
    2.3.2 Feature Detection and Matching
    2.3.3 Visual Odometry
Chapter 3 - Method
  3.1 System overview
    3.1.1 Global visual system
    3.1.2 Local vision system
  3.2 Fundamental machine vision algorithms
    3.2.1 YCbCr color mode
    3.2.2 Threshold
    3.2.3 Image Morphology
    3.2.4 Shape fitting
    3.2.5 Distance transform
    3.2.6 Corner detection
    3.2.7 Camera Calibration
  3.3 Feature Matching
  3.4 Visual Odometry
    3.4.1 Incremental Reconstruction of Camera Path and Environment
    3.4.2 Absolute Orientation Problem
    3.4.3 Transformation to the Global Coordinate Frame
  3.5 Path Integration
    3.5.1 Global path planning
    3.5.2 Local and global path integration
Chapter 4 - Result
  4.1 Global mapping result
  4.2 Accuracy of Kinect
  4.3 3D environment modeling
  4.4 Visual Odometry
    4.4.1 Base motion localization
    4.4.2 Global hidden problem
  4.5 Navigation system without hidden targets
  4.6 Local hidden target searching
  4.7 Global and Local map integration
  4.8 Navigation system with hidden targets
Chapter 5 - Conclusion and Future work
  5.1 Conclusion
  5.2 Future work
References
Biography

    [1] C. L. Chen and M. R. Lee, "Global path planning in mobile robot using omnidirectional camera," in 2011 International Conference on Consumer Electronics, Communications and Networks, CECNet 2011, pp. 4986-4989, 2011.
    [2] Z. Y. Chang, "Goal seeking system based on panoramic vision technology in autonomous mobile robot," Master of Science, Graduate Institute of Automation and Control, National Taiwan University of Science and Technology, Taipei, 2011.
    [3] H. E. Lee, "Target tracking and following system for a wheeled mobile robot," Master of Science, Graduate Institute of Automation and Control, National Taiwan University of Science and Technology, Taipei, 2010.
    [4] C. Ivancsits, "Visual navigation system for small unmanned aerial vehicles," Master of Science, Graduate Institute of Automation and Control, National Taiwan University of Science and Technology, Taipei, 2010.
    [5] M. Fiala and A. Ufkes, "Visual odometry using 3-dimensional video input," in 2011 Canadian Conference on Computer and Robot Vision (CRV), pp. 86-93, 2011.
    [6] K. Ohno, T. Nomura, and S. Tadokoro, "Real-time robot trajectory estimation and 3D map construction using 3D camera," in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5279-5285, 2006.
    [7] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments," The International Journal of Robotics Research, vol. 31, pp. 647-663, 2012.
    [8] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik, vol. 1, pp. 269-271, 1959.
    [9] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: Binary robust independent elementary features," in 11th European Conference on Computer Vision, ECCV 2010, pp. 778-792, 2010.
    [10] D. Herrera C, J. Kannala, and J. Heikkila, "Accurate and practical calibration of a depth and color camera pair," in 14th International Conference on Computer Analysis of Images and Patterns, CAIP 2011, pp. 437-445, 2011.
    [11] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.
    [12] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, pp. 346-359, 2008.
    [13] D. Nister, O. Naroditsky, and J. Bergen, "Visual odometry," in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, pp. I652-I659, 2004.
    [14] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, pp. 381-395, 1981.
    [15] Y. C. Chang and M. R. Lee, "Global and local visual sensor integration on navigating the mobile robot," in submission.
    [16] A. Fitzgibbon, M. Pilu, and R. B. Fisher, "Direct least square fitting of ellipses," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 476-480, 1999.
    [17] A. W. Fitzgibbon and R. B. Fisher, "A buyer's guide to conic fitting," Proceedings of the 6th British conference on Machine vision, vol. 2, pp. 513-522, 1995.
    [18] J. Shi and C. Tomasi, "Good features to track," in Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 593-600, 1994.
    [19] E. Rosten and T. Drummond, "Fusing points and lines for high performance tracking," in Tenth IEEE International Conference on Computer Vision, ICCV 2005, vol. 2, pp. 1508-1515, 2005.
    [20] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in 9th European Conference on Computer Vision, ECCV 2006, pp. 430-443, 2006.
    [21] N. Burrus, RGBDemo [software, GNU Lesser General Public License], University Carlos III of Madrid. Available: http://labs.manctl.com/rgbdemo/index.php
    [22] B. K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," Journal of the Optical Society of America A, vol. 4, pp. 629-642, 1987.
