
Author: Sheau-Huey Chen (陳筱慧)
Thesis title: Global Localization and Local Path Planning of a Mobile Robot for Indoor Environments Using Stereo Vision and Ultrasonic Information
Advisor: Chin-Shyurng Fahn (范欽雄)
Committee members: Yu-Te Wu (吳育德), Yen-Tseng Hsu (徐演政), Jiann-Der Lee (李建德), Tsorng-Lin Chia (賈叢林)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of publication: 2008
Graduation academic year: 96 (ROC calendar; 2007-2008)
Language: English
Number of pages: 120
Keywords: global localization, obstacle detection, landmarks, ultrasonic sensors, stereo vision system, local path planning
In recent years, more and more researchers have devoted themselves to research on autonomous robots. Robots such as office robots, entertainment and companion robots, and home-security robots can assist people with tasks in daily life. Research topics in indoor robot navigation include perception of the environment, localization, path planning, map building, and motion execution. Moving accurately and safely in an indoor environment is one of the fundamental capabilities these robots need for navigation. The purpose of this thesis is to enable the robot to know its own location and the distances to the obstacles around it, so that it can steer around those obstacles without collision.

In the robot we constructed, the stereo vision system plays the key role in guidance, while the ultrasonic system assists the robot in moving safely. During guidance, we use stereo vision together with known landmarks to localize the robot. For obstacle detection, we combine the edge and color features of the image; in the experimental environment, cones of a specific color serve as obstacles. After obtaining environmental information from the stereo vision system and the ultrasonic sensors, the robot plans its local path, which allows it to travel safely and correctly. When the robot reaches the designated destination, it stops moving and ends the navigation.
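The ultrasonic assistance described above rests on a simple time-of-flight relation: a ranging sensor such as the Ping))) reports the round-trip echo time of a sound pulse, and the one-way distance follows from the speed of sound. A minimal sketch, assuming air at about 20 °C (the speed value and the example timing are illustrative, not this robot's actual driver code):

```python
# Convert an ultrasonic round-trip echo time into a distance.
# The pulse travels to the obstacle and back, so divide by two.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C (assumption)

def echo_time_to_distance(round_trip_s: float) -> float:
    """Return the one-way distance in metres for a measured echo time in seconds."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A round trip of about 5.831 ms corresponds to roughly 1 m to the obstacle.
d = echo_time_to_distance(0.005831)
```

In practice the speed of sound drifts with air temperature, which is one reason ultrasonic readings are treated here as a safety assist rather than the primary localization source.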

Our experiments show that the relative error of the global localization results ranges from 0° to 10.4° in orientation, from 0 to 0.21 m in lateral distance, and from 0.05 to 0.12 m in depth distance. These accurate global localization results are sufficient for the local path planning to guide the robot to travel safely in an indoor environment and to keep it on the expected trajectory.


In recent years, more and more researchers have taken an interest in the development of autonomous mobile robots. These robots, such as office robots, entertainment robots, and security robots, can help people in daily life. Robot navigation in indoor environments involves many issues, for example, perception of the environment, localization, path planning, map building, and action execution. Moving accurately and safely in an indoor environment is one of the important capabilities of robot navigation. The aim of this thesis is to let the robot know its own location and the positions of the obstacles surrounding it, and to guide the robot around those obstacles without collision.

On our experimental robot, the stereo vision system plays an important role during navigation, and the ultrasonic sensor system acts as an assistant that helps the robot move safely. We adopt stereo vision techniques to localize the robot from known landmarks as it navigates an indoor environment. For obstacle detection, we combine edge and color features; in the experiments, we use cones of a specific color as obstacles. After obtaining environmental information from the stereo vision system and the ultrasonic sensors, the robot plans its local path. The planned path keeps the robot moving safely and correctly, and the robot stops navigating once it reaches the destination we set as the goal.
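The color side of the edge-plus-color obstacle detection can be illustrated with a hue threshold in HSV space: pixels whose hue falls in the band of the cone color are kept as obstacle candidates. A minimal sketch, in which the orange hue band, the saturation cutoff, and the toy pixel list are all illustrative assumptions (the thesis's actual detector also uses edge features and morphological operations, which this omits):

```python
import colorsys

def mask_by_hue(pixels, hue_lo, hue_hi):
    """Return a 0/1 mask marking RGB pixels whose hue (0..1) lies in [hue_lo, hue_hi]
    and whose saturation is high enough to rule out gray background."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(1 if hue_lo <= h <= hue_hi and s > 0.4 else 0)
    return mask

# Two orange-ish pixels (cone-colored) and two gray ones; only the orange pair survives.
pixels = [(255, 120, 0), (250, 100, 10), (128, 128, 128), (30, 30, 30)]
print(mask_by_hue(pixels, 0.0, 0.15))  # [1, 1, 0, 0]
```

Working in HSV rather than RGB makes the threshold far less sensitive to illumination changes, since brightness is isolated in the V channel.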

The experiments reveal that the relative error of the global localization results ranges from 0° to 10.4° for the orientation, from 0 to 0.21 m for the lateral location, and from 0.05 to 0.12 m for the depth location. These global localization results are accurate enough for the local path planning, which guides the robot to move safely and keeps it on the expected trajectory in an indoor environment.
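As context for the depth figures above, a rectified stereo pair recovers depth through the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal sketch; the focal length, baseline, and disparity values below are illustrative assumptions, not this robot's calibration:

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth Z in metres of a point matched across a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.12 m baseline, 42 px disparity -> about 2 m.
z = depth_from_disparity(700.0, 0.12, 42.0)
```

Because depth varies inversely with disparity, a fixed one-pixel matching error costs more accuracy at long range than at short range, which is consistent with depth errors growing with distance.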

Contents

Chinese Abstract
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Overview
  1.2 Background and motivation
  1.3 System description
  1.4 Thesis organization
Chapter 2 Related Works
  2.1 Reviews of global localization
  2.2 Reviews of local path planning
Chapter 3 Range Sensors
  3.1 Stereo vision
    3.1.1 Height adjustment
    3.1.2 Similarity measurement
    3.1.3 3-D scene reconstruction
  3.2 Ultrasonic sensor
    3.2.1 Distance measurement of the Ping))) ultrasonic sensors
    3.2.2 The entity of the ultrasonic sensor system
    3.2.3 Characteristics of the Ping))) ultrasonic sensors
Chapter 4 Perception of the Environment
  4.1 Image preprocessing
    4.1.1 Edge detection
    4.1.2 Connected component labeling
    4.1.3 Edge features
  4.2 Obstacle detection
    4.2.1 Color space transformation
    4.2.2 Morphological operation
    4.2.3 Obstacle detection strategy
  4.3 Landmark extraction
    4.3.1 Thinning
    4.3.2 Hough transform
    4.3.3 Landmark extraction strategy
Chapter 5 Robot Navigation
  5.1 Global localization
    5.1.1 Coordinate systems
    5.1.2 Location estimation by parallel lines
    5.1.3 Location estimation by vertical lines
  5.2 Local path planning
    5.2.1 Free-space planning
    5.2.2 Corridor planning
Chapter 6 Experimental Results and Discussions
  6.1 System interface description
  6.2 Global localization results
  6.3 Navigation results
Chapter 7 Conclusions and Future Works
  7.1 Conclusions
  7.2 Future works
References
