
Graduate Student: 翟必君 (Bi-chun Tsai)
Thesis Title: Stereo Vision-Based Obstacle Avoidance for a Mobile Robot in an Indoor Environment (基於立體視覺之行動機器人的室內環境避障)
Advisor: 范欽雄 (Chin-Shyurng Fahn)
Committee Members: 王榮華 (Jung-Hua Wang), 林其禹 (Cheng-Wen Lin), 傅楸善 (Chiou-Shann Fuh), 范國清 (Kuo-Chin Fan)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2007
Graduation Academic Year: 95 (ROC calendar)
Language: English
Number of Pages: 84
Chinese Keywords: 立體視覺系統, 重建3D立體場景, 障礙物偵測, 障礙物閃避
Keywords: stereo vision system, 3D scene reconstruction, obstacle detection, obstacle avoidance
Chinese Abstract:
    With the rapid development of robots in recent years, they have begun to possess some thinking and recognition abilities, and can even imitate tasks that humans perform. In particular, the ability to move accurately and safely in an unknown environment is one of the essential capabilities of such robots. The purpose of this thesis is to enable a robot to know the distances and positions of the obstacles around it and to steer around them without collision. Compared with ultrasonic and infrared sensors, which suffer from narrow search angles and short observation distances, our approach of capturing images with CCD cameras makes the system more robust.
    The image processing techniques used in this thesis include average filtering, edge detection, dilation and erosion, and connected component labeling; they are used to detect obstacles in the environment. The disparity between the two cameras' images, caused by their different mounting positions, is then used to reconstruct the 3D positions of the obstacles. We first divide the right image into blocks and use the sum of absolute differences to find the corresponding block in the left image, thereby obtaining each block's displacement. We also apply several constraints to shrink the block search area and thus speed up block matching. Once a block's disparity is obtained, its actual 3D position is derived through linear algebraic calculations.
    Knowing how far the obstacles in the environment are from the robot, we can use this information to guide the robot around them and avoid collisions. First, the 3D environment information is projected onto a 2D (X-Z) map to serve as a reference for obstacle avoidance. The map is then divided into three groups representing the obstacle situations in front of, to the left of, and to the right of the robot. When an obstacle lies ahead, the robot judges whether the left or right side is safer and turns toward the safer side to avoid it.
    All the methods adopted in this thesis achieve accurate estimation under low computational cost, so that the robot's movement can be assisted in real time. With these methods, the reconstructed 3D positions of objects attain an accuracy of over 90%, and the robot can be guided to avoid the obstacles in its environment safely.


Abstract:
    With the rapid development of robots in recent years, they have acquired basic thinking and recognition abilities and can even imitate some tasks that humans do. Moving accurately and safely in an unknown environment is one of the most important capabilities of such robots. The aim of this thesis is to enable a robot to know the locations of the obstacles surrounding it and to avoid those obstacles in an indoor environment without collision. Unlike sonar and infrared sensors, which are limited in searching angle and observation distance, we employ CCD cameras to capture images, which increases the robustness of our 3D scene reconstruction system.
    In this thesis, we use image processing techniques, including average filtering, edge detection, dilation and erosion, and connected component labeling, to detect obstacles in an indoor environment. We then utilize the imaging disparity between the two cameras, which results from their different mounting positions, to accomplish 3D scene reconstruction. First, we conceptually divide the right image into blocks and adopt the sum of absolute differences (SAD) to find the corresponding blocks in the left image. To achieve this efficiently, we adopt several constraints that shrink the search areas, so the execution time of block matching is drastically reduced. Once the disparity of a pair of corresponding blocks is found, we reconstruct their real 3D location through algebraic calculations.
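    As a rough illustration of the block matching step described above, the following Python sketch finds the SAD disparity of one block along the same scan line of a rectified pair and converts it to depth with the standard stereo relation Z = f·B/d. The block size, search range, focal length, and baseline are illustrative assumptions, not the parameters used in the thesis:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def block_disparity(right_img, left_img, row, col, block=8, max_disp=32):
    """Find the horizontal disparity of the block at (row, col) in the
    right image by searching along the same scan line in the left image."""
    ref = right_img[row:row + block, col:col + block]
    best_d, best_cost = 0, float("inf")
    # Epipolar constraint: for a rectified pair, the match lies on the
    # same row, shifted horizontally in the left image.
    for d in range(max_disp + 1):
        if col + d + block > left_img.shape[1]:
            break
        cand = left_img[row:row + block, col + d:col + d + block]
        cost = sad(ref, cand)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(d, focal_px=700.0, baseline_m=0.12):
    """Standard rectified-stereo relation Z = f * B / d (d in pixels)."""
    return float("inf") if d == 0 else focal_px * baseline_m / d

# Tiny synthetic example: the left image is the right image shifted by 5 px.
rng = np.random.default_rng(0)
right = rng.integers(0, 256, size=(32, 64), dtype=np.uint8)
left = np.roll(right, 5, axis=1)
d = block_disparity(right, left, row=8, col=16)
print(d)                        # recovered disparity in pixels
print(depth_from_disparity(d))  # depth in meters under the assumed f and B
```

    The constraints mentioned in the abstract (searching only along the epipolar line and capping the disparity at `max_disp`) are what keep the inner loop one-dimensional and short.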
    With the locations of the obstacles surrounding the robot known, we can guide it to avoid the obstacles in an indoor environment without collision. First, we project the 3D locations onto a 2D (X-Z) map to provide reference information for obstacle avoidance. We then divide the map into three groups that represent the obstacle situations in front of, to the left of, and to the right of the robot. When there are obstacles ahead, the robot decides which side is safer and moves toward the safer area to avoid them.
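    The projection-and-grouping strategy above can be sketched as follows; the zone boundaries and the nearest-obstacle safety rule are illustrative assumptions, not the exact thresholds or decision rule of the thesis:

```python
def avoidance_direction(points_3d, z_limit=1.5, x_split=0.3):
    """Project 3D obstacle points (X, Y, Z) onto the X-Z ground plane and
    group them into left / front / right zones; if the front zone is
    blocked, steer toward the side whose nearest obstacle is farther away.
    Zone boundaries (x_split, z_limit in meters) are illustrative."""
    left, front, right = [], [], []
    for x, _, z in points_3d:          # drop Y: project onto the X-Z map
        if z > z_limit:
            continue                   # too far away to matter
        if x < -x_split:
            left.append(z)
        elif x > x_split:
            right.append(z)
        else:
            front.append(z)
    if not front:
        return "forward"               # nothing ahead: keep going
    # A side is considered safer when its nearest obstacle is farther away.
    nearest_left = min(left) if left else float("inf")
    nearest_right = min(right) if right else float("inf")
    return "left" if nearest_left >= nearest_right else "right"

# Obstacle straight ahead at 0.8 m, another to the left at 0.5 m:
print(avoidance_direction([(0.0, 0.2, 0.8), (-0.5, 0.1, 0.5)]))  # prints "right"
```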
    Although all the methods and algorithms proposed in this thesis are designed under the prerequisite of low computational cost, they attain precise 3D location estimation that helps the robot move in real time. The reconstruction accuracy of our method exceeds 90%, which is sufficient for guiding the robot to avoid the obstacles in an indoor environment safely.

    CHINESE ABSTRACT
    ABSTRACT
    ACKNOWLEDGEMENTS
    CONTENTS
    LIST OF FIGURES
    LIST OF TABLES
    CHAPTER 1 INTRODUCTION
      1.1 Overview
      1.2 Background and motivation
      1.3 System description
      1.4 Thesis organization
    CHAPTER 2 RELATED WORKS
      2.1 Reviews of stereo matching
      2.2 Reviews of obstacle detection
      2.3 Reviews of obstacle avoidance
    CHAPTER 3 STEREO SCENES RECONSTRUCTION
      3.1 Height adjustment
      3.2 Similarity measurement
      3.3 Disparity search space simplification
      3.4 3-D scenes reconstruction
    CHAPTER 4 OBSTACLE DETECTION AND AVOIDANCE
      4.1 Edge detection
      4.2 Morphological operation
      4.3 Connected component labeling
      4.4 Obstacle detection strategy
      4.5 Obstacle avoidance strategy
    CHAPTER 5 EXPERIMENTAL RESULTS AND DISCUSSIONS
      5.1 System interface description
      5.2 Obstacle detection and 3D reconstruction results
      5.3 Obstacle avoidance results
    CHAPTER 6 CONCLUSIONS AND FUTURE WORKS
      6.1 Conclusions
      6.2 Future works
    REFERENCES

