
Graduate Student: 黃茂勳 (Mao-Hsun Huang)
Thesis Title: 自主式移動機器人之立體視覺障礙物定位系統
Stereo Vision System for Obstacle Localization on Autonomous Mobile Robot
Advisor: 李敏凡 (Min Fan Lee)
Oral Defense Committee: 許新添 (Hsin-Teng Hsu), 柯正浩 (Cheng-Hao Ko)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Year of Publication: 2009
Academic Year of Graduation: 97 (2008-2009)
Language: English
Number of Pages: 83
Chinese Keywords: camera calibration, epipolar geometry, feature point matching, depth calculation
English Keywords: feature points matching

This thesis studies stereo-vision obstacle localization for an autonomous mobile robot, giving the robot a sense of distance in three-dimensional space so that it understands the geometric relationship between itself and its environment and can carry out more complex tasks. The stereo vision approach uses the disparity between two cameras and triangulation to compute distance. The work comprises camera calibration, construction of the spatial geometry, feature point matching, and depth calculation. First, the cameras are calibrated; the geometric relationship between the two calibrated cameras and the target is then used to match the image coordinates of the same object point in the two captured images; finally, the depth of each matched point is computed. These procedures build a nonlinear camera model applied to the robot's visual localization.
Camera calibration measures the intrinsic and extrinsic parameters of the cameras to obtain the coordinate relationship between objects in space and their images.
The spatial-geometry stage uses the epipolar geometry among the object and the two cameras to find the epipolar lines in the left and right images, which speeds up the matching between them.
Feature point matching exploits the similarity between the left and right images, using an algorithm to find the pixel coordinates, in each image, of the same point in space.
Depth calculation takes the disparity obtained from feature point matching and applies triangulation to find the depth.
Finally, the depth yields the three-dimensional coordinates of the environment, from which the positional relationship between obstacles and the mobile robot is determined for obstacle avoidance.


This thesis focuses on obstacle localization for an autonomous mobile robot using stereo vision. A robot equipped with a stereo vision system uses the geometric relation between itself and the surrounding environment to accomplish more complicated tasks. The stereo vision system applies triangulation to the disparity between the two views to calculate distance. The work comprises four procedures: camera calibration, epipolar geometry, feature point matching, and depth calculation. First, the cameras are calibrated; the epipolar geometry between the target and the two cameras is then used to match feature points in the two images captured by the cameras; subsequently, the depth of each feature point is calculated. Finally, a nonlinear camera model built from these four procedures localizes obstacles for the autonomous mobile robot.
Camera calibration measures the camera's intrinsic and extrinsic parameters and obtains the transformation between the image and world coordinate systems.
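As a sketch of the projection model behind calibration (the thesis's measured parameters are not reproduced here; the focal length, principal point, and camera pose below are made-up values), the intrinsic matrix K and the extrinsic rotation R and translation t map a world point to pixel coordinates:

```python
import numpy as np

# Hypothetical intrinsic parameters: focal length in pixels, principal point.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsic parameters: identity rotation, 0.1 m shift along x.
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])

def project(K, R, t, Xw):
    """Project a 3-D world point to pixel coordinates."""
    Xc = R @ Xw.reshape(3, 1) + t      # world frame -> camera frame
    uvw = K @ Xc                       # camera frame -> image plane
    return (uvw[:2] / uvw[2]).ravel()  # homogeneous divide

pt = project(K, R, t, np.array([0.0, 0.0, 2.0]))
```

Calibration is the inverse problem: given many known world points and their observed pixels, recover K, R, and t.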
By using the epipolar geometry among the object and its two corresponding image points, the epipolar line can be found, which reduces the correspondence search from the whole image to a single line.
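A minimal numeric illustration of this constraint, using made-up intrinsics and a pure horizontal baseline rather than the thesis's calibrated rig: the fundamental matrix F, built here from the essential matrix E = [t]×R, maps a left-image point x to the line l' = Fx on which its right-image match must lie.

```python
import numpy as np

# Hypothetical setup: identical intrinsics for both cameras; the right
# camera sits a baseline b = 0.1 m to the right, so t = -R*C = (-b, 0, 0).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
b = 0.1
t = np.array([-b, 0.0, 0.0])
R = np.eye(3)

tx = np.array([[0.0, -t[2], t[1]],
               [t[2], 0.0, -t[0]],
               [-t[1], t[0], 0.0]])   # skew-symmetric matrix [t]x
E = tx @ R                             # essential matrix
Kinv = np.linalg.inv(K)
F = Kinv.T @ E @ Kinv                  # fundamental matrix

def epipolar_line(F, x_left):
    """Right-image line l' = F x on which the match for x_left must lie."""
    return F @ np.append(x_left, 1.0)

# A point at 2 m depth projects to (320, 240) left and (280, 240) right;
# the pair must satisfy the epipolar constraint x'^T F x = 0.
l = epipolar_line(F, np.array([320.0, 240.0]))
residual = float(np.append(np.array([280.0, 240.0]), 1.0) @ l)
```

With this horizontal baseline the epipolar line is the scanline v = 240, which is exactly why matching can be restricted to one row of the second image.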
Feature matching uses the correspondence between the two images to find the two image points that view the same feature in the world coordinate system.
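According to the table of contents, the thesis combines a normalized cross-correlation coefficient with a relaxation method; the sketch below illustrates only the correlation step, on synthetic data with made-up window and search parameters, searching along a single scanline as rectified stereo allows.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_scanline(left, right, row, col, half=2, max_disp=20):
    """Find the right-image column whose patch best matches the left patch.

    Rectified images are assumed, so the search runs along one scanline."""
    patch_l = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = col, -1.0
    for d in range(0, max_disp + 1):
        c = col - d                    # positive disparity shifts the match left
        if c - half < 0:
            break
        patch_r = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(patch_l, patch_r)
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score

# Synthetic check: the right image is the left image shifted 5 px leftward,
# so a feature at left column 30 should match right column 25.
rng = np.random.default_rng(0)
left = rng.random((40, 60))
right = np.roll(left, -5, axis=1)
col_r, score = match_along_scanline(left, right, row=20, col=30)
```

The relaxation step in the thesis would then refine such candidate matches by enforcing consistency among neighbouring correspondences.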
Matching yields the disparity, to which triangulation is applied to obtain the depth. This depth then gives the 3-D coordinates of the surrounding environment, from which the distance between obstacle and robot is computed so that the robot can be commanded to avoid the obstacle.
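For a rectified pair, the triangulation reduces to Z = f·b/d, where f is the focal length in pixels, b the baseline, and d the disparity. A minimal sketch with the same made-up f and b as above (not the thesis's calibrated values):

```python
# Depth from disparity for a rectified stereo pair: Z = f * b / d.
f = 800.0   # hypothetical focal length, pixels
b = 0.1     # hypothetical baseline between the two cameras, metres

def depth_from_disparity(d):
    """Triangulated depth of a matched point with disparity d (pixels)."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * b / d

z = depth_from_disparity(40.0)  # a 40 px disparity corresponds to 2 m
```

Note the inverse relation: depth resolution degrades quadratically with distance, since a fixed one-pixel disparity error spans ever more depth as d shrinks.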

Chinese Abstract I
ABSTRACT II
Acknowledgements III
CONTENTS IV
List of Tables V
List of Figures VI
Chapter 1 Introduction 1
1.1 Background 1
1.2 Literature Review 7
1.3 Purpose 12
1.4 Thesis Organization 12
Chapter 2 Problem Analysis 14
2.1 Sensor problem analysis 14
2.2 Vision system problem analysis 16
2.3 Camera calibration 17
2.4 Epipolar geometry 19
2.5 Feature point matching 19
Chapter 3 Method 20
3.1 Perspective projection geometry 20
3.2 Camera coordinate system 21
3.2.1 Intrinsic Parameters 22
3.2.2 Extrinsic Parameters 25
3.3 Epipolar Geometry and Fundamental Matrix 27
3.3.1 Epipolar Geometry 27
3.3.2 Essential Matrix 28
3.3.3 Using 8-points to find Fundamental Matrix 30
3.4 Stereo Vision 31
3.5 Stereo vision measurement mathematical model 33
3.6 Experimental system architecture 35
3.7 Experimental system architecture 36
3.8 The Process of Stereo Vision 37
3.9 Detection Corners using X-corner Method 38
3.10 Camera calibration 41
3.11 Capture feature points 44
3.11.1 Sobel Edge detector 44
3.11.2 Color Sobel Edge Detector 47
3.12 Feature Point Matching 48
3.12.1 Normalize Cross Correlation Coefficient Method 49
3.12.2 Relaxation method 50
3.13 Depth Calculation 53
Chapter 4 Results 54
4.1 The process of distance measurement 54
4.2 X-corners detect method 55
4.3 Camera calibration 56
4.4 Essential Matrix 59
4.5 Using 8-Points to Capture Fundamental Matrix 60
4.6 Feature points matching 64
4.7 Depth Calculation 70
Chapter 5 Conclusion and Future Work 75
5.1 Conclusion 75
5.2 Future Work 75
REFERENCES 77
Biography 83
