
Graduate Student: Shau-dong Ren (任紹棟)
Thesis Title: A Mobile Robot with Stereo Vision Range Estimation (具立體視覺測距之移動機器人)
Advisor: Ching-Long Shih (施慶隆)
Committee Members: Chang-Huan Liu (劉昌煥), Hsin-Teng Hsu (許新添), Wen-Yo Lee (李文猶)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2007
Graduation Academic Year: 95 (ROC calendar; 2006-2007)
Language: Chinese
Number of Pages: 90
Keywords (Chinese): 移動式機器人、影像伺服、立體視覺
Keywords (English): mobile robot, visual servo, stereo vision
Access counts: 392 views, 19 downloads

The main objective of this thesis is to build a binocular vision system for a mobile robot, giving the robot the ability to sense distance in three-dimensional space. Calibration recovers the intrinsic matrices and extrinsic parameters of the two cameras, revealing the internal geometry of each camera and its placement in space. Binocular image processing locates targets with matching feature points, and the epipolar geometry constraint is used to verify that matched points are indeed the same point. The three-dimensional coordinates of the target are then computed by triangulation, and a coordinate-system transformation yields the distance between the target and the robot. Finally, the robot is successfully controlled to grasp the target and place it into a collection bin.
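The match-verification step described above can be illustrated with a minimal sketch. This is not the thesis's code: the camera geometry below (a rectified pair with identity intrinsics and a pure horizontal baseline) is invented for the example. For such a rig the essential matrix is the skew matrix of the translation, and a true correspondence satisfies x_r^T E x_l = 0:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical rectified rig: identity intrinsics, right camera shifted
# by baseline b along the x-axis, no rotation between the cameras.
b = 0.2
E = skew(np.array([1.0, 0.0, 0.0]))  # essential matrix (scale-free)

# A 3D point expressed in the left-camera frame.
P = np.array([1.0, 0.5, 4.0])

# Normalized homogeneous image points in each camera.
x_left = np.array([P[0] / P[2], P[1] / P[2], 1.0])
x_right = np.array([(P[0] - b) / P[2], P[1] / P[2], 1.0])

# A true correspondence satisfies the epipolar constraint.
residual = x_right @ E @ x_left
print(abs(residual) < 1e-12)   # True

# A mismatched point (different image row) violates it.
bad = x_right + np.array([0.0, 0.05, 0.0])
print(abs(bad @ E @ x_left) > 1e-3)   # True
```

In the rectified case the constraint reduces to requiring that the two image points lie on the same row; the thesis's general case uses the fundamental matrix, which folds the calibrated intrinsics into the same test.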


The purpose of this thesis is to build a binocular vision system for a mobile robot so that the robot has three-dimensional range-detection ability. By calibrating the intrinsic and extrinsic parameters of the two cameras, one can recover the internal geometry and the spatial relationship of the cameras. The binocular image-processing system finds matching feature points of objects, then checks whether they correspond to the same point based on the epipolar geometry constraint. The next step is to calculate the three-dimensional coordinates of the object by triangulation and to compute the distance between the robot and the object. Finally, the robot successfully grasps the object and places it into a target can.
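The range-estimation step can be sketched for the simplest case of a rectified stereo pair (parallel optical axes). The focal length and baseline below are made-up values, not the thesis's calibration results. Depth follows from the disparity d = u_l - u_r as Z = f·b/d, after which X and Y come from back-projecting the left pixel:

```python
# Hypothetical rectified stereo rig (illustrative values only):
f = 700.0   # focal length in pixels
b = 0.12    # baseline in meters

def triangulate(u_left, v_left, u_right):
    """Recover a 3D point (left-camera frame) from a rectified stereo match."""
    d = u_left - u_right          # disparity in pixels
    Z = f * b / d                 # depth from similar triangles
    X = u_left * Z / f            # back-project the left pixel
    Y = v_left * Z / f
    return X, Y, Z

# Round trip: project a known point, then recover it from its disparity.
X0, Y0, Z0 = 0.3, -0.1, 2.0
u_l, v_l = f * X0 / Z0, f * Y0 / Z0   # left-image pixel coordinates
u_r = f * (X0 - b) / Z0               # right-image column
print(triangulate(u_l, v_l, u_r))     # approximately (0.3, -0.1, 2.0)
```

The recovered point is in the left-camera frame; a final rigid transform into the robot's body frame, as in the thesis, then gives the robot-to-object distance.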

Table of Contents
Abstract (Chinese) · Abstract (English) · Acknowledgments · Table of Contents · List of Figures and Tables
Chapter 1  Introduction
  1.1 Preface
  1.2 Literature Review
  1.3 Research Motivation and Objectives
  1.4 Thesis Organization
Chapter 2  Mobile Robot System Architecture
  2.1 System Architecture
  2.2 Vehicle Body
    2.2.1 Power Circuit
    2.2.2 Control Core: dsPIC30F3013 Microcontroller
    2.2.3 Imaging Device
    2.2.4 Gripper Design
  2.3 Software System
  2.4 Coordinate-System Transformation
Chapter 3  Principles of Binocular Vision
  3.1 Camera Imaging Model
    3.1.1 Perspective Projection
    3.1.2 Camera Parameters
  3.2 Stereo Vision
  3.3 Epipolar Geometry and the Fundamental Matrix
    3.3.1 Epipolar Constraint
    3.3.2 Fundamental Matrix
    3.3.3 Fundamental Matrix Test Results
  3.4 Binocular Range Estimation
    3.4.1 3D Coordinates from Binocular Vision
    3.4.2 Camera Placement Angles
Chapter 4  Computer Vision and Image Processing
  4.1 Image Acquisition
  4.2 Color-Space Conversion
    4.2.1 RGB Color Space
    4.2.2 HSV Color Space
  4.3 Morphology
    4.3.1 Dilation and Erosion
    4.3.2 Opening and Closing
    4.3.3 Image Test Results
  4.4 Image Smoothing
  4.5 Edge Detection
  4.6 Hough Transform
    4.6.1 Hough Line Detection
    4.6.2 Randomized Hough Circle Detection
Chapter 5  Experimental Results
  5.1 Circle-Detection Procedure
  5.2 Binocular Vision Calibration
  5.3 Epipolar Constraint
  5.4 Distance Calculation
  5.5 Motion Decision
Chapter 6  Conclusions and Suggestions
  6.1 Conclusions
  6.2 Suggestions
References · Appendix · About the Author
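Section 4.6.2 of the outline covers randomized Hough circle detection, which the experiments in chapter 5 use to locate the circular target. A minimal sketch of the idea, sampling point triples, solving each triple's circumscribed circle, and voting for the most frequent quantized circle; the synthetic edge points and parameters here are invented for illustration and are not the thesis's data:

```python
import random
from collections import Counter
from math import cos, sin, pi, hypot

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c, d = 2 * (x3 - x1), 2 * (y3 - y1)
    e = x2**2 - x1**2 + y2**2 - y1**2
    f = x3**2 - x1**2 + y3**2 - y1**2
    det = a * d - b * c
    if abs(det) < 1e-12:          # collinear triple: no circle
        return None
    cx = (e * d - b * f) / det    # Cramer's rule on the bisector equations
    cy = (a * f - e * c) / det
    return cx, cy, hypot(x1 - cx, y1 - cy)

def detect_circle(points, trials=200, seed=0):
    """Vote over randomly sampled triples; return the most popular circle."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(trials):
        result = circumcircle(*rng.sample(points, 3))
        if result is not None:
            cx, cy, r = result
            votes[(round(cx), round(cy), round(r))] += 1
    return votes.most_common(1)[0][0]

# Synthetic "edge" points on a circle of center (50, 40), radius 20.
pts = [(50 + 20 * cos(2 * pi * k / 30), 40 + 20 * sin(2 * pi * k / 30))
       for k in range(30)]
print(detect_circle(pts))   # (50, 40, 20)
```

On real edge maps the votes spread across many bins, so a detection threshold and noise handling are needed; the sketch only shows the sampling-and-voting core that distinguishes the randomized variant from the classical accumulator-over-all-parameters Hough transform.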

