
Researcher: 張子源 (Tzu-yuan Chang)
Thesis title: 基於全景式影像技術之自主目標搜索移動機器人 (Goal Seeking System Based on Panoramic Vision Technology in Autonomous Mobile Robot)
Advisor: 李敏凡 (Min Fan Ricky Lee)
Committee members: 郭中豐 (Chung-Feng Jeffrey Kuo), 陳金聖 (Chin-Sheng Chen)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Year of publication: 2011
Graduation academic year: 99 (ROC calendar)
Language: English
Pages: 105
Chinese keywords: image stitching (影像黏貼), scale-invariant feature transform (尺度不變特徵轉換), random sample consensus (隨機抽樣一致性算法), cylindrical projection (圓柱轉換), mobile robot (移動式機器人)
Foreign keywords: image blending, HDF, cylindrical projection

Robots are already used in many fields and can take over dangerous work for people, such as space exploration, mine or sewer inspection, and even bomb disposal, so they need to be more versatile and to locate targets more precisely.
A typical robot uses a monocular camera to capture images within its field of view, or two cameras to build stereo vision and judge object distance, but neither approach provides complete environment information: only what lies within the camera's view is captured. This thesis uses a PTZ (pan-tilt-zoom) network camera to capture images at several angles and stitch them into a single 360-degree image, and then works with a P3DX mobile robot to locate targets from this complete environment information.
The thesis uses the highly distinctive feature (HDF) descriptor to find feature matches between two consecutive images, removes wrong matches with random sample consensus (RANSAC), and then computes the cylindrical projection, image alignment, image blending, and stitching for each image, repeating these steps until the panorama is complete.
From the finished 360-degree image, the distance from every target to the mobile robot can be determined, so the robot automatically touches all targets in order from nearest to farthest. Experimental results show that the robot detects all targets with no blind spots and moves directly to touch the nearest one.
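The nearest-target decision described above can be sketched as follows. The bearing simply maps a target's column in the 360-degree panorama to an angle; the 1/height distance model and its coefficients are hypothetical stand-ins for the thesis's calibrated linear regression, used here only for illustration:

```python
import numpy as np

def bearing_from_column(col: float, panorama_width: int) -> float:
    """Map a panorama column to a robot-relative bearing in degrees [0, 360)."""
    return 360.0 * col / panorama_width

def nearest_target(targets, coeffs, panorama_width):
    """Pick the closest target detected in a stitched 360-degree panorama.

    targets: list of (column, apparent_height_px) for each detected object.
    coeffs:  (a, b) of an assumed model distance = a / height + b, standing
             in for the thesis's regression from apparent size to distance.
    Returns (estimated distance, bearing) of the nearest target.
    """
    a, b = coeffs
    ranked = sorted(
        (a / h + b, bearing_from_column(c, panorama_width)) for c, h in targets
    )
    return ranked[0]
```

Ranking by estimated distance directly yields the nearest-to-farthest visiting order the thesis uses for touching all targets.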


Robots have been used in various fields and have taken over dangerous work from people, for instance space exploration, mine detection, and even bomb disposal. Robots therefore need to be more versatile and to sense targets more precisely.
A typical robot uses a monocular camera to capture images within the camera's field of view. Others use dual cameras to build stereo vision and estimate object distance. Neither approach can sense environment information outside the camera's view. We therefore use a PTZ (pan-tilt-zoom) camera to take several overlapping images in different directions and stitch them into a single 360-degree panoramic image. With the panoramic view, we can easily detect the objects of interest and drive the mobile robot toward the targets.
This thesis uses the highly distinctive feature (HDF) descriptor, based on the scale-invariant feature transform (SIFT), to find matching features between two overlapping images. Wrong matching pairs are then eliminated by RANSAC, followed by cylindrical projection, image alignment, and image blending. Finally, we stitch the images and locate the objects by ellipse fitting. Once the objects are found, the robot touches the closest one.
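Before the overlapping shots are stitched, each planar image is typically warped onto a cylinder so that a pure camera pan becomes a pure horizontal shift. A minimal NumPy sketch of this inverse cylindrical warp, assuming the focal length f is known in pixels and using nearest-neighbor sampling for brevity:

```python
import numpy as np

def cylindrical_warp(img: np.ndarray, f: float) -> np.ndarray:
    """Project a planar image onto a cylinder of radius f (focal length, px)."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # For every output (cylinder) pixel, compute its source image coordinate.
    ys, xs = np.indices((h, w)).astype(np.float64)
    theta = (xs - cx) / f          # angle around the cylinder axis
    hgt = (ys - cy) / f            # normalized height on the cylinder
    # Back-project cylinder coordinates onto the image plane.
    x_src = f * np.tan(theta) + cx
    y_src = f * hgt / np.cos(theta) + cy
    out = np.zeros_like(img)
    valid = (x_src >= 0) & (x_src < w - 1) & (y_src >= 0) & (y_src < h - 1)
    xi, yi = x_src[valid].astype(int), y_src[valid].astype(int)
    out[valid] = img[yi, xi]       # nearest-neighbor lookup
    return out
```

After this warp, two images taken by rotating the camera about its vertical axis differ (ideally) only by a horizontal offset, which is what makes the subsequent alignment step tractable.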
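RANSAC's role here, rejecting wrong feature matches before alignment, can be illustrated with a deliberately simplified model: rather than the thesis's full pipeline, this sketch estimates only a 2-D translation between two overlapping images from putative matches that contain outliers:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, rng=None):
    """Estimate a 2-D translation between matched keypoints with RANSAC.

    src, dst: (N, 2) arrays of putative feature matches (may contain outliers).
    Returns the translation vector and a boolean inlier mask.
    """
    rng = np.random.default_rng(rng)
    best_t, best_inliers = np.zeros(2), np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))        # a translation needs one sample
        t = dst[i] - src[i]
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # Refit on all inliers for the final estimate.
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

The same sample-score-refit loop generalizes to the homography estimation compared in Section 3.11; only the minimal sample size (four correspondences instead of one) and the model change.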

ABSTRACT II
Chinese Abstract III
List of Figures VIII
List of Tables XI
Appendix I
Chapter 1 – Introduction 1
1.1 Background 1
1.2 Literature Review 2
1.3 Purpose 3
1.4 Contribution 4
1.5 Structure Configuration of Thesis 4
Chapter 2 – Analysis of Panorama 6
2.1 Image Capturing 6
2.1.1 Single Camera 6
2.1.2 Multi-Cameras 7
2.1.3 Omni-Directional Camera 8
2.1.4 Stereo Camera 8
2.2 Image Processing 9
2.2.1 Noise Reduction 9
2.2.2 Camera Calibration 9
2.2.3 Feature Detection and Matching 10
2.2.4 Outlier Removal 11
2.3 Image Stitching 11
2.4 Image Blending 13
Chapter 3 – Method and Scenario 14
3.1 Equipment 15
3.1.1 Different Models of Camera 16
3.2 Taking Pictures from PTZ Camera 17
3.2.1 Control PTZ Camera 17
3.2.2 Re-Stitching Method 18
3.3 Camera Calibration 19
3.4 SIFT (Scale Invariant Feature Transform) 20
3.4.1 Extrema Detection 20
3.4.2 Eliminating Unstable Key-Points 22
3.4.3 Orientation Assignment 23
3.4.4 Key-Points Descriptor 24
3.5 Highly Distinctive Feature (HDF) Descriptor 25
3.5.1 HDF Image Processing 26
3.5.2 HDF Feature Orientation 26
3.6 RANSAC (Random Sample Consensus) 27
3.7 Cylindrical Projection 29
3.8 Image Alignment 31
3.9 Image Blending 32
3.10 Stitching Image 34
3.11 Comparing Different Stitching Algorithms 34
3.11.1 Computing the Homography Matrix 34
3.11.2 Perspective Transform 35
3.11.3 Stitching Images with Perspective Transform 36
3.11.4 Stitching Images with Cylindrical Projection 37
3.11.5 Comparing Results 37
3.12 Recognizing Objects 40
3.12.1 Gray Image 41
3.12.2 Filtering and Closing 41
3.12.3 Ellipse Fitting 42
3.13 Linear Regression 43
3.14 Angle Decision 44
3.15 Control Mobile Robot 46
Chapter 4 – Experimental Results 48
4.1 HDF Results 48
4.2 Linear Regression Data Collection 53
4.3 Experiment of Distance and Angle Accuracy 59
4.4 Mobile Robot Moving Result 64
Chapter 5 – Conclusion 66
5.1 Future Work 66
Appendix 68

