Student: 張子源 (Tzu-yuan Chang)
Thesis title: 基於全景式影像技術之自主目標搜索移動機器人 (Goal Seeking System Based on Panoramic Vision Technology in Autonomous Mobile Robot)
Advisor: 李敏凡 (Min Fan Ricky Lee)
Committee members: 郭中豐 (Chung-Feng Jeffrey Kuo), 陳金聖 (Chin-Sheng Chen)
Degree: Master
Department: College of Engineering, Graduate Institute of Automation and Control
Year of publication: 2011
Academic year of graduation: 99 (ROC calendar)
Language: English
Pages: 105
Chinese keywords: image stitching (影像黏貼), scale-invariant feature transform (尺度不變特徵轉換), random sample consensus (隨機抽樣一致性算法), cylindrical projection (圓柱轉換), mobile robot (移動式機器人)
Foreign keywords: image blending, HDF, cylindrical projection
Robots are now deployed in many fields and can take over dangerous work for humans, such as space exploration, mine and sewer inspection, and even bomb disposal. Robots therefore need to be more versatile and able to locate targets more precisely.
A typical robot uses a single camera to capture images within the camera's field of view, or two cameras to build stereo vision and judge object distance. Neither approach yields complete environmental information; only what lies inside the camera's view is available. This thesis uses a PTZ (pan-tilt-zoom) network camera to capture images at multiple angles and stitch them into a single 360-degree image; combined with a P3DX mobile robot, targets are then located from the complete environmental information.
This thesis uses a highly distinctive feature descriptor (HDF) to find matching features between two consecutive images and RANSAC (random sample consensus) to remove false matches. It then performs cylindrical projection, image alignment, image blending, and stitching for each image, repeating these steps until the panorama is complete.
From the stitched 360-degree image, the distance from every target to the mobile robot can be determined, letting the robot automatically touch all targets from nearest to farthest. Experimental results show that the robot can detect all targets without blind spots and move directly to touch the nearest one.
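The near-to-far visiting behaviour described above can be sketched as follows. The panorama width, target columns, and distances below are hypothetical illustration values, not data from the thesis:

```python
def plan_visits(panorama_width, targets):
    """Turn each detected target's column in a 360-degree panorama
    into a heading angle (degrees), then order the targets
    nearest-first so the robot touches them from near to far.

    targets: list of (column_px, distance_m) pairs.
    """
    deg_per_px = 360.0 / panorama_width
    plan = [(col * deg_per_px, dist) for col, dist in targets]
    return sorted(plan, key=lambda t: t[1])  # nearest target first
```

For example, `plan_visits(3600, [(900, 2.0), (1800, 0.5)])` puts the target at heading 180 degrees first, because it is only 0.5 m away.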
Robots have been used in various fields and have taken over dangerous human tasks, such as space exploration, mine detection, and even bomb disposal. Robots therefore need to be more versatile and to sense targets more precisely.
A typical robot uses a monocular camera to capture images within the camera's field of view. Others use dual cameras to build stereo vision and estimate object distance. Neither approach can sense environmental information outside the camera's view. We therefore use a PTZ (pan-tilt-zoom) camera to take several overlapping images in different directions and stitch them into a single 360-degree panoramic image. From the panoramic view we can readily detect the desired objects and steer the mobile robot toward the targets.
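Stitching shots taken under a pure camera rotation relies on first warping each flat image onto a cylinder. A minimal sketch of the standard forward cylindrical mapping follows; the focal length and image centre here are hypothetical parameters, not values from the thesis:

```python
import math

def cylindrical_coords(x, y, f, cx, cy):
    """Project a flat-image pixel (x, y) onto a cylinder of radius
    equal to the focal length f (in pixels), about centre (cx, cy).
    Standard forward cylindrical warp:
        x' = f * atan((x - cx) / f) + cx
        y' = f * (y - cy) / hypot(x - cx, f) + cy
    """
    theta = math.atan((x - cx) / f)        # horizontal angle on the cylinder
    h = (y - cy) / math.hypot(x - cx, f)   # height on the cylinder
    return f * theta + cx, f * h + cy
```

Pixels near the image centre are almost unchanged, while pixels near the left and right edges are compressed horizontally, which is what lets consecutive shots line up after a pure pan of the camera.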
This thesis uses a highly distinctive feature descriptor (HDF) based on the scale-invariant feature transform (SIFT) to find matching features between two overlapping images. It then eliminates false matches with RANSAC and performs cylindrical projection, image alignment, and image blending. Finally, the images are stitched together and the objects are located by ellipse fitting; once the objects are found, the robot touches the closest one.
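The outlier-rejection step can be illustrated with a small self-contained RANSAC sketch. It estimates a pure 2-D translation between two overlapping images from putative matches; the single-sample translation model and the toy match data are simplifying assumptions for illustration, not the full alignment model of the thesis:

```python
import random

def ransac_translation(matches, n_iter=200, tol=2.0, seed=0):
    """Estimate the (dx, dy) shift between two overlapping images
    from putative matches [((x1, y1), (x2, y2)), ...] via RANSAC:
    hypothesise a shift from one random match, count inliers,
    keep the best hypothesis, then refine it over its inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1            # one-sample hypothesis
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in matches
                   if abs((c - a) - dx) < tol and abs((d - b) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine: average the shift over the consensus (inlier) set
    dx = sum(c - a for (a, _b), (c, _d) in best_inliers) / len(best_inliers)
    dy = sum(d - b for (_a, b), (_c, d) in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

Given three matches sharing a common shift and one mismatched pair, the mismatch never joins the largest consensus set, so it is excluded from the refined estimate.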