
Author: Shu-Ting Lin
Title: Indoor Simultaneous Localization and Mapping Based on Single-center Cylindrical Panoramas
Advisor: Wei-Wen Kao
Committee members: Shiuh-Jer Huang, Chi-Ying Lin
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of publication: 2016
Graduation academic year: 104 (2015-2016)
Language: Chinese
Number of pages: 56
Keywords: cylindrical panorama, computer vision, image processing, image stitching, simultaneous localization and mapping (SLAM)
    This thesis builds a panoramic image with a single camera and uses that image for localization and mapping. For panorama construction, we use a webcam to capture eight directions from a fixed point, then combine the shots into a single cylindrical panorama through feature matching, image alignment, and image stitching. For localization and mapping, the thesis adopts a SLAM algorithm based on the Extended Kalman Filter: the panorama supplies 360-degree information about the surrounding environment, and the bearing relationships between the camera and nearby features are used to estimate both the camera's own position and the actual positions of those features.
    A panoramic image offers a 360-degree field of view. Besides capturing more environmental features, it allows features to be tracked continuously, avoiding two failure modes of a single forward-facing camera: poor observation quality when moving in a straight line, because the angular parallax between views of a feature is small, and lost feature points when the motion between frames is too large. The single-center cylindrical panorama in this thesis is captured with an ordinary consumer webcam, so its development cost is lower than that of omnidirectional or fisheye cameras; and because the resulting panorama has little distortion and is close to what the human eye actually sees, it can also be applied to the rapidly developing field of virtual reality.
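    The cylindrical projection behind this kind of panorama can be sketched in a few lines. The following is a minimal numpy illustration under the standard pinhole model with nearest-neighbour sampling; the focal length value used below is an assumption, and this is not the thesis's actual implementation:

```python
import numpy as np

def cylindrical_warp(img, f):
    """Warp an image onto a cylinder of focal length f (in pixels).

    Forward model: x' = f * atan(x / f), y' = f * y / sqrt(x^2 + f^2),
    with (x, y) measured from the image centre. We compute the inverse
    map and sample the source image with nearest-neighbour interpolation.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    yp, xp = np.mgrid[0:h, 0:w].astype(np.float64)
    theta = (xp - cx) / f                   # angle along the cylinder
    x_src = f * np.tan(theta) + cx          # invert the angular coordinate
    y_src = (yp - cy) / np.cos(theta) + cy  # invert the height coordinate
    valid = (x_src >= 0) & (x_src < w) & (y_src >= 0) & (y_src < h)
    xi = np.clip(np.round(x_src), 0, w - 1).astype(int)
    yi = np.clip(np.round(y_src), 0, h - 1).astype(int)
    out = np.zeros_like(img)                # out-of-range pixels stay black
    out[valid] = img[yi[valid], xi[valid]]
    return out
```

    A real stitcher would apply such a warp to each of the eight shots before aligning and blending them, and would use the calibrated focal length rather than a guess.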


    In this thesis we create a panoramic image with a single camera and use it for localization and mapping. In the panorama-generation part, we first use a webcam to capture images in eight directions from a fixed position, and then synthesize the multiple images into a panorama using feature matching, warping, and stitching. The composite image is called a single-center cylindrical panorama. In the localization and mapping part, we obtain environmental features over 360 degrees from the panoramic image and use an Extended Kalman Filter (EKF) SLAM algorithm to estimate the camera's own position and the actual locations of the environmental features.
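    The feature-matching step mentioned above boils down to comparing descriptor vectors between images. Below is a minimal sketch of nearest-neighbour matching with Lowe's ratio test, in plain numpy on made-up descriptor arrays rather than SURF output; it is an illustration, not code from the thesis:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return index pairs (i, j) where desc_a[i]'s nearest neighbour in
    desc_b is sufficiently closer than its second-nearest (ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]     # nearest and second-nearest
        if dists[j] < ratio * dists[k]:  # Lowe's ratio test
            matches.append((i, j))
    return matches
```

    The ratio test discards a match when the second-best candidate is almost as close as the best one, pruning ambiguous correspondences before the warp between images is estimated.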
    With a 360-degree field of view, we can obtain more environmental features from the panoramas and track them continuously, which also makes the SLAM system more stable. Our single-center cylindrical panoramas use a consumer webcam for image acquisition and therefore have a lower development cost than omnidirectional or fisheye cameras. In addition, a cylindrical panorama is close to what the human eye sees of the real world because it has less distortion, so it can be further applied to virtual reality systems.
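    As a rough illustration of the EKF-SLAM loop described in this abstract, here is a toy predict/update cycle for a planar robot and a single already-initialized landmark, observed only by its bearing, which is what a 360-degree panorama naturally provides. The state layout, motion model, and noise values are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

def ekf_predict(mu, P, v, w, dt, Q):
    """EKF prediction with a unicycle motion model for the robot part of
    the state mu = [x, y, phi, lx, ly]; the landmark (lx, ly) is static."""
    x, y, phi = mu[:3]
    mu = mu.copy()
    mu[0] = x + v * np.cos(phi) * dt
    mu[1] = y + v * np.sin(phi) * dt
    mu[2] = phi + w * dt
    F = np.eye(5)                      # Jacobian of the motion model
    F[0, 2] = -v * np.sin(phi) * dt
    F[1, 2] = v * np.cos(phi) * dt
    P = F @ P @ F.T + Q
    return mu, P

def ekf_update_bearing(mu, P, z, R):
    """EKF update with one bearing measurement to the landmark:
    z = atan2(ly - y, lx - x) - phi."""
    x, y, phi, lx, ly = mu
    dx, dy = lx - x, ly - y
    q = dx * dx + dy * dy
    z_hat = np.arctan2(dy, dx) - phi
    H = np.array([[dy / q, -dx / q, -1.0, -dy / q, dx / q]])
    # wrap the innovation to [-pi, pi] so angle jumps do not corrupt it
    innov = np.array([np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    mu = mu + (K @ innov).ravel()
    P = (np.eye(5) - K @ H) @ P
    return mu, P
```

    In a full system each feature extracted from the panorama would add two entries to the state vector, and the same bearing update would run once per observed feature.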

    Table of Contents
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1  Preface
        1.2  Research Method and Objectives
        1.3  Literature Review
        1.4  Thesis Organization
    Chapter 2  Image Processing for Panoramas
        2.1  Overview of Panoramic Images
        2.2  Camera Geometry [15]
            2.2.1  Camera Model
            2.2.2  Projective Transformation
            2.2.3  Camera Calibration
        2.3  Image Feature Matching
            2.3.1  SURF Feature Points
            2.3.2  Feature Matching
        2.4  Image Compositing
            2.4.1  Cylindrical Projection [19]
            2.4.2  Image Blending
    Chapter 3  Visual Localization and Mapping
        3.1  Introduction to SLAM
        3.2  Extended Kalman Filter
        3.3  System Model
            3.3.1  State Model
            3.3.2  Observation Model
    Chapter 4  Experimental Results and Analysis
        4.1  Experimental Equipment
        4.2  Experimental Procedure
        4.3  Path Estimation and Mapping
            4.3.1  Path 1: Rectangular Path
            4.3.2  Path 2: Circular Path
        4.4  Discussion of Results
    Chapter 5  Conclusions and Future Outlook
        5.1  Conclusions
        5.2  Ideas and Suggestions
        5.3  Future Work
    References

    References
    [1] R.C. Smith and P. Cheeseman, “On the Representation and Estimation of Spatial Uncertainty”, The International Journal of Robotics Research, pp. 56-68, 1986.
    [2] R.C. Smith, M. Self and P. Cheeseman, “Estimating Uncertain Spatial Relationships in Robotics”, Autonomous Robot Vehicles, pp. 167-193, 1990.
    [3] J.J. Leonard and H. Durrant-Whyte, “Simultaneous Map Building and Localization for an Autonomous Mobile Robot”, Proceedings of the IEEE/RSJ International Workshop on IROS, pp. 1442-1447, 1991.
    [4] H. Durrant-Whyte and T. Bailey, “Simultaneous Localization and Mapping: Part I”, IEEE Robotics and Automation Magazine, pp. 99-110, June 2006.
    [5] T. Bailey and H. Durrant-Whyte, “Simultaneous Localization and Mapping (SLAM): Part II”, IEEE Robotics and Automation Magazine, pp. 108-117, September 2006.
    [6] C.G. Harris and M. Stephens, “A Combined Corner and Edge Detector”, Proceedings of the 4th Alvey Vision Conference, pp. 147-151, 1988.
    [7] J. Shi and C. Tomasi, “Good Features to Track”, 9th IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, 1994.
    [8] D.G. Lowe, “Object Recognition from Local Scale-Invariant Features”, Proceedings of the International Conference on Computer Vision, pp. 1150-1157, 1999.
    [9] D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, pp. 91-110, 2004.
    [10] H. Bay, T. Tuytelaars and L. Van Gool, “SURF: Speeded-Up Robust Features”, Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
    [11] J.M.M. Montiel, J. Civera and A.J. Davison, “Unified Inverse Depth Parametrization for Monocular SLAM”, Robotics: Science and Systems (RSS), Philadelphia, 2006.
    [12] S.Y. Hwang and J.B. Song, “Monocular Vision-Based SLAM in Indoor Environment Using Corner, Lamp, and Door Features From Upward-Looking Camera”, IEEE Transactions on Industrial Electronics, Vol. 58, No. 10, pp. 4804-4812, October 2011.
    [13] Y. Yagi, Y. Nishizawa and M. Yachida, “Map-Based Navigation for a Mobile Robot with Omnidirectional Image Sensor COPIS”, IEEE Transactions on Robotics and Automation, Vol. 11, pp. 634-648, October 1995.
    [14] J. Gaspar, N. Winters and J. Santos-Victor, “Vision-Based Navigation and Environmental Representations with an Omnidirectional Camera”, IEEE Transactions on Robotics and Automation, Vol. 16, No. 6, pp. 890-898, December 2000.
    [15] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., New York, USA: Cambridge University Press, 2003.
    [16] Z. Zhang, “A Flexible New Technique for Camera Calibration”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, pp. 1330-1334, November 2000.
    [17] M. Muja and D.G. Lowe, “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”, International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, February 2009.
    [18] M. Fischler and R. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography”, Communications of the ACM, pp. 381-395, June 1981.
    [19] R. Szeliski, Computer Vision: Algorithms and Applications (Texts in Computer Science), Springer-Verlag London, 2011.
    [20] P. Pérez, M. Gangnet and A. Blake, “Poisson Image Editing”, ACM Transactions on Graphics, pp. 313-318, 2003.
