
Graduate Student: Hao-Wei Tsai (蔡浩葦)
Thesis Title: MonoSLAM based on RGB-D Camera (利用RGB-D相機的單一相機定位與建圖)
Advisor: Wei-Wen Kao (高維文)
Committee Members: Liang-Kuang Chen (陳亮光), Min-Fan Lee (李敏凡)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2014
Academic Year of Graduation: 102 (ROC calendar)
Language: Chinese
Number of Pages: 70
Keywords (Chinese): 測距相機、同時定位與建圖、卡爾曼濾波器
Keywords (English): RGB-D Camera, SLAM, Kalman Filter
    Localization using a single camera is known as "MonoSLAM". Although MonoSLAM has been under development for some time, the traditional formulation cannot obtain true depth information for image feature points; the depth can only be assumed or derived approximately by other means, which degrades both the convergence speed and the accuracy of the state estimation.

    This study uses an RGB-D camera (Microsoft Kinect) as the environmental perception sensor. Combining image feature points with the measured depth information improves the correctness of feature matching, so that the same feature points can be identified across consecutive images taken during motion; from the changes in the positions of these feature points relative to the camera, simultaneous localization and mapping (SLAM) is achieved. Furthermore, the depth information provided by the RGB-D camera simplifies the state equations of the Extended Kalman Filter and reduces the computation time, and it mitigates the localization error that arises in traditional MonoSLAM because image feature points lack depth information.
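
    The two uses of measured depth described above, initialising landmarks at their metric positions and screening feature points by depth validity, can be sketched briefly. The following Python snippet is a minimal illustration rather than code from the thesis; the pinhole intrinsics, the assumed depth range, and the function names are placeholders chosen for the example.

import numpy as np

# Illustrative Kinect-style pinhole intrinsics; the calibration values used
# in the thesis are not reproduced here, so these numbers are assumptions.
FX, FY = 525.0, 525.0        # focal lengths [pixels]
CX, CY = 319.5, 239.5        # principal point [pixels]
DEPTH_RANGE_M = (0.4, 4.0)   # assumed usable Kinect depth range [m]

def back_project(u, v, depth_m):
    """Back-project pixel (u, v) with a measured depth (metres) to a 3-D
    point in the camera frame. Because the RGB-D sensor observes depth
    directly, a new landmark can be initialised at its metric position
    instead of the assumed or approximated depth used in classical MonoSLAM."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def keypoints_with_valid_depth(keypoints, depth_map):
    """Depth filtering of feature points: keep only keypoints whose pixel
    carries a plausible depth reading, and return each surviving keypoint
    together with its back-projected 3-D position."""
    lo, hi = DEPTH_RANGE_M
    result = []
    for (u, v) in keypoints:
        d = float(depth_map[int(v), int(u)])
        if lo <= d <= hi:            # zero / out-of-range readings are rejected
            result.append(((u, v), back_project(u, v, d)))
    return result

if __name__ == "__main__":
    # Toy example: a synthetic 480x640 depth image at 2 m and two keypoints.
    depth = np.full((480, 640), 2.0)
    depth[100, 200] = 0.0            # simulate a missing depth reading
    kps = [(200.0, 100.0), (400.0, 260.0)]
    print(keypoints_with_valid_depth(kps, depth))

    Because each surviving landmark enters the filter with a metric 3-D position from the outset, no inverse-depth parametrization or delayed initialisation is required, which is the simplification of the Extended Kalman Filter state equations referred to in the abstract.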

    Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1 Preface
        1.2 Research Motivation and Approach
        1.3 Literature Review
        1.4 Thesis Organization
    Chapter 2  Theoretical Background of SLAM
        2.1 Introduction to SLAM
        2.2 Kalman Filter
        2.3 Extended Kalman Filter [22,23]
        2.4 System Equations
            2.4.1 State Equation
            2.4.2 Measurement Equation
        2.5 Image Measurement of Feature Points
        2.6 Computation of the Jacobian Matrices
    Chapter 3  Image Feature Point Extraction and Matching
        3.1 Feature Point Matching
        3.2 SURF (Speeded-Up Robust Features) Detection
        3.3 Outlier Removal for Image Feature Points
            3.3.1 Depth Filtering
            3.3.2 Random Sample Consensus (RANSAC)
    Chapter 4  System Development Environment and Architecture
        4.1 Hardware
        4.2 Kinect Ranging Principle and Limitations
            4.2.1 Speckle-Pattern Ranging Principle [24][25]
            4.2.2 Limitations of Light Coding
        4.3 Software Development Environment
    Chapter 5  Localization Experiments and Results
        5.1 Experimental Procedure
        5.2 Experimental Analysis
        5.3 Experimental Data
        5.4 Discussion
    Chapter 6  Conclusions and Future Work
        6.1 Conclusions
        6.2 Personal Thoughts and Suggestions
        6.3 Future Work
    References

    [1] H. Durrant-Whyte and T. Bailey, "Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms," IEEE Robotics and Automation Magazine, pp. 99-110, June 2006.
    [2] T. Bailey and H. Durrant-Whyte, "Simultaneous Localisation and Mapping (SLAM): Part II State of the Art," IEEE Robotics and Automation Magazine, pp. 108-117, September 2006.
    [3] T. Bailey, "Mobile Robot Localization and Mapping in Extensive Outdoor Environments," PhD Thesis, The University of Sydney, Australian Centre for Field Robotics, 2002.
    [4] J. M. M. Montiel, J. Civera, and A. J. Davison, "Unified Inverse Depth Parametrization for Monocular SLAM," Robotics: Science and Systems (RSS), Philadelphia, 2006.
    [5] P. Pinies, T. Lupton, S. Sukkarieh, and J. Tardos, "Inertial Aiding of Inverse Depth SLAM using a Monocular Camera," IEEE Int. Conf. on Robotics and Automation (ICRA), 2007.
    [6] G. Farley and M. Chapman, "An Alternate Approach to GPS Denied Navigation based on Monocular SLAM Techniques," ION National Technical Meeting, 2008.
    [7] S. Sukkarieh, E. M. Nebot, and H. Durrant-Whyte, "A High Integrity IMU/GPS Navigation Loop for Autonomous Land Vehicle Applications," IEEE Trans. on Robotics and Automation, vol. 15, pp. 572-578, September 1999.
    [8] 周祐行, "A Study on Integrating a Laser Rangefinder and Vision for Path Planning of a Walking Robot" (in Chinese), Master's thesis, Graduate Institute of Mechanical Engineering, Chang Gung University, 2007.
    [9] 翁茂耘, "Map Construction for a Mobile Robot Using a Scanning Laser Rangefinder" (in Chinese), Master's thesis, Graduate Institute of Mechanical Engineering, Chang Gung University, 2008.
    [10] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, 3rd ed., John Wiley & Sons, 1997.
    [11] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," UNC-Chapel Hill, TR 95-041, March 11, 2002.
    [12] J. Kim and S. Sukkarieh, "Airborne Simultaneous Localisation and Map Building," IEEE Int. Conf. on Robotics and Automation, Taipei, Taiwan, September 2003.
    [13] S. J. Julier and J. K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," SPIE AeroSense Symposium, Orlando, FL, April 21-24, 1997.
    [14] 黃富聖, "Simultaneous Localization and Mapping for a Robot Based on Omnidirectional Images" (in Chinese), Master's thesis, Department of Electrical and Control Engineering, National Chiao Tung University, 2008.
    [15] 王兆戊, "Simultaneous Localization and Mapping for an Omnidirectional Mobile Robot" (in Chinese), Master's thesis, Department of Electrical and Control Engineering, National Chiao Tung University, 2008.
    [16] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, McGraw-Hill, 1995.
    [17] Z. Zhang, "Flexible Camera Calibration by Viewing a Plane from Unknown Orientations," 7th IEEE Int. Conf. on Computer Vision, pp. 666-673, 1999.
    [18] J. Heikkila and O. Silven, "A Four-step Camera Calibration Procedure with Implicit Image Correction," Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp. 1106-1112, 1997.
    [19] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
    [20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: Speeded-Up Robust Features," Computer Vision and Image Understanding, vol. 110, pp. 346-359, 2008.
    [21] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, 2004.
    [22] 張宇宏, "A Content Recognition and Counting System for Stadium Advertising Billboards Based on Color and SURF Features" (in Chinese), Master's thesis, National Taipei University of Technology, Taipei, 2011.
