
Graduate Student: JING-JHOU CIOU (邱敬洲)
Thesis Title: Embedded Omni-Directional Wheeled Mobile Robot Visual SLAM Based on Depth Image Database (以深度影像資料庫為基礎的嵌入式全向輪機器人同步定位與建圖)
Advisor: Wei-wen Kao (高維文)
Oral Defense Committee: Liang-kuang Chen (陳亮光), Min-fan Lee (李敏凡)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2014
Graduation Academic Year: 102 (ROC calendar)
Language: Chinese
Number of Pages: 102
Chinese Keywords: 三輪全向輪無人機器載具, 影像特徵點, 最近點迭代法, 測距相機, 點雲, 同步影像資料庫建立與SLAM定位 (Tri-Omni-Directional Wheeled Mobile Robot; Image Feature Points; Iterative Closest Point; RGB-D Camera; Point Cloud; Synchronous Image-Database Construction and SLAM Positioning)
Foreign Keywords: Image Feature Points, RGB-D Camera, Iterative Closest Point, Synchronously Construct an Image Database and Achieve SLAM, Point Cloud, Tri-Omni-Directional Wheeled Mobile Robot


    In a totally unfamiliar environment, where the position of the vehicle is unknown and no description of the environment exists, the vehicle's position can be estimated and a model of the environment reconstructed only from the way the on-board sensors' measurements of fixed features in the environment change as the vehicle moves. However, image features are sensitive to the illumination of the environment: images taken in the same environment at different times or under different lighting conditions may yield different feature points after feature detection, so the feature-point positions obtained during one positioning run cannot truly describe the environment and cannot be reused afterwards.
    To solve this problem, this thesis uses a self-built unmanned Tri-Omni-Directional Wheeled Mobile Robot carrying an embedded system with an RGB-D camera (Microsoft Kinect) to synchronously construct an image database and achieve SLAM in an unknown environment, based on matching both image feature points and point clouds. The image and depth information provided by the RGB-D camera allows the Iterative Closest Point (ICP) algorithm to fit the image feature points together with the point-cloud depth measurements, computing the relative displacement between frames while stitching the point clouds together to build a model of the environment.
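    The core of this relative-displacement computation is the rigid alignment performed inside each ICP iteration. The following is a minimal Python/NumPy sketch, not the thesis's implementation: it assumes two hypothetical 3D point sets (e.g., feature points back-projected through their measured depth) and uses the closed-form SVD (Kabsch) solution for the rotation and translation, with brute-force nearest-neighbour matching standing in for the KD-tree search of the thesis's Chapter 5.

        # Minimal sketch of point-to-point ICP (illustrative only, not the
        # thesis code). `src` and `dst` are hypothetical Nx3 / Mx3 arrays
        # of 3D points.
        import numpy as np

        def rigid_transform(src, dst):
            # Closed-form (Kabsch/SVD) least-squares R, t with R@p + t
            # fitting src onto dst.
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)  # centroids
            H = (src - c_src).T @ (dst - c_dst)                # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                           # repair a reflection
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t

        def icp(src, dst, iters=30, tol=1e-6):
            # Alternate nearest-neighbour matching and rigid alignment until
            # the RMS error stops improving (the stopping test of Section 5.1.5).
            cur, prev_err = src.copy(), np.inf
            for _ in range(iters):
                # Brute-force nearest neighbours; a KD-tree (Section 5.2)
                # replaces this O(N*M) search in practice.
                d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
                matches = dst[d.argmin(axis=1)]
                R, t = rigid_transform(cur, matches)
                cur = cur @ R.T + t
                err = np.sqrt(((cur - matches) ** 2).sum(axis=1).mean())
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return rigid_transform(src, cur)  # accumulated (R, t): src -> aligned

    The sketch shows only the geometric skeleton; the thesis additionally weights image-feature correspondences alongside the raw depth points when forming the matches.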
    After the image database has been established, positioning can be performed with any of three different sensors on board the unmanned robot (a sketch of the second case follows this list):
    1. RGB-D camera: the original sensor continues to operate in the same environment, measuring relative displacement with the same method.
    2. Ordinary camera: although an ordinary camera cannot measure depth, feature points can still be matched between the live image and the historical images. Since the historical images store depth information, the relative displacement between the two camera positions can still be computed from the depth of the matched feature points.
    3. Laser range finder: ICP is performed against all stored point-cloud data, and the solution is the relative displacement.
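    For the ordinary-camera case, one standard way to realize this computation is a perspective-n-point (PnP) solution over the 2D-3D correspondences: the live image contributes pixel coordinates, while the matched database features contribute 3D positions reconstructed from their stored depth. The sketch below uses OpenCV's RANSAC PnP solver; the array names and the intrinsic matrix K are illustrative placeholders, and the thesis's own computation may differ in detail.

        # Illustrative sketch of the ordinary-camera case (not the thesis
        # code): matched database features supply 3D points from stored
        # depth, the live image supplies 2D pixels, and RANSAC PnP yields
        # the relative pose. `pts3d_db`, `pts2d_live`, `K` are placeholders.
        import numpy as np
        import cv2

        def pose_from_database_match(pts3d_db, pts2d_live, K):
            # pts3d_db: Nx3 feature positions reconstructed from stored depth;
            # pts2d_live: Nx2 matched pixel coordinates in the live image;
            # K: 3x3 camera intrinsic matrix.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                pts3d_db.astype(np.float64), pts2d_live.astype(np.float64),
                K, distCoeffs=None)
            if not ok:
                raise RuntimeError("PnP failed: too few consistent matches")
            R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
            return R, tvec.reshape(3)   # database frame expressed in the live camera frame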
    As verified by experiments, a future user carrying any of the three sensors above can use the computed relative displacement as the measurement in SLAM, simultaneously correcting their own position and updating the recorded capture poses of the historical images in the database. This achieves the goal of self-positioning within an existing environment model while continuously improving the accuracy of the database.
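    The fusion step referred to here is the measurement update of the extended Kalman filter developed in the thesis's Chapter 7. A minimal sketch of that update follows; the measurement model h, its Jacobian H, and the noise covariance R_noise are hypothetical placeholders, not the thesis's actual system equations.

        # Minimal EKF measurement-update sketch (illustrative only): a
        # relative-displacement measurement z corrects the state x and
        # covariance P.
        import numpy as np

        def ekf_update(x, P, z, h, H, R_noise):
            # x: state estimate, P: covariance, z: measurement,
            # h(x): predicted measurement, H: Jacobian of h at x.
            y = z - h(x)                        # innovation
            S = H @ P @ H.T + R_noise           # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            return x + K @ y, (np.eye(len(x)) - K @ H) @ P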

    Table of Contents
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Foreword
      1.2  Research Motivation and Methods
      1.3  Literature Review
      1.4  Thesis Organization
    Chapter 2  RGB-D Range Camera: Light-Speckle Ranging
      2.1  Conventional Ranging Methods
      2.2  Principle of Light-Speckle Ranging
      2.3  Limitations of Light Coding
      2.4  Point Clouds
    Chapter 3  Image Feature Extraction and Matching: SIFT
      3.1  Image Matching
      3.2  Feature-Point Matching
      3.3  The SIFT Algorithm [5]
        3.3.1  Scale-Space Extrema Detection
        3.3.2  Keypoint Localization
        3.3.3  Orientation Assignment
        3.3.4  Keypoint Descriptor Extraction
      3.4  RANSAC-Based Feature Matching
      3.5  Feature Matching in Practice
    Chapter 4  Filtering Scattered Noise Points in 3D Space
      4.1  Partitioning the Spatial Region
      4.2  Histogram of 3D Data
      4.3  Filtering in Practice
    Chapter 5  ICP Algorithm Theory
      5.1  Principle of the ICP Algorithm [20][36]
        5.1.1  Data Formats of the Data Shape and the Model Shape
        5.1.2  Finding, for Each Point in D, the Closest Corresponding Point in M
        5.1.3  Computing the Geometric Transformation Matrix
        5.1.4  Updating the Coordinates
        5.1.5  Computing the RMS Error and Checking the Stopping Condition
      5.2  KD-Tree-Based Nearest-Neighbor Search [37-40]
      5.3  ICP on Feature Points (RGB-D) plus 3D Point Clouds
    Chapter 6  Kinematic Model of the Tri-Omni-Directional Wheeled Robot
      6.1  Tri-Omni-Directional Wheel Kinematic Model [41,42]
    Chapter 7  SLAM System Model
      7.1  The Extended Kalman Filter [44,45]
      7.2  System Equations
    Chapter 8  Development Environment and Research Architecture
      8.1  System Hardware Architecture
      8.2  System Circuit Planning
      8.3  Robot Power-Consumption Calculation
      8.4  Research Architecture and Workflow
    Chapter 9  Synchronous Image-Database Construction and SLAM Experiments
      9.1  Experimental Environment
      9.2  Experimental Procedure for Synchronous Image-Database Construction and SLAM Positioning
        9.2.1  The First Image and Its Depth Information as the First Database Set
        9.2.2  The Fourth Image and Its Depth Information as the Second Database Set
      9.3  Results of the Synchronous Image-Database Construction and SLAM Positioning Experiment
      9.4  Discussion
    Chapter 10  Conclusions and Future Work
      10.1  Conclusions
      10.2  Personal Thoughts and Suggestions
      10.3  Future Work
    References

    [1]N. Ho and R. Jarvis, “Large Scale 3D Environmental Modelling for Stereoscopic Walk-Through Visualisation.”, in 3DTV Conference, Kos Island, Greece, May 2007.
    [2]R. Jarvis, N. Ho, and J. Byrne, “Autonomous robot navigation in cyber and real worlds.”, in CW ’07: Proceedings of the 2007 International Conference on Cyberworlds, pp. 66-73, Washington, DC, USA, 2007.
    [3]T. Suzuki, M. Kitamura, Y. Amano and T. Hashizume, “6-DOF Localization for a Mobile Robot using Outdoor 3D Voxel Maps.”, Proc. of the 2010 IEEE International Conference on Intelligent Robots and Systems(IROS 2010), pp. 5737-5743, 2010.
    [4]J. Shi, and C. Tomasi, “Good Features to Track.”, Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, Seattle, USA, pp. 593-600, 1994.
    [5]D. G. Lowe, “Distinctive image features from scale-invariant keypoints.”, International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, November 2004.
    [6]H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded-up robust features.”, Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
    [7]A. J. Davison and D. W. Murray, “Simultaneous Localisation and Map-Building Using Active Vision.”, IEEE Trans. on Pattern Analysis and Machine Intelligence, pp. 865-880, July 2002.
    [8]A. J. Davison, “Real-time Simultaneous Mapping And Localization with a Single Camera.”, Pro. International Conference on Computer Vision, Nice, October 2003.
    [9]A. J. Davison, I. D. Reid, N. D. Molton and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM.”, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, pp. 1052-1067, June 2007.
    [10]H. Durrant-Whyte and T. Bailey, “Simultaneous Localization and mapping (SLAM): Part I the Essential Algorithms.”, Robotics and Automation Magazine, pp. 99-110, June, 2006.
    [11]T. Bailey and H. Durrant-Whyte, “Simultaneous Localisation and Mapping (SLAM): Part II State of the Art.”, Robotics and Automation Magazine, pp. 108-117, September, 2006.
    [12]P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D mapping: Using depth cameras for dense 3d modeling of indoor environments.”, In Proc. of International Symposium on Experimental Robotics (ISER), 2010.
    [13]N. Fioraio and K. Konolige. “Realtime visual and point cloud slam.”, In Proc. of the RGB-D Workshop on Advanced Reasoning with Depth Cameras at Robotics: Science and Systems Conf. (RSS), 2011.
    [14]R. C. Smith and P. Cheeseman, “On the Representation and Estimation of Spatial Uncertainty.”, The International Journal of Robotics Research (IJRR), Vol. 5, No. 4, pp. 56-68, 1986.
    [15]R. C. Smith, M. Self, and P. Cheeseman, “Estimating Uncertain Spatial Relationships in Robotics.”, Elsevier, USA, pp. 435-461, 1986.
    [16]H. P. Moravec, “Towards automatic visual obstacle avoidance.”, Proceedings of the 5th International Joint Conference on Artificial Intelligence, p. 584, 1977.
    [17]C. Harris and M. Stephens, “A combined corner and edge detector.”, in Alvey Vision Conference, pp. 147-151, 1988.
    [18]Motilal Agrawal, Kurt Konolige and Luca Iocchi, “Real-time detection of independent motion using stereo.”, In Proc. of the IEEE Workshop on Motion and Video Computing, pp. 207-214, Jan. 2005.
    [19]Martin A. Fischler and Robert C. Bolles, “Random sample consensus:a paradigm for model fitting with applications to image analysis and automated cartography.”, Comm. of the ACM, pp. 381-395, June 1981.
    [20]P. J. Besl and N. D. McKay, “A method for registration of 3D shapes.”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No.2, pp. 239-254., 1992
    [21]S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm.”, in Proceedings of the Third Intl. Conf. on 3D Digital Imaging and Modeling, pp. 145-152, 2001.
    [22]D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, “The trimmed iterative closest point algorithm.”, In Proc. International Conf. on Pattern Recognition, Quebec, Canada, 2002.
    [23]S. Pu, “Generating Building Outlines From Terrestrial Laser Scanning.”, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B5, Beijing 2008.
    [24]洪祥恩, “Reconstructing 3D Models of Complex Objects from Terrestrial and Airborne LiDAR Point Clouds.”, Master’s thesis, Department of Civil Engineering, National Central University, Taoyuan, 2011.
    [25]J. Garcia and Z. Zalevsky, “Range Mapping Using Speckle Decorrelation.”, U.S. Patent No. US 7,433,024 B2, Oct. 2008.
    [26]A. Shpunt, “Optical Designs for Zero Order Reduction.”, U.S. Patent Application No. US 2009/0185274, Jul. 2009.
    [27]S. Birchfield and C. Tomasi, “Depth discontinuities by pixel-to-pixel stereo.”, The 6th International Conference on Computer Vision, pp. 1073-1080, 1998.
    [28]Kweon J J, Kang D K, and Kim S D., “A Stereo Matching Algorithm Using Line Segment Features.”, The Conference of TENCON '89, Bombay, 1989.
    [29]Lowe, D.G., “Object recognition from local scale-invariant features.”, In International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157, 1999.
    [30]Koenderink, J.J., “The structure of images.”, Biological Cybernetics, 50:363-396, 1984.
    [31]Lindeberg, T., “Scale-space theory: A basic tool for analysing structures at different scales”, Journal of Applied Statistics, 21(2):224-270, 1994.
    [32]K. Mikolajczyk and C. Schmid, “Indexing based on scale invariant interest points.”, International Conference on Computer Vision, pp. 525-531, 2001.
    [33]Hui Pu and Xiao-Feng Yuan, “Filtering of Scattered 3D Data Points in Reverse Engineering.”, Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006.
    [34]Mesn Adane Dema, “3D Reconstruction for Ship-Hull Inspection.”, MSc thesis, Erasmus Mundus in Vision and Robotics (VIBOT), 2009.
    [35]Radu Bogdan Rusu, “Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram.”, November 2, 2010.
    [36]蔣柏笙, “Alignment and Similarity Assessment of Planar Images.”, Master’s thesis, Department of Mechanical Engineering, National Central University, Taoyuan, 2005.

    [37]J.L. Bentley, “Multidimensional Binary Search Trees Used for Associative Searching.”, Communications of the ACM, 18 (1975), pp. 509-517.
    [38]D. T. Lee and C. K. Wong, “Worst-case analysis for region and partial region searches in multidimensional binary search trees and balanced quad trees.”, Acta Informatica, Vol. 9, No. 1, pp. 23-29, 1977.
    [39]Hans Martin Kjer and Jakob Wilm, “Evaluation of surface registration algorithms for PET motion correction.”, Technical University of Denmark, Kongens Lyngby, 2010.
    [40]A. W. Moore, “Efficient Memory-Based Learning for Robot Control.”, PhD thesis, Computer Laboratory, University of Cambridge, Nov. 1990.
    [41]W. K. Loh, K. H. Low, and Y. P. Leow, “Mechatronics Design and Kinematic Modelling of a Singularityless Omni-Directional Wheeled Mobile Robot.”, Proceedings of the 2003 IEEE International Conference on Robotics & Automation, Taipei, Taiwan, September 14-19, 2003.
    [42]楊智翔, “Development of a Navigation Robot.”, Master’s thesis, Department of Computer Science and Information Engineering, National Central University, Taoyuan, 2013.
    [43]Luo Juan and Oubong Gwun, “A Comparison of SIFT, PCA-SIFT and SURF.”, International Journal of Image Processing (IJIP), Vol. 3, Issue 4, pp. 143-152, Oct. 2009.
    [44]R. G. Brown and P. Y. C. Hwang, “Introduction to Random Signals and Applied Kalman Filtering.”, 3rd ed., John Wiley & Sons, New York, 1997.
    [45]D. Simon, “Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches.”, John Wiley & Sons, 2006.
    [46]W. W. Kao, I. J. Chu, “Visual Positioning with Image Database and Range Camera.”, ION GNSS+ 2013, Nashville, Tennessee, USA, Sep. 2013.
    [47]C. E. Jacobs, A. Finkelstein, and D. H. Salesin, “Fast multiresolution image querying.”, Computer Graphics, 29, pp.277–286, 1995.
    [48]Zhengyou Zhang, “Flexible Camera Calibration by Viewing a Plane from Unknown Orientations.”, IEEE International Conference on Computer Vision, Vol. 1, pp. 666-673, 1999.
    [49]Albert Diosi and Lindsay Kleeman, “Laser Scan Matching in Polar Coordinates with Application to SLAM.”, Monash University, Australia.
    [50]Wei-wen Kao, National Science Council project NSC 102-2221-E-011-111-.
    [51]劉晉嘉, “An Indoor Positioning Method Combining Wireless Signal Strength with Single-Camera SLAM.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2011.
    [52]李家欣, “A Multi-Viewpoint Assisted Positioning System.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2007.
    [53]陳芝蓉, “Image- and Sensor-Aided Personal Position Estimation.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2008.
    [54]許竣揚, “Implementation of Single-Camera Simultaneous Localization and Mapping on a Two-Dimensional Mobile Vehicle.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2011.
    [55]李亞翰, “A Ranging Technique Combining Stereo Vision and Light Speckle.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
    [56]褚一任, “Stereo-Vision Positioning Based on Database Images.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
    [57]胡皓翔, “Smartphone Positioning Using a Street-View Image Database.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
    [58]蕭詠稜, “Implementation of Range-Sensor-Based Indoor Simultaneous Localization and Mapping.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2013.
    [59]林師泓, “Laser-Range-Finder-Aided Wireless Sensor Networks for Dynamic Indoor Positioning.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2010.
    [60]葛定寰, “Nonlinear Estimators for Dynamic Indoor Positioning.”, Master’s thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 2010.
    [61]薛旭佑, “Improving Data-Point Fitting Methods for Environment Maps.”, Master’s thesis, Department of Aerospace and Systems Engineering, Feng Chia University, Taichung, 2011.
    [62]“Range-Finder Type Laser Scanner URG-04LX Specifications.”, Hokuyo, 2005.
    [63]Greg Borenstein, Making Things See: 3D Vision with Kinect, Processing, Arduino, and MakerBot (Chinese edition:《3D視覺專題製作:Kinect、Processing、Arduino及MakerBot》), ISBN 978-9-862-76767-2.
    [64]“Algorithm Notes (演算法筆記).”, research resource, Department of Computer Science and Information Engineering, National Taiwan Normal University.
    [65]Radu Bogdan Rusu and Steve Cousins, “3D is here: Point Cloud Library (PCL).”, Willow Garage, Menlo Park, CA, USA.
