
Student: 阮雄維 (Nguyen Hung Duy)
Thesis title: 3D Model Reconstruction of Corridor Environment by Data Fusion of Laser Range Finder and Omnidirectional Camera
Advisors: 鍾聖倫 (Sheng-Luen Chung), 項天瑞 (Tien-Ruey Hsiang)
Oral examination committee: 黃義盛, 徐繼聖, 鄧惟中
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2011
Graduation academic year: 99 (ROC calendar, i.e., 2010-2011)
Language: English
Pages: 80
Keywords (Chinese): 3D environment model reconstruction, omnidirectional camera, iterative closest point algorithm, scan matching
Keywords (English): reconstruction, 3D model, omnidirectional camera, ICP, registration
  • This thesis proposes a 3D model reconstruction technique for corridor environments that combines a 2D laser range finder and an omnidirectional camera. Sensor data are collected beforehand in a stop-scan-go fashion: each data frame contains one omnidirectional image and one laser range scan of the surroundings. For every pair of adjacent capture positions, local image features establish point correspondences, and the depth of each correspondence is obtained from the laser range finder. Given the correspondence set and its depth information, the relative motion between the two adjacent positions is first estimated with random sample consensus (RANSAC) and then refined by scan matching. In this thesis, scan matching uses an iterative closest point (ICP) algorithm with a color-based correspondence constraint, and the computed relative motion is recorded with respect to a global coordinate system. Repeating these steps over every pair of adjacent positions incrementally builds a 3D color model of the environment.
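The incremental build described above amounts to chaining the pairwise relative motions and mapping each scan's colored points into the global frame. The sketch below is only an illustration of that bookkeeping, not the thesis's implementation; the function name, array layout, and use of 4x4 homogeneous transforms are all assumptions made here.

```python
import numpy as np

def build_model(scans, relative_motions):
    """Accumulate scans into one global colored point cloud.

    scans            : list of (N_i, 6) arrays (x, y, z, r, g, b), each in
                       its own local frame.
    relative_motions : list of 4x4 homogeneous transforms, where the i-th
                       transform maps scan i+1's frame into scan i's frame
                       (one fewer entry than there are scans).
    """
    T_global = np.eye(4)          # pose of the current scan in the global frame
    model = [scans[0]]            # the first scan defines the global frame
    for scan, T_rel in zip(scans[1:], relative_motions):
        T_global = T_global @ T_rel                    # chain pairwise motions
        xyz1 = np.c_[scan[:, :3], np.ones(len(scan))]  # homogeneous coords
        xyz = (xyz1 @ T_global.T)[:, :3]               # points -> global frame
        model.append(np.c_[xyz, scan[:, 3:]])          # keep the colors
    return np.vstack(model)
```

A model built this way drifts with the accumulated registration error, which is exactly what the loop-closure evaluation in the next paragraph measures.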

    The technique was tested in a multi-corridor environment measuring 60 m by 100 m; a total of 9,484,805 points from 449 scans were used to build the 3D color environment model. In addition, the model's error was evaluated on a closed loop path whose start and end are the same physical location: the error is defined as the difference between the estimated end position and the start position. The experimental results show that the technique can build an accurate 3D color model of the environment.


    This thesis describes a framework for reconstructing a 3D color model of a corridor environment using a 2D laser range finder and an omnidirectional camera. The data set is first collected in a stop-scan-go fashion; each scan contains an omnidirectional image and its laser readings. Local visual features are extracted from each image to identify correspondences, and the depth of the features matched between two consecutive images is acquired from the range measurements of the laser scanner. Given a set of correspondences, the relative motion between two consecutive frames is first estimated with RANSAC (random sample consensus), and the result is refined by scan matching. The computed relative motion is then transformed into the 3D coordinate system and used as the initial guess for a pair-wise, color-constrained ICP (iterative closest point) registration. The result of this registration aligns each new scan with its preceding scan so that it can be integrated into the global coordinate frame. This process is applied repeatedly to each pair of successive scans to incrementally build a complete 3D color model.
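The RANSAC motion estimate over depth-augmented correspondences can be sketched as follows. This is a minimal illustration under assumptions of this sketch, not the thesis code: `rigid_transform` is the standard SVD least-squares fit for two matched 3D point sets, the function names are invented here, and the inlier threshold and iteration count are arbitrary. The color-constrained ICP refinement the thesis applies afterwards is not shown.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q,
    via SVD of the cross-covariance; P and Q are matched (N, 3) arrays."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # repair a reflection, if the fit produced one
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

def ransac_motion(P, Q, iters=200, thresh=0.05, seed=0):
    """Estimate the relative motion from noisy correspondences: fit rigid
    transforms to random 3-point samples and keep the one with most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)  # residual per match
        n = int((err < thresh).sum())
        if n > best_inliers:
            best, best_inliers = (R, t), n
    return best, best_inliers
```

In a full pipeline the winning model would typically be refit on all of its inliers before being handed to ICP as the initial guess.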
    The proposed framework is tested in a corridor environment of 60 m x 100 m. A total of 9,484,805 points extracted from 449 scans are used to construct the entire 3D color model. To measure the accuracy of the method, a robot was driven along a closed loop; the error is taken as the difference between the estimated ending position and the known starting position. The results show that the proposed framework obtains an accurate model with both color and 3D information.
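The loop-path evaluation above amounts to composing the estimated relative motions around the loop and measuring how far the product drifts from the identity. A minimal sketch, with invented names and 4x4 homogeneous transforms assumed:

```python
import numpy as np

def loop_closure_error(relative_motions):
    """Chain the per-scan relative motions around a closed loop. Because the
    path starts and ends at the same physical spot, the composed transform
    should be the identity; the norm of its translation is the drift."""
    T = np.eye(4)
    for T_rel in relative_motions:
        T = T @ T_rel
    return float(np.linalg.norm(T[:3, 3]))
```

A rotational drift could be reported the same way from the rotation part of the composed transform; only the translational difference is used here, matching the start-vs-end position error defined above.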

    Table of Contents
    Acknowledgement
    Table of Contents
    Abstract
    List of Figures
    List of Tables
    List of Symbols
    Chapter 1 Introduction
        1.1 Motivation
        1.2 Problem Definition
        1.3 Assumptions
        1.4 Contribution of the presented work
        1.5 Thesis outline
    Chapter 2 Research Objectives
        2.1 Objectives
        2.2 Sensor system
            2.2.1 Laser Range Finder
            2.2.2 Omnidirectional Camera
        2.3 Technical Challenges
        2.4 Approach
    Chapter 3 Literature Survey
        3.1 Omnidirectional camera calibration
        3.2 Laser and Camera Calibration
        3.3 3D Reconstruction by using feature matching and registration method
    Chapter 4 Approach
        4.1 Method overview
        4.2 Sensor calibration
            4.2.1 Omnidirectional Camera calibration model
            4.2.2 Laser-Camera Calibration
        4.3 Data Fusion
        4.4 Model Generation
            4.4.1 Computing 2D relative motion
            4.4.2 Transform 2D relative poses to 3D coordinate
            4.4.3 Registration process
    Chapter 5 Experimental Results
        5.1 Calibration results
            5.1.1 Omnidirectional camera calibration
            5.1.2 Calibration between omnidirectional camera and laser range finder
        5.2 Corridor Reconstruction
        5.3 Performance evaluation
    Chapter 6 Conclusions
        6.1 Summary
        6.2 Future Works
    Appendix A
    Appendix B
        B.1 The traditional ICP
        B.2 The color ICP
    References

