
Graduate student: Zhi-Liang Chen
Thesis title: Depth Camera-Assisted Indoor Localization Enhancement
Advisor: Tien-Ruey Hsiang
Committee members: Jiann-Jone Chen, Yuh-Jye Lee
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Publication year: 2013
Graduation academic year: 101 (ROC calendar, 2012-2013)
Language: Chinese
Pages: 63
Chinese keywords: indoor localization, SIFT image features, depth scanning, triangulation
English keywords: Indoor Localization, SIFT Descriptor, Depth Map, Triangulation
Record statistics: 191 views, 5 downloads
  • This thesis reconstructs a point-cloud environment with a depth camera and uses the resulting environmental information for localization. The method consists of three stages: point-cloud environment construction, virtual photo capture with image database construction, and feature point localization. In the point-cloud construction stage, RANSAC performs an initial registration of the point clouds, and Graph SLAM then optimizes the capture trajectory so that the registered depth scans are placed in their ideal positions. The image database built from virtual photos supplies localization information that an ordinary camera lacks, for example when the camera is far from the scene or when occlusion by obstacles leaves too few features. In the final feature point localization stage, SIFT matches the virtual photos against the photo to be localized, and the matched feature points are triangulated from their positions and viewing angles. In our experiments, the method improves both the coverage of successful localization and the average localization error, showing that the position and angle from which a photo is taken affect the localization information obtained; improving the camera distribution and angles effectively reduces localization error.
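    The first stage above uses RANSAC for the initial point-cloud registration. As an illustration of the RANSAC principle only (not the thesis's actual registration code, which aligns 3D point clouds), the following minimal sketch fits a 2D line in the presence of outliers; the iteration count and inlier tolerance are hypothetical parameters:

    ```python
    import random

    def ransac_line(points, iters=200, tol=0.1, seed=0):
        """RANSAC sketch: fit y = a*x + b to 2D points while tolerating outliers.

        Repeatedly samples a minimal set (two points), fits an exact model,
        and keeps the model with the largest inlier consensus.
        """
        rng = random.Random(seed)
        best_model, best_inliers = None, []
        for _ in range(iters):
            (x1, y1), (x2, y2) = rng.sample(points, 2)
            if x1 == x2:  # vertical sample pair, skip this hypothesis
                continue
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (a, b), inliers
        return best_model, best_inliers
    ```

    The same sample-fit-score loop generalizes to point-cloud registration by sampling minimal point correspondences and scoring candidate rigid transforms by their aligned-point consensus.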


    This thesis develops an approach for image-based triangulation from a point cloud.
    The approach is divided into three parts: environment reconstruction, virtual image database construction, and triangulation. The virtual image database supplies localization information that traditional image-based localization lacks: when the camera is far from the scene or is occluded by objects, traditional SIFT localization loses accuracy. Our approach achieves higher localization accuracy and coverage by automatically choosing better camera angles and positions. In the experiments, we compare practical localization results obtained with traditional SIFT localization against those obtained with virtual image triangulation.
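    The triangulation step can be reduced to intersecting bearing rays once feature matching ties observations to known positions in the reconstructed environment. The following is a minimal 2D sketch (the thesis works with 3D point-cloud geometry; the planar setting and function names here are illustrative), with bearings measured counter-clockwise from the x-axis:

    ```python
    import math

    def triangulate(p1, theta1, p2, theta2):
        """Intersect the ray from p1 at bearing theta1 with the ray from p2
        at bearing theta2. Returns the intersection (x, y), or None if the
        rays are (near-)parallel and no unique fix exists."""
        d1 = (math.cos(theta1), math.sin(theta1))
        d2 = (math.cos(theta2), math.sin(theta2))
        denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product of directions
        if abs(denom) < 1e-9:
            return None
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        t = (dx * d2[1] - dy * d2[0]) / denom  # distance along the first ray
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])
    ```

    For example, rays from (0, 0) at 45 degrees and from (2, 0) at 135 degrees intersect at (1, 1). Near-parallel bearings make the fix ill-conditioned, which is one reason camera angle selection affects localization accuracy.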

    Table of contents:
    Advisor's recommendation letter
    Committee approval certificate
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of contents
    List of figures
    List of tables
    1 Introduction
      1.1 Background
      1.2 Motivation
      1.3 Objectives
      1.4 Thesis organization
    2 Literature review
      2.1 Analysis of planar image localization approaches
      2.2 RGB-D mapping for 3D environment construction
      2.3 The visual SLAM problem and its application to 3D environment construction
      2.4 Depth interpolation on 3D projection planes
    3 Virtual image localization method
      3.1 Virtual image localization workflow
      3.2 Depth data acquisition and virtual environment construction
        3.2.1 RGB-D data acquisition and point cloud construction
        3.2.2 Coordinate system angle adjustment and boundary definition
      3.3 Virtual camera placement and image database construction
        3.3.1 Virtual camera position distribution mechanism
        3.3.2 Virtual photo imaging principle
        3.3.3 Camera angle filtering mechanism
        3.3.4 Storing virtual camera images to build the database
      3.4 Feature point triangulation with virtual images
        3.4.1 Virtual photo selection mechanism
        3.4.2 Feature point localization principle
    4 Comparison and analysis of localization experiment results
      4.1 Localization results in a single-scene setting
        4.1.1 Feature point count trends
        4.1.2 Localization error trends across camera spacings
        4.1.3 Localization accuracy coverage and average error distribution
      4.2 Localization experiments in a general indoor environment
        4.2.1 Comparison of localization coverage and average localization error
        4.2.2 Image database storage and localization time performance analysis
    5 Conclusions and future work
    References

