
Author: Chien-Ting Chen (陳健庭)
Thesis Title: Appearance-based Place Recognition on Unmanned Ground Vehicle (外觀基礎地點辨識之地面無人載具)
Advisor: Min-Fan Ricky Lee (李敏凡)
Committee Members: 蔡明忠, 李敏凡 (Min-Fan Ricky Lee), 湯梓辰, 邱富信
Degree: Master
Department: College of Engineering, Graduate Institute of Automation and Control
Publication Year: 2019
Academic Year: 107 (2018-2019)
Language: English
Pages: 41
Keywords (Chinese): 外觀基礎地點識別, 機器人
Keywords (English): Appearance-based, Place Recognition, Robot
Abstract:
    In research on intelligent robots, visual sensors can be applied in many fields, and in mobile-robot navigation visual sensing has become a technology that cannot be ignored in recent years. Environmental factors, however, can cause visual sensing to mislead the robot's navigational judgment: two different places may look alike, or the same place may change in appearance, and both cases produce loop-closure situations in a visual navigation mission.
    Loop Closure Detection (LCD) is the part of the Simultaneous Localization and Mapping (SLAM) navigation mechanism that matches the current image against past images through machine vision and returns a high-similarity result to indicate that the robot has visited a place before, and Bag of Words (BoW) is one method for solving LCD. This thesis builds an Unmanned Ground Vehicle (UGV) and observes the effectiveness of BoW in an LCD mission. In a field experiment run in a dimly lit environment, the UGV with the BoW method compared previous and current frames through visual words and achieved an 82% recall rate, and the word-matching likelihood indicated the feasibility of the system. The system presented here provides a scalable, easy-to-maintain, low-cost UGV platform for research, and its ease of modification can reduce research cost across different task objectives.
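The BoW pipeline described above (extract local image features, quantize them into visual words, then compare word histograms between past and current frames) can be sketched as follows. This is an illustrative assumption, not the thesis implementation: the minimal k-means vocabulary, histogram-intersection score, and randomly generated descriptors stand in for the real feature-extraction front end (e.g., SURF descriptors from camera frames).

```python
import numpy as np

# Self-contained BoW loop-closure sketch; descriptor extraction is mocked
# with random vectors so the example runs without image data.
rng = np.random.default_rng(0)

def kmeans(X, k, iters=25):
    """Minimal k-means: cluster descriptors into k visual words."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bow_histogram(centers, descriptors):
    """Quantize descriptors to the nearest word; return a normalized histogram."""
    d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection in [0, 1]; a high score suggests a revisited place."""
    return float(np.minimum(h1, h2).sum())

# Two views of the same place share a descriptor pool; a different place
# draws its descriptors from a shifted distribution.
pool = rng.normal(0.0, 1.0, size=(200, 32))
past = pool[rng.choice(200, size=100, replace=False)]
current = pool[rng.choice(200, size=100, replace=False)]
elsewhere = rng.normal(5.0, 1.0, size=(100, 32))

vocab = kmeans(np.vstack([past, elsewhere]), k=8)
s_same = similarity(bow_histogram(vocab, past), bow_histogram(vocab, current))
s_diff = similarity(bow_histogram(vocab, past), bow_histogram(vocab, elsewhere))
# A revisit should score much higher than a genuinely new place (s_same > s_diff).
```

In a real system, thresholding this similarity score against past keyframes is what flags a loop closure; recall (the 82% figure above) then measures the fraction of true revisits the threshold recovers.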

    Table of Contents:
    ABSTRACT ... I
    中文摘要 ... II
    List of Figures ... IV
    List of Tables ... V
    Chapter 1 Introduction ... 1
      1.1 Background ... 1
      1.2 Literature Review ... 2
      1.3 Purpose ... 3
      1.4 Contribution ... 4
      1.5 Structure Configuration of Thesis ... 4
    Chapter 2 Method ... 6
      2.1 Concept of Experiment ... 6
      2.2 Process of Experiment ... 13
        2.2.1 Kinematics of System ... 18
        2.2.2 Loop Closure Detection: Bag of Words Method ... 21
    Chapter 3 Results ... 27
    Chapter 4 Conclusion ... 33
      4.1 Conclusion ... 33
      4.2 Future Work ... 34
    Chapter 5 References ... 38


    Full-text release date: 2024/08/27 (campus network, off-campus network, and National Central Library: Taiwan NDLTD system)