
Graduate Student: 劉世寬 (Shih-Kuan Liu)
Thesis Title: 利用離散影像中途點的室內機器人巡航系統 (An Indoor Robot Navigation System Using Sparse Image Waypoints)
Advisor: 項天瑞 (Tien-ruey Hsiang)
Committee Members: 楊傳凱 (Chuan-kai Yang), 鄧惟中 (Wei-chung Teng), 陳建中 (Jiann-jone Chen)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering
Year of Publication: 2011
Graduation Academic Year: 99 (2010-2011)
Language: English
Pages: 44
Keywords: image pyramid, log-polar image matching, Scale-Invariant Feature Transform (SIFT), epipolar geometry, optical flow, histogram equalization, wall following algorithm
Abstract: This thesis presents a prototype indoor service robot system that navigates using sparse image waypoints. Upon deployment, the robot visually learns the work environment by traversing the topology of the environment while capturing images with a USB camera, and it automatically computes the waypoint sequence between service locations from these images. To complete the navigation model, five types of waypoints are defined. In the basic approach, the robot adopts a fast motion mode when traveling between intermediate waypoints, then switches to an accurate motion mode as it approaches the service target. An alternative navigation approach is also proposed that pilots the robot back to the home position with a modified log-polar matching method in environments with discernible features; this approach reduces the time spent in the pre-training phase and shrinks the reference image database. By applying the fast mode to intermediate waypoints, the navigation time is shortened by about 19 percent compared with the previous work of Fu et al. [5]. With reference images generated by the proposed method and the fast navigation mode, a 160-meter simulated navigation route can be completed in about 11 minutes with tolerable errors.
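The central operation of the backward navigation approach, matching a live camera frame against a stored waypoint image in log-polar space, can be sketched with standard tools. The following is a minimal illustration in Python with OpenCV, not the thesis's exact formulation: the two-region layout, patch sizes, and the left/right scoring heuristic are assumptions drawn from the abstract and table of contents, with histogram equalization applied first as the keywords suggest.

    # Minimal sketch (illustrative, not the thesis's exact method): compare
    # a live frame against a stored waypoint image via log-polar matching.
    import cv2

    def preprocess(frame_bgr):
        """Grayscale + histogram equalization, per the thesis keywords."""
        return cv2.equalizeHist(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))

    def log_polar(gray, center, max_radius=120.0, dsize=(96, 96)):
        """Log-polar transform of a grayscale image around `center`."""
        return cv2.warpPolar(gray, dsize, center, max_radius,
                             cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)

    def region_similarity(live, ref, center):
        """Normalized cross-correlation between same-size log-polar patches."""
        a = log_polar(live, center)
        b = log_polar(ref, center)
        # Equal-size inputs yield a 1x1 correlation map.
        return float(cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0])

    def match_waypoint(live_bgr, ref_bgr):
        """Score the left and right log-polar regions against a waypoint
        image. The region centers below are assumptions: one patch on each
        side of the view, avoiding the noisier middle part of the image."""
        live, ref = preprocess(live_bgr), preprocess(ref_bgr)
        h, w = live.shape
        left_c, right_c = (w / 4.0, h / 2.0), (3.0 * w / 4.0, h / 2.0)
        return (region_similarity(live, ref, left_c),
                region_similarity(live, ref, right_c))

A controller built on these two scores might steer toward the side with the stronger match and declare a waypoint reached once both scores exceed a threshold; the thesis's actual motion model (Section 4.4) derives the steering angle from the positions of the two log-polar regions.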

Table of Contents:

Thesis Advisor's Recommendation Letter
Examination Committee Approval Certificate
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
Chapter 2 Preliminary
  2.1 Log-Polar Transformation and Matching
  2.2 Epipolar Geometry and SIFT
  2.3 Histogram Equalization
  2.4 Wall Following Algorithm
  2.5 Problem Definition
  2.6 Experimental Platform
Chapter 3 Fast Motion Mode with Forward Image Cue
  3.1 Reference Image Construction Procedure for Fast Mode
  3.2 The Strategy to Pass Through the Auxiliary Marker
Chapter 4 Alternative Backward Navigation Approach
  4.1 Observations
    4.1.1 Noise in the Middle Part of the Image
    4.1.2 The Side of Matching
    4.1.3 The Order of Scenery
    4.1.4 The Geometric Properties of Scenery
  4.2 Modification of Log-Polar Image Matching for Alternative Backward Navigation
  4.3 Reference Image Construction for Alternative Backward Navigation
  4.4 Motion Model for Alternative Backward Navigation
    4.4.1 The Steering Angle for the Relative Position Between the Robot's Heading and the Waypoint
    4.4.2 Absolute Position of the Two Log-Polar Regions
    4.4.3 Center of Mass of the Two Log-Polar Regions
    4.4.4 Termination Condition for Each Waypoint
Chapter 5 Numerical Results
  5.1 Experiments of Speed Improvement for Fast Mode
  5.2 Experiments of Service Mission Simulation for Fast Mode
  5.3 Experiments of Gap Distance for Backward Navigation
Chapter 6 Conclusions and Future Work
Bibliography

Bibliography:

[1] I. Buciu, A. Gacsádi, and C. Grava, "Vision based approaches for driver assistance systems," pp. 92–97, 2010.
[2] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: A survey," Journal of Intelligent and Robotic Systems, vol. 53, no. 3, pp. 263–296, 2008.
[3] Z. Chen and S. Birchfield, "Qualitative vision-based path following," Mechatronics, vol. 5, no. 1, pp. 39–48, 2000.
[4] G. López-Nicolás, C. Sagués, J. Guerrero, D. Kragic, and P. Jensfelt, "Switching visual control based on epipoles for mobile robots," Robotics and Autonomous Systems, vol. 56, no. 7, pp. 592–603, 2008.
[5] Y. Fu, T. Hsiang, and S. Chung, "Sequence-based autonomous robot navigation by log-polar image transform and epipolar geometry," in Proc. 2010 IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 493–499, 2010.
[6] A. Zhang and L. Kleeman, "Robust appearance based visual route following for navigation in large-scale outdoor environments," The International Journal of Robotics Research, vol. 28, no. 3, p. 331, 2009.
[7] J. Ido, Y. Shimizu, Y. Matsumoto, and T. Ogasawara, "Indoor navigation for a humanoid robot using a view sequence," The International Journal of Robotics Research, vol. 28, no. 2, p. 315, 2009.
[8] J. Courbon, Y. Mezouar, and P. Martinet, "Indoor navigation of a nonholonomic mobile robot using a visual memory," Autonomous Robots, vol. 25, no. 3, pp. 253–266, 2008.
[9] G. López-Nicolás, N. Gans, S. Bhattacharya, C. Sagués, J. Guerrero, and S. Hutchinson, "Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2009.
[10] A. Argyros, K. Bekris, S. Orphanoudakis, and L. Kavraki, "Robot homing by exploiting panoramic vision," Autonomous Robots, vol. 19, no. 1, pp. 7–25, 2005.
[11] G. López-Nicolás, J. Guerrero, and C. Sagués, "Multiple homographies with omnidirectional vision for robot homing," Robotics and Autonomous Systems, 2010.
[12] N. Saini and A. Sinha, "Optics based biometric encryption using log polar transform," Optics Communications, vol. 283, no. 1, pp. 34–43, 2010.
[13] A. Wohrer and P. Kornprobst, "Virtual retina: A biological retina model and simulator, with contrast gain control," Journal of Computational Neuroscience, vol. 26, no. 2, pp. 219–249, 2009.
[14] F. Pardo, J. Boluda, I. Coma, and F. Micó, "High speed log-polar time to crash calculation for mobile vehicles," Image Processing & Communications, vol. 8, no. 2, pp. 23–32, 2002.
[15] S. Zokai and G. Wolberg, "Image registration using log-polar mappings for recovery of large-scale similarity and projective transformations," IEEE Transactions on Image Processing, vol. 14, no. 10, 2005.
[16] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[17] C. Wu, "SiftGPU: A GPU implementation of scale invariant feature transform (SIFT)." http://cs.unc.edu/~ccwu/siftgpu, 2007.
[18] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
[19] T. Acharya and A. Ray, Image Processing: Principles and Applications. Wiley-Interscience, 2005.
[20] J. Blankenship and S. Mishal, Robot Programmer's Bonanza. McGraw-Hill, 2008.
[21] S. Yu and D. Kim, "Image-based homing navigation with landmark arrangement matching," Information Sciences, 2011.
