
Graduate student: Yu Fu (傅宇)
Thesis title: Fast Homing Techniques for Autonomous Robots using Sparse Image Waypoints and Design of Vision-based Indoor Localization as Cloud Computing Services
(Chinese title: 移動式機器人使用稀疏航點影像為依據之快速歸位技術以及基於雲端計算之室內視覺定位系統)
Advisors: Tien-Ruey Hsiang (項天瑞), Sheng-Luen Chung (鍾聖倫)
Oral defense committee: Chung-Hsien Kuo (郭重顯), Shun-Feng Su (蘇順豐), Huei-Wen Ferng (馮輝文), Chyi-Yeu Lin (林其禹), Chieh-Chih Wang (王傑智), Tsai-Yen Li (李蔡彥), Mu-Der Jeng (鄭慕德)
Degree: Doctor
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2012
Academic year of graduation: 100
Language: English
Pages: 109
Chinese keywords: vision-based localization, image sequence-based navigation, cloud computing
English keywords: MapReduce computation framework, vision-based localization, navigation
Chinese abstract (translated):
This thesis proposes solutions that speed up both robot homing navigation and vision-based localization. For robot homing, it first presents a multi-waypoint visual homing technique for piecewise linear routes and then improves its navigation speed. For a robot with fixed specifications, the proposed fast homing technique aims to reduce the time needed to traverse a route while keeping the same navigation accuracy. Concretely, the spacing between waypoints along the route is enlarged by adopting the log-polar transform, which tolerates larger image scale changes, and navigation between consecutive waypoints is split into two phases. In the first phase, while the robot is still far from the target waypoint, it registers the current view against the waypoint image with the log-polar transform and computes from this correspondence a fast but less accurate motion vector. When the robot gets close to the target waypoint, it switches to the second phase, in which SIFT feature correspondences with the waypoint image are used to compute a slower but more accurate motion vector that maintains navigation accuracy. Compared with prior work in this area, the proposed method represents a route with sparser waypoints and completes navigation faster at the same navigation accuracy.
For vision-based localization, this thesis proposes an indoor visual localization system built on cloud computing. The system improves the localization success rate of methods that model the environment with a limited set of database images. Because a limited image database rarely covers every possible position and viewing direction in the environment, a query image submitted to the localization service may differ so much in viewing angle that no similar database image can be found for the initial localization. To address this, the thesis detects ASIFT features in the query image and matches them against the SIFT features of the database images. Detecting ASIFT features, however, substantially increases the image matching workload of the localization system. To handle this heavy computation, the thesis proposes a solution built on the cloud MapReduce computation framework: through two MapReduce stages, the first detecting ASIFT features in the query image and the second performing image matching in parallel, the system finds the closest database image and then localizes the query by triangulation. Experiments verify the feasibility of this solution and, through comparisons with bag-of-words and SIFT-based matching, demonstrate the improvement in localization success rate.
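To make the ASIFT-versus-SIFT matching idea concrete, here is a minimal sketch, assuming OpenCV's SIFT implementation and a much simplified tilt/rotation sampling; the function name, the sampled tilts and angles, and the 0.7 ratio threshold are illustrative assumptions rather than the parameters used in the thesis.

```python
import cv2

def asift_like_matches(query_gray, db_gray,
                       tilts=(1.0, 2.0), angles=(0, 45, 90, 135)):
    """Match affine-simulated views of the query against plain SIFT of one database image."""
    sift = cv2.SIFT_create()
    _, db_desc = sift.detectAndCompute(db_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    h, w = query_gray.shape
    good = []
    for t in tilts:
        for a in angles:
            # Simulate one affine view: rotate by `a` degrees, then compress the
            # x axis by the tilt factor `t` (a rough stand-in for ASIFT's
            # camera-tilt simulation).
            M = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
            M[0, :] /= t
            view = cv2.warpAffine(query_gray, M, (w, h))
            _, q_desc = sift.detectAndCompute(view, None)
            if q_desc is None or db_desc is None:
                continue
            # Lowe's ratio test on 2-nearest-neighbour matches.
            for pair in matcher.knnMatch(q_desc, db_desc, k=2):
                if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                    good.append(pair[0])
    return good
```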


English abstract:
This thesis first proposes a local visual homing approach for multi-waypoint robot homing along piecewise linear routes, and then reduces navigation time with a fast robot homing approach. For a robot with fixed specifications, the proposed fast homing approach aims to speed up navigation without compromising navigation accuracy. Compared with prior work on local visual homing based on SIFT feature matching, the average distance between consecutive waypoints can be lengthened and the robot can depart from each waypoint at a higher speed. To improve the tolerance to scale differences of a purely SIFT-based approach, the log-polar transform is used to find a circular correspondence between images. At the beginning of visual homing, while images are registered by the log-polar transform, a faster but less accurate motion is used. Once the robot is relatively close to the targeted waypoint, the more accurate local visual homing approach takes over to maintain navigation accuracy. Experiments demonstrate not only that faster navigation with competitive accuracy is achieved, but also that fewer waypoints are required to guide the robot back to its home position.
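As an illustration of how the log-polar transform tolerates scale change, the following is a minimal sketch, assuming OpenCV's warpPolar and phaseCorrelate functions; it only recovers a coarse scale and rotation estimate between the current view and a waypoint image, not the full motion vector computed in the thesis.

```python
import cv2
import numpy as np

def coarse_scale_rotation(current_gray, waypoint_gray):
    """Estimate a coarse scale and in-plane rotation between two views of the same scene."""
    h, w = waypoint_gray.shape
    center = (w / 2.0, h / 2.0)
    max_radius = w / 2.0
    flags = cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR
    # In log-polar space a scale change becomes a horizontal shift and an
    # in-plane rotation becomes a vertical shift.
    lp_cur = cv2.warpPolar(current_gray.astype(np.float32), (w, h), center, max_radius, flags)
    lp_way = cv2.warpPolar(waypoint_gray.astype(np.float32), (w, h), center, max_radius, flags)
    (dx, dy), _ = cv2.phaseCorrelate(lp_way, lp_cur)
    scale = np.exp(dx * np.log(max_radius) / w)   # undo the log-radius sampling
    rotation_deg = 360.0 * dy / h
    return scale, rotation_deg
```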
In addition to the fast robot homing approach, which is based on a topological map, this thesis proposes a vision-based metric localization system with cloud computing for indoor environments. Unlike other vision-based localization studies, which retrieve the database image most similar to a query image from images captured along a trajectory using a visual vocabulary or general SIFT feature matching, the proposed system can locate the query image even when its viewing angle differs greatly from that of the closest database image, by matching ASIFT features in the query image against SIFT features in the database images. The two heaviest computations, ASIFT feature detection in the query image and image registration between the query image and the database images, are carried out in the Hadoop MapReduce framework to speed up the response to a localization request. Experiments demonstrate the performance and feasibility of the proposed system and show a higher localization success rate than visual vocabulary and general SIFT feature matching when the environment is modeled by a limited number of database images.
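The parallel image matching stage could, for example, be expressed as a Hadoop Streaming mapper; the sketch below is one such illustration, assuming descriptor files precomputed by the earlier stage, a tab-separated output format, and a simple ratio-test score, none of which are the thesis's actual implementation details.

```python
#!/usr/bin/env python
# Mapper for a Hadoop Streaming job: each input line names the descriptor file of
# one database image; the mapper emits "<descriptor file>\t<match score>". A
# reducer (not shown) would keep the best-scoring database image, which is then
# used for triangulation. File names and the 0.7 threshold are illustrative.
import sys
import numpy as np
import cv2

# Descriptors of the query image, precomputed by the first MapReduce stage.
query_desc = np.load("query_descriptors.npy").astype(np.float32)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_score(db_desc):
    """Count ratio-test matches between the query and one database image."""
    good = 0
    for pair in matcher.knnMatch(query_desc, db_desc, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good += 1
    return good

for line in sys.stdin:
    desc_path = line.strip()
    if not desc_path:
        continue
    db_desc = np.load(desc_path).astype(np.float32)
    print("%s\t%d" % (desc_path, match_score(db_desc)))
```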

Table of contents:
Recommendation form from the advisors
Certification by the oral defense committee
Chinese Abstract
Abstract
Acknowledgment
Table of contents
List of Tables
List of Figures
Nomenclature
1 Introduction
  1.1 Background
  1.2 Problem Definition
  1.3 Contribution
  1.4 Paper Organization
2 Approach of Multi-Waypoint Visual Homing
  2.1 Related Work
    2.1.1 Motion in Correspondence-based Local Visual Homing
    2.1.2 Detection of the Arrival at a Waypoint
  2.2 Teaching Phase: Construction of Waypoint Images
  2.3 Scale Invariant Feature Transform
  2.4 Multi-Waypoint Visual Homing
  2.5 Summary of Multi-waypoint Visual Homing
3 Approach of Fast Robot Homing
  3.1 Related Work
    3.1.1 Correspondence-based Local Visual Homing
    3.1.2 Reduction of Navigation Time in Image Sequence-based Navigation
  3.2 Waypoint Selection
  3.3 Local Visual Homing
    3.3.1 Log-Polar Transform and Image Matching by Using Log-Polar Transform
    3.3.2 Fast Local Visual Homing between Sparse Waypoints
4 Localization System with Cloud Computing
  4.1 Related Work on Metric Localization
  4.2 SIFT Feature-based 3D Map
  4.3 Localization System in Hadoop MapReduce Framework
    4.3.1 Hierarchical Localization Algorithm
    4.3.2 Localization System in Hadoop MapReduce Framework
5 Experiments
  5.1 Performance Evaluations of Multi-Waypoint Robot Homing
    5.1.1 Variation of Vertical Displacement of Correspondences
    5.1.2 Robot and Platform for Experiments
    5.1.3 Experiments of Multi-waypoint Visual Homing
    5.1.4 Navigation Accuracy between Consecutive Waypoints
    5.1.5 Navigation Efficiency in Long Routes
  5.2 Performance Evaluations of Fast Robot Homing
    5.2.1 Robot Platform and Parameter Settings for Experiments
    5.2.2 Tolerance of Scale Differences
    5.2.3 Experimental Results in Multiple-waypoint Route
    5.2.4 Comparison to Local Visual Homing based on Epipolar Geometry
  5.3 Performance Evaluations of Localization System with Cloud Computing
    5.3.1 Environments for Experiments and Cloud
    5.3.2 Performance Evaluation
6 Conclusion and Future Work
References
Biography

