
Author: BUI QUANG HUY
Thesis Title: Indoor Navigation with Smartphone-based Visual SLAM and Bluetooth-connected Wheel-Robot (Chinese title: 以智慧手機為基礎的室內輪型機器人視覺定位同步與建圖)
Advisor: Wei-Wen Kao (高維文)
Committee Members: Gee-Sern Hsu (徐繼聖), Liang-Kuang Chen (陳亮光)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2013
Academic Year of Graduation: 101 (ROC calendar, i.e., 2012-2013)
Language: English
Number of Pages: 74
Keywords: Indoor positioning, smartphone positioning, Visual SLAM
Abstract: Indoor robot navigation is the problem of integrating sensor measurements to determine the robot and landmark locations using the SLAM (Simultaneous Localization and Mapping) technique. Indoor navigation of a robot in an unknown environment can be realized with Visual SLAM, in which image sensor measurements taken at different locations with respect to various landmarks are integrated with other robot motion sensors to determine the robot and landmark positions simultaneously. This is typically implemented with expensive image sensors and dedicated hardware, and requires extensive computing power. In recent years, smartphones have become the portal devices through which consumers access the Internet and cloud-based IT services in mobile computing environments. In current smartphones, only GPS, the cellular network, and Wi-Fi are used to provide position information, with moderate accuracy, in outdoor and indoor locations. With digital cameras and MEMS-based motion sensors becoming standard features, and computing power improving rapidly, smartphones have the potential to play a key role in realizing indoor robot navigation.
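
    The simultaneous estimation of robot and landmark positions described above is commonly posed as recursive filtering. The thesis outline below lists a state equation, a measurement equation, and Jacobian matrix calculations, which points to an extended Kalman filter (EKF) formulation; the following is a sketch of the standard EKF-SLAM recursion under that assumption, not necessarily the thesis's exact equations. With a joint state \(\mathbf{x}\) stacking the robot pose and the landmark positions, odometry input \(\mathbf{u}_k\), and camera feature measurements \(\mathbf{z}_k\):

    \[
    \mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{u}_k) + \mathbf{w}_k, \qquad
    \mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{v}_k,
    \]

    the filter alternates prediction and update:

    \[
    \hat{\mathbf{x}}_{k|k-1} = f(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_k), \qquad
    P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + Q_k,
    \]
    \[
    K_k = P_{k|k-1} H_k^{\top} \big( H_k P_{k|k-1} H_k^{\top} + R_k \big)^{-1},
    \]
    \[
    \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + K_k \big( \mathbf{z}_k - h(\hat{\mathbf{x}}_{k|k-1}) \big), \qquad
    P_{k|k} = (I - K_k H_k) P_{k|k-1},
    \]

    where \(F_k = \partial f / \partial \mathbf{x}\) and \(H_k = \partial h / \partial \mathbf{x}\) are the Jacobians evaluated at the current estimate, and \(Q_k\), \(R_k\) are the process and measurement noise covariances. A state convergence analysis of the kind mentioned at the end of the abstract typically examines how the estimation covariance \(P\) shrinks as measurements accumulate.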
    A smartphone-based positioning system suitable for indoor robot applications is proposed in this research by integrating the sensors available on a smartphone with wheel odometer feedback from a Bluetooth-connected robot vehicle. In this system, the smartphone is attached to a wheel-robot, and the robot performs only two-dimensional planar motions in indoor home or office environments. The planar motion assumption greatly reduces the complexity of the navigation system equations. The Wi-Fi signal and the inertial sensors of the smartphone were considered as additional measurements for the positioning calculations, but were determined to be inadequate for the application in this thesis. The odometer sensors of the wheel-robot are used in the robot motion equations. For the phone camera measurements, the real-time performance of the SIFT, SURF, and ORB feature detection and tracking algorithms was compared, and one was chosen for implementation on the smartphone. A state convergence analysis was conducted based on the results of a Visual SLAM experiment.
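
    The real-time comparison of SIFT, SURF, and ORB can be illustrated with the OpenCV library listed in the thesis software stack. Below is a minimal, hypothetical C++ sketch that times ORB detection and descriptor matching between two frames; the file names and the 500-keypoint budget are illustrative assumptions, and the OpenCV 3+ API is shown (a 2013 implementation would have used the older OpenCV 2.4 interface). SIFT and SURF could be timed the same way, though in modern OpenCV they reside in the contrib/nonfree modules.

    // Minimal sketch (hypothetical): timing ORB feature detection and matching
    // between two consecutive frames with OpenCV.
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // Two consecutive grayscale frames from the phone camera (assumed files).
        cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
        cv::Mat curr = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
        if (prev.empty() || curr.empty()) return 1;

        cv::Ptr<cv::ORB> orb = cv::ORB::create(500);  // detect up to 500 keypoints
        std::vector<cv::KeyPoint> kp0, kp1;
        cv::Mat des0, des1;

        int64 t0 = cv::getTickCount();
        orb->detectAndCompute(prev, cv::noArray(), kp0, des0);
        orb->detectAndCompute(curr, cv::noArray(), kp1, des1);

        // ORB descriptors are binary, so match with Hamming distance;
        // crossCheck keeps only mutually best matches.
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(des0, des1, matches);

        double ms = 1000.0 * (cv::getTickCount() - t0) / cv::getTickFrequency();
        std::cout << "ORB: " << kp0.size() << " keypoints, " << matches.size()
                  << " matches in " << ms << " ms" << std::endl;
        return 0;
    }

    A selection like the thesis's would repeat such a measurement per frame for each detector and pick the one that meets the smartphone's real-time budget.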



    Table of Contents:
    Abstract
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1  Indoor Navigation with Smartphones
        1.2  Related works
            1.2.1  Navigation problems with smartphones
            1.2.2  Descriptors for feature detection and matching
        1.3  Thesis organization
    Chapter 2  Smartphone's Sensors
        2.1  WiFi
            2.1.1  Literature review
            2.1.2  WiFi k-NN positioning
            2.1.3  Experiment results and conclusion
        2.2  Accelerometer
        2.3  Gyroscope
        2.4  Smartphone's Camera
            2.4.1  Camera specifications
            2.4.2  Camera calibration
    Chapter 3  Visual SLAM with the smartphone
        3.1  SLAM problem
            3.1.1  What is SLAM?
            3.1.2  Visual SLAM
        3.2  Proposed Visual SLAM
        3.3  Robot motion equation (a generic form is sketched after this outline)
        3.4  Feature tracking algorithm comparison
        3.5  Visual SLAM equations
            3.5.1  State equation
            3.5.2  Measurement equation
            3.5.3  Jacobian matrix calculation
    Chapter 4  System Description
        4.1  System diagrams
            4.1.1  Connection diagram
            4.1.2  Operation diagram
        4.2  Hardware
            4.2.1  Stingray robot with encoders
            4.2.2  LG Optimus Black P970 Android smartphone
            4.2.3  Laser ranger for measuring the real motion of the robot
        4.3  Software
            4.3.1  Propeller firmware for the robot controller board
            4.3.2  Android development on smartphones
            4.3.3  OpenCV library
            4.3.4  Eigen C++ library
    Chapter 5  Visual SLAM Experiment
        5.1  Experiment descriptions
        5.2  Results
            5.2.1  Visual SLAM algorithm verification
            5.2.2  Robot state convergence
            5.2.3  State convergence for a single feature
            5.2.4  State convergence for multiple features
            5.2.5  State convergence for both robot and features
    Chapter 6  Discussion
        6.1  State convergence in the Visual SLAM problem
        6.2  Limitations of the experiment
            6.2.1  Lack of autonomous motion control for the robot
            6.2.2  Dynamic feature point list initialization
        6.3  Limitations of the proposed system
            6.3.1  Image resolution and computational speed on smartphones
            6.3.2  Dependency on lighting conditions
            6.3.3  Camera autofocus speed
    Chapter 7  Conclusion
        7.1  Thesis contributions
            7.1.1  Analysis of smartphone sensors for the navigation problem
            7.1.2  A basic system framework for testing vision-related navigation algorithms
            7.1.3  Selection of a feature tracking algorithm fast enough for the smartphone
            7.1.4  State convergence analysis in the proposed Visual SLAM
        7.2  Future works
            7.2.1  Create a good filter for feature detection and tracking results
            7.2.2  Apply the whole algorithm on the smartphone
    References
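
    Section 3.3 of the outline covers the robot motion equation driven by the wheel encoders. The thesis text is not reproduced in this record, but for a differential-drive robot under the planar-motion assumption stated in the abstract, the standard dead-reckoning update is the likely form (with \(\Delta s_l\), \(\Delta s_r\) the left and right wheel travel between encoder samples, from the odometry, and \(b\) the wheelbase):

    \[
    \Delta s = \frac{\Delta s_r + \Delta s_l}{2}, \qquad
    \Delta \theta = \frac{\Delta s_r - \Delta s_l}{b},
    \]
    \[
    \begin{pmatrix} x_k \\ y_k \\ \theta_k \end{pmatrix}
    =
    \begin{pmatrix}
    x_{k-1} + \Delta s \cos\!\left(\theta_{k-1} + \Delta\theta/2\right) \\
    y_{k-1} + \Delta s \sin\!\left(\theta_{k-1} + \Delta\theta/2\right) \\
    \theta_{k-1} + \Delta\theta
    \end{pmatrix}.
    \]

    Estimating this three-state pose, rather than a full six-degree-of-freedom pose, is what lets the planar assumption greatly reduce the complexity of the navigation system equations.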

