研究生 (Student): | 尹克清 (Christian Ivancsits) |
論文名稱 (Thesis Title): | 應用視覺導航系統於小型無人飛行載具 (Visual Navigation System for Small Unmanned Aerial Vehicles) |
指導教授 (Advisor): | 李敏凡 (Min-Fan Ricky Lee) |
口試委員 (Committee Members): | 蔡明忠 (Ming-Jong Tsai), 楊亦東 (I-Tung Yang) |
學位類別 (Degree): | 碩士 (Master) |
系所名稱 (Department): | 工程學院 自動化及控制研究所 (Graduate Institute of Automation and Control, College of Engineering) |
論文出版年 (Publication Year): | 2010 |
畢業學年度 (Academic Year): | 98 |
語文別 (Language): | 英文 (English) |
論文頁數 (Pages): | 136 |
外文關鍵詞 (Keywords): | Machine vision, visual odometry, robust feature tracking, absolute orientation, SIFT, RANSAC, autonomous navigation, unmanned aerial vehicle, networked control system |
In recent years, small Unmanned Aerial Vehicles (UAVs) have experienced a strong boost in performance, opening the prospect of numerous military and civil applications such as surveillance, monitoring, and inspection. However, the lack of effective autonomous navigation capabilities has severely limited the opportunities for deployment. Visual navigation methods are attractive candidates because of the low weight of video cameras. The major issues in developing a visual navigation system for small UAVs are: 1) the technical constraints of the platform, 2) robust image feature matching, and 3) an efficient and precise method for visual navigation. This thesis addresses these three issues, provides methods for their solution, and evaluates their feasibility and effectiveness.
The technical constraints of small UAVs preclude on-board computation for visual navigation. This limitation is overcome with the proposed wireless networked control system, which offloads the data processing from the UAV to a ground-based process computer. Feature matching, the front-end of all feature-based visual navigation methods, is addressed with a robust method based on SIFT feature descriptors that achieves real-time performance by forgoing the explicit scale invariance of image features. The presented navigation concept implements a visual odometry system with a single calibrated camera. The proposed method incrementally reconstructs the camera path and the structure of the environment from two-view epipolar geometry, followed by sparse bundle adjustment.
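As a minimal illustration of the ratio-test matching that underlies SIFT-style front-ends (the toy 4-D descriptors and the 0.8 threshold below are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: accept a
    match only if the best distance is clearly smaller than the
    second-best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy 4-D "descriptors": rows 0 and 1 of `a` have clear counterparts in
# `b`; row 2 is ambiguous (two near-identical candidates) and should be
# rejected by the ratio test.
a = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
b = np.array([[1.0, 0.1, 0.0, 0.0],
              [0.1, 1.0, 0.0, 0.0],
              [0.45, 0.55, 0.0, 0.0],
              [0.55, 0.45, 0.0, 0.0]])
print(match_descriptors(a, b))  # → [(0, 0), (1, 1)]
```

Discarding ambiguous matches at this stage keeps the number of false correspondences low before any geometric verification is applied.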
The concept for a wireless networked control system was evaluated with latency and throughput measurements in different environments. The experimental setup, conforming to the IEEE 802.11n standard, achieves an average latency of 1.3 ms and a data throughput of 3,000 kB/s up to a distance of 70 m. The results demonstrate the feasibility of real-time closed-loop navigation control with the proposed concept.
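A latency measurement of this kind can be sketched as a round-trip-time probe; the UDP echo over the local loopback below is a hypothetical stand-in for the actual IEEE 802.11n link and ground-station software used in the thesis:

```python
import socket
import threading
import time

def udp_echo_server(sock):
    """Echo every datagram back to its sender until told to stop."""
    while True:
        data, addr = sock.recvfrom(4096)
        if data == b"stop":
            break
        sock.sendto(data, addr)

def measure_rtt(n=100, payload=b"x" * 64):
    """Mean round-trip time of n small datagrams, in seconds."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # ephemeral port on loopback
    threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.connect(server.getsockname())
    client.settimeout(2.0)  # fail fast instead of hanging on packet loss
    rtts = []
    for _ in range(n):
        t0 = time.perf_counter()
        client.send(payload)
        client.recv(4096)
        rtts.append(time.perf_counter() - t0)
    client.send(b"stop")
    return sum(rtts) / len(rtts)

print(f"mean loopback RTT: {measure_rtt() * 1e3:.3f} ms")
```

Averaging over many probes smooths out scheduling jitter, which on a wireless link would otherwise dominate a single measurement.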
The presented feature matching method was tested with ten frames of a benchmark image sequence. The evaluation shows results comparable to SIFT in the number of feature correspondences, and superior performance with respect to the number of false matches when applied to visual navigation. The proposed method computes up to 8.4 times faster than SIFT on images of 640×480 pixels.
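The role of RANSAC in rejecting false matches can be sketched with a deliberately simplified motion model (a pure 2D translation instead of the epipolar model used in the thesis); all data below are synthetic:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.5, rng=None):
    """RANSAC estimate of a 2D translation mapping src -> dst.

    One correspondence is the minimal sample: it proposes a translation,
    and the consensus set holds all correspondences it explains within
    tol. The best hypothesis is refined on its consensus set.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                               # hypothesis
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refine
    return t, best_inliers

rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(30, 2))
dst = src + np.array([5.0, -3.0])                # true translation
dst[:5] += rng.uniform(20.0, 40.0, size=(5, 2))  # 5 gross outliers
t, inliers = ransac_translation(src, dst)
print(t, int(inliers.sum()))  # → [ 5. -3.] 25
```

The same sample-hypothesize-verify loop applies unchanged when the model is a fundamental or essential matrix; only the minimal sample size and the residual change.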
The visual odometry was evaluated with real-world image sequences. The proposed method achieved an error of 1.65% with respect to the total path length of 9.43 m on a circular trajectory. The reconstruction from 840 images comprises 42 camera positions and 2,113 3D world points.
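Evaluating trajectory error of this kind typically requires first aligning the reconstructed path to ground truth. The sketch below uses the closed-form SVD (Kabsch) alignment, which yields the same rotation as Horn's quaternion method for absolute orientation, and reports drift as a percentage of path length; the helical test trajectory is synthetic, not data from the thesis:

```python
import numpy as np

def align_and_error(est, gt):
    """Rigidly align an estimated camera path to ground truth and
    report the largest residual as a percentage of total path length.

    The rotation comes from the Kabsch SVD solution, which gives the
    same result as Horn's closed-form quaternion method."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    aligned = (est - mu_e) @ R.T + mu_g
    path_len = np.linalg.norm(np.diff(gt, axis=0), axis=1).sum()
    drift = np.linalg.norm(aligned - gt, axis=1).max()
    return 100.0 * drift / path_len

# Synthetic helical trajectory and an estimate that differs from it by
# an exact rigid transform (rotation about z plus a translation), so
# the reported drift should be essentially zero.
theta = np.linspace(0.0, 2.0 * np.pi, 42)
gt = np.stack([1.5 * np.cos(theta), 1.5 * np.sin(theta), 0.05 * theta], axis=1)
ang = 0.3
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
est = gt @ Rz.T + np.array([2.0, -1.0, 0.5])
print(f"drift: {align_and_error(est, gt):.4f}% of path length")
```

Because a monocular reconstruction also has a scale ambiguity, a full evaluation would additionally estimate a similarity scale; the rigid case above shows the core alignment step.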