
Author: 洪瑋胤 (Yin-Wei Hong)
Title: 基於人工智慧之無人飛行機目標量測 (Target Measurement of Unmanned Aerial Vehicles Based on Artificial Intelligence)
Advisor: 李敏凡 (Min-Fan Lee)
Committee members: 柯正浩 (Zheng-Hao Ke), 蔡政安 (Zhen-Gan Cai)
Degree: Master
Department: College of Engineering, Graduate Institute of Automation and Control
Publication year: 2018
Graduation academic year: 106
Language: English
Pages: 61
Keywords (Chinese): 人工智慧、影像處理、立體視覺、尺寸量測、無人飛行機
Keywords (English): Artificial Intelligence, Image Processing, Stereo Vision, Size Measurement, Unmanned Aerial Vehicle
    The main research content of this thesis is to use a UAV equipped with a single camera, together with stereo vision and image processing techniques, to measure the size of a target (its minimum bounding rectangle) and the distance between the target and the UAV. Many kinds of sensors, such as lasers and sonar, are mounted on robots of various forms to perceive surrounding objects and obstacles. Unlike those sensors, however, a camera is inexpensive, easy to mount, and already present on almost every UAV. In the first part of this study we therefore use the camera as the primary sensor to collect the required experimental data. The experimental results show that, in the absence of excessive external disturbances (such as turbulence, lighting changes, or other unknown objects), the error between the measured and actual values can be kept within 20% to 30%.
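The single-camera triangulation behind the distance measurement above can be sketched as follows. This is a minimal illustration of the standard stereo relation Z = f·B/d, not the thesis's actual code; the focal length, baseline, and disparity values are all made up for the example:

```python
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated distance Z = f * B / d from two views taken a baseline apart."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 0.5 m sideways move, 35 px disparity.
print(stereo_distance(700.0, 0.5, 35.0))  # -> 10.0 (metres)
```

For a UAV with one camera, the two views come from two flight positions rather than two lenses, so the baseline is the distance flown between shots.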
    The second part of this thesis applies decision-tree and neural-network algorithms from machine learning to the image processing, identifying the target and then tracking it with the UAV. In conventional neural networks for target recognition, the input layer consists of the pixels of the target image, so the input layer is very large and training takes a long time. Because a UAV's flight time is very limited, this thesis changes the input layer of the network in order to achieve real-time recognition and tracking on board: instead of pixel values, the inputs are features of the target image, such as the brightness distribution, the RGB histogram, the contour area, and corner points, and the decision-tree algorithm takes the same image features as input. Although this network has few input nodes and hidden layers, and the amount of training data is limited, the recognition results during flight still achieve good accuracy.
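The feature-based input described above can be sketched as a small feature extractor feeding a hand-rolled decision rule. Everything here is illustrative, not the thesis's actual features or thresholds: the "images" are tiny synthetic grayscale arrays, and the single-split tree stands in for the trained decision tree:

```python
def brightness_histogram(img, bins=4):
    """Coarse brightness histogram: fraction of pixels per intensity bin (0-255)."""
    flat = [p for row in img for p in row]
    counts = [0] * bins
    for p in flat:
        counts[min(p * bins // 256, bins - 1)] += 1
    return [c / len(flat) for c in counts]

def classify(hist):
    """One-split decision tree over the feature vector (threshold illustrative)."""
    if hist[3] > 0.5:          # mostly bright pixels -> candidate target
        return "target"
    return "background"

bright = [[220, 230], [240, 210]]   # tiny synthetic 2x2 grayscale images
dark = [[10, 20], [30, 15]]
print(classify(brightness_histogram(bright)))  # -> target
print(classify(brightness_histogram(dark)))    # -> background
```

The point of the design is dimensionality: a handful of features replaces thousands of raw pixel inputs, which is what makes on-board training and inference feasible within a UAV's flight time.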
    The third part of this thesis proposes an averaging algorithm, designed specifically for UAVs that capture images and measure distance with a single camera, to improve the accuracy of the measurement results. Most stereo-vision ranging studies use two cameras that capture photos simultaneously and then compute the distance to the target from trigonometry, the camera focal length, and the baseline between the two cameras. A UAV with only a single camera must instead take photos at two different positions to obtain the stereo effect. Unlike ground robots, however, a UAV in flight is often disturbed by turbulence, so the captured images contain large errors and the measurement accuracy drops. This part therefore proposes an averaging algorithm that effectively improves the accuracy of the results and keeps the error around a stable mean.
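The averaging idea can be sketched as a sliding-window mean over repeated single-camera distance readings; the window size and the noisy readings below are illustrative assumptions, not the thesis's actual algorithm parameters:

```python
def averaged_distance(measurements, window=5):
    """Average the last `window` distance readings to damp the frame-to-frame
    noise that turbulence adds to single-camera stereo measurements."""
    recent = measurements[-window:]
    return sum(recent) / len(recent)

noisy = [10.4, 9.7, 10.1, 9.9, 10.3, 9.6]  # metres, synthetic readings
print(round(averaged_distance(noisy), 2))  # -> 9.92
```

Averaging trades a little latency for stability: individual readings may swing with turbulence, but the windowed mean stays close to the true distance.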


    In many unmanned aerial vehicle (hereafter UAV) missions, we need to know the size of the target the vehicle interacts with. For example, when a UAV is to fly through a window or an irregular obstacle to enter a building, it must know whether the opening is large enough for it to pass. If the target to be measured is located in a place that is hard for humans to reach, however, it is difficult to measure its size accurately with an ordinary camera.
    The main research content of this thesis is to use a UAV equipped with a single camera lens, together with stereo vision, image processing, artificial intelligence, and an averaging algorithm we designed, to measure the size of the target (its minimum bounding rectangle). Because a UAV can move to more places than humans or other types of robots, the problems mentioned above can be overcome.
    The experimental results show that the error between the measured and actual values can be kept within 10% to 15% in the absence of excessive external disturbances (such as turbulence, lighting changes, and other unknown objects).
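Once the distance to the target is known, the pinhole camera model converts the bounding rectangle's size in pixels back to metres. This is a generic sketch of that projection step, with made-up numbers, not the thesis's implementation:

```python
def target_size_m(size_px: float, distance_m: float, focal_px: float) -> float:
    """Side of the minimum bounding rectangle projected back to metres.
    The pinhole model gives real_size = pixel_size * Z / f."""
    return size_px * distance_m / focal_px

# Illustrative: a 140 px wide rectangle, 10 m away, 700 px focal length.
print(target_size_m(140.0, 10.0, 700.0))  # -> 2.0 (metres)
```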

    Abstract (Chinese)
    Abstract
    Chapter 1 - Introduction
    Chapter 2 - Distance and Size Measuring
        1. Distance Measuring
        2. Size Measuring
        3. Target Lock
        4. Overall Structure and Operation Mode
    Chapter 3 - Target Tracking Algorithm
        1. Image Brightness Histogram
        2. Image RGB Three-Color Histogram and Comparison
        3. Image Feature Contour and Area
        4. Corner Feature Detection
        5. Decision Tree Algorithm
        6. Artificial Neural Network
    Chapter 4 - Average Algorithm
    Chapter 5 - Results
        1. Distance and Size Measuring
        2. Target Tracking Algorithm
        3. Average Algorithm
    Chapter 6 - Conclusion
    References

