Student: Kuen-Fong Lin (林昆鋒)
Thesis title: Development of Humanoid Soccer Robot Machine Vision System with Deep Learning (深度學習之人形足球機器人機器視覺系統開發)
Advisor: Chung-Hsien Kuo (郭重顯)
Oral examination committee: Ching-Chang Wong (翁慶昌), Sheng-Luen Chung (鍾聖倫), Tien-Ruey Hsiang (項天瑞), Kai-Lung Hua (花凱龍), Chung-Hsien Kuo (郭重顯)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2017
Graduation academic year: 105 (2016-2017)
Language: Chinese
Number of pages: 67
Chinese keywords: 機器視覺 (machine vision), 人型機器人 (humanoid robot), 深度學習 (deep learning), 物件辨識 (object recognition)
Foreign-language keywords: Machine vision, Humanoid robot, Deep learning, Object recognition

  This thesis, aimed at practical deployment in the RoboCup international robot competition, proposes a machine vision system for a biped humanoid soccer robot based on a deep learning model, so that the robot can recognize the soccer ball and other robots in its camera images and track the ball. Under the earlier RoboCup rules, the playing field was green with white field lines, the robots of the two competing teams wore red or blue markers, and matches were played with an orange ball and yellow goals; in past competitions, teams therefore relied mainly on color differences as the principal feature for recognizing target objects. However, as the rules changed year by year, the field gradually came to resemble an ordinary soccer scene: starting in 2015 the ball was replaced by a standard white patterned match ball and the goal frame was changed from yellow to white, greatly increasing the complexity of recognition for the robots. To enable the robot to pick out the ball among the crisscrossing white lines, to reduce the parameter tuning needed before matches, and to gain flexibility against future changes to the game objects, the machine vision system in this thesis is built on a deep convolutional neural network: it is trained offline in a supervised manner using the second version of the You Only Look Once (YOLOv2) network structure, and it outputs bounding-box detections of the ball class and the robot class in complex environments. In addition, the area of the ball region predicted by the deep learning model is compared experimentally with the area the ball actually occupies in the image, both to evaluate the effectiveness of training and to select the model with the better bounding-box accuracy. The ball's image coordinates are then transformed into the robot's internal coordinate system to obtain the ball's actual position, completing image-based ball localization and tracking in complex and match scenes. With this system, our team won second place in the humanoid soccer competition at RoboCup 2017.
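The abstract describes the YOLOv2 detector only at a high level. As a minimal sketch of the general technique, the NumPy code below decodes a YOLOv2-style output tensor into bounding boxes and selects the highest-confidence ball detection; the grid size, anchor priors, class list, and confidence threshold are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

# Illustrative assumptions (not the thesis' actual configuration):
# a 13x13 output grid, 5 anchor boxes, and two classes: ball and robot.
GRID = 13
ANCHORS = np.array([[1.0, 1.0], [2.0, 2.0], [3.5, 3.5], [5.0, 5.0], [8.0, 8.0]])  # (w, h) in grid cells
CLASSES = ["ball", "robot"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolov2(raw, conf_threshold=0.3):
    """Decode a raw YOLOv2 output tensor of shape (GRID, GRID, B, 5 + C)
    into a list of (class_name, score, x, y, w, h) detections with
    coordinates normalized to [0, 1] relative to the image."""
    detections = []
    for row in range(GRID):
        for col in range(GRID):
            for b, (pw, ph) in enumerate(ANCHORS):
                tx, ty, tw, th, to = raw[row, col, b, :5]
                class_logits = raw[row, col, b, 5:]
                # YOLOv2 box parameterization: sigmoid offsets within the
                # grid cell plus anchor-scaled exponential width/height.
                x = (col + sigmoid(tx)) / GRID
                y = (row + sigmoid(ty)) / GRID
                w = pw * np.exp(tw) / GRID
                h = ph * np.exp(th) / GRID
                probs = np.exp(class_logits - class_logits.max())
                probs /= probs.sum()
                score = sigmoid(to) * probs.max()
                if score >= conf_threshold:
                    detections.append((CLASSES[int(probs.argmax())], float(score), x, y, w, h))
    return detections

def best_ball(detections):
    """Return the highest-scoring ball detection, or None if no ball is seen."""
    balls = [d for d in detections if d[0] == "ball"]
    return max(balls, key=lambda d: d[1]) if balls else None

if __name__ == "__main__":
    # Random tensor, used only to exercise the decoding path.
    raw = np.random.randn(GRID, GRID, len(ANCHORS), 5 + len(CLASSES))
    print(best_ball(decode_yolov2(raw, conf_threshold=0.9)))
```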


  This thesis proposes a deep-learning-based machine vision system for a biped humanoid soccer robot. The system was developed for the RoboCup international robot competition to recognize objects of two classes, namely the soccer ball and humanoid robots, and to track the soccer ball automatically. In the RoboCup humanoid soccer event, the field lines, the soccer ball, and the goal are all white, the field is green, and the robots of the two teams are distinguished by red and blue tapes. Because both the ball and the field lines are white, conventional image-processing methods have difficulty separating the ball from the lines when the ball lies near them. A deep neural network was therefore used to recognize the soccer ball on the field and to generate a bounding box around it; the object detector in this system is the You Only Look Once (YOLO) deep learning algorithm. The bounding boxes predicted by the deep learning model were collected and compared against ground-truth data, which allowed the location of the soccer ball to be estimated more accurately. The bounding-box coordinates were then used to compute the floor coordinates of the ball with respect to the robot. Finally, the system was tested at RoboCup 2017, where, based on the proposed approaches, our team took second place in the teen-size humanoid soccer game.
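The abstract states that the bounding-box coordinates are used to compute the floor coordinates of the ball with respect to the robot. One common way to do this, sketched below, is to back-project the box center through a pinhole camera model and intersect the ray with the ground plane; the intrinsics, camera height, and downward tilt used here are placeholder assumptions rather than the thesis' calibration values, and the ball radius is ignored for simplicity.

```python
import numpy as np

def ball_floor_position(u, v, fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                        cam_height=0.55, cam_pitch_deg=30.0):
    """Project the pixel (u, v) at the center of the ball bounding box onto
    the ground plane and return (X, Y) in a robot frame whose origin lies on
    the floor below the camera, with X forward and Y to the robot's left.

    fx, fy, cx, cy : pinhole intrinsics (placeholder values)
    cam_height     : camera height above the floor in meters (assumed)
    cam_pitch_deg  : downward tilt of the optical axis from horizontal (assumed)
    """
    theta = np.deg2rad(cam_pitch_deg)
    # Ray through the pixel in the camera frame (x right, y down, z forward).
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    # Rotate the ray into the robot frame (camera pitched down by theta).
    rx = np.cos(theta) - dy * np.sin(theta)   # forward component
    ry = -dx                                  # leftward component
    rz = -np.sin(theta) - dy * np.cos(theta)  # vertical component (negative = downward)
    if rz >= 0:
        return None  # ray does not hit the floor (pixel at or above the horizon)
    t = cam_height / -rz                      # scale so the ray reaches the floor
    return (t * rx, t * ry)

if __name__ == "__main__":
    # Example: ball box centered slightly below and right of the image center.
    print(ball_floor_position(400, 300))
```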

Advisor's recommendation letter; Oral examination committee approval; Acknowledgments; Abstract (Chinese); Abstract (English); Table of Contents; List of Tables; List of Figures; Nomenclature
Chapter 1 Introduction
  1.1 Research background and motivation
  1.2 Research objectives
  1.3 Literature review
    1.3.1 Research on vision and image processing
    1.3.2 Research on deep learning networks for robots
    1.3.3 Research on localization systems
    1.3.4 Research on soccer robots
  1.4 Thesis organization
Chapter 2 System Architecture
  2.1 System overview
  2.2 System platforms
    2.2.1 Machine vision deep learning platform
    2.2.2 Humanoid robot platform
  2.3 System workflow
Chapter 3 Deep Learning Vision Network for the Humanoid Soccer Robot
  3.1 Deep convolutional neural network architecture
    3.1.1 Convolutional neural networks
    3.1.2 Convolutional layer
    3.1.3 Activation layer
    3.1.4 Pooling layer
    3.1.5 Multilayer feed-forward neural networks
  3.2 You Only Look Once network learning architecture
    3.2.1 Convolutional network structure design
    3.2.2 Convolutional network training procedure
    3.2.3 Single-pass object detection procedure
  3.3 Learning performance tests
    3.3.1 Recognition generality test
    3.3.2 Application scenario test
    3.3.3 Bounding-box result test
Chapter 4 Ball Tracking System for the Soccer Robot
  4.1 Target localization
  4.2 Motion error suppression
  4.3 Robot tracking behavior
Chapter 5 Experimental Results and Analysis
  5.1 Analysis of object recognition results
    5.1.1 Ball bounding-box result test
    5.1.2 Robot object recognition test
  5.2 Target localization analysis
  5.3 Application at RoboCup 2017
Chapter 6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future research directions
References

Full-text release date: 2022/08/23 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan NDLTD system)