
Author: Chao-Cheng Li (李晁政)
Title: Kinect-based human-following mobile robot (基於Kinect影像之特定人員跟隨移動機器人)
Advisor: Ching-Long Shih (施慶隆)
Committee members: Teng-Yi Huang (黃騰毅), Ya-Shu Chen (陳雅淑), Wen-Yo Lee (李文猶), Ching-Long Shih (施慶隆)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2017
Graduation academic year: 105 (ROC calendar, 2016-2017)
Language: Chinese
Pages: 68
Keywords: Kinect, mobile robot, laser range finder (LRF), obstacle avoidance, human-following, PI controller

This thesis applies a Kinect camera to implement a human-following mobile robot that can follow a specific person in an indoor environment. First, the Kinect is used to extract the human skeleton, and the person who performs a predefined start gesture is registered as the operator by recording the color of his or her upper garment. The mobile robot then performs image recognition based on this garment color and sets it as the following target. Next, the relative distance and heading angle to the target are computed from the Kinect depth information and the three-dimensional camera coordinate system, and the robot's motion commands are derived from them; a laser range finder is integrated to provide obstacle avoidance. Finally, a closed-loop PI speed controller makes the mobile robot follow the specific operator automatically.
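The range-and-bearing computation described in the abstract can be sketched as follows. This is a minimal illustration using a pinhole camera model with nominal Kinect v1 depth intrinsics; `FX` and `CX` are assumed values, not calibrated parameters from the thesis.

```python
import math

# Nominal Kinect v1 depth-camera intrinsics (assumptions, not the
# thesis's calibrated values).
FX = 571.0   # focal length in pixels (horizontal)
CX = 320.0   # principal point x for a 640x480 depth image

def target_distance_and_angle(u, depth_mm):
    """Estimate the target's range and bearing from one depth pixel.

    u        : target's column coordinate in the depth image (pixels)
    depth_mm : Kinect depth reading at that pixel (millimetres)
    Returns (distance_m, angle_rad); a positive angle means the target
    lies to the right of the optical axis.
    """
    z = depth_mm / 1000.0        # forward distance along the optical axis (m)
    x = (u - CX) * z / FX        # lateral offset from the pinhole model (m)
    distance = math.hypot(x, z)  # straight-line range to the target
    angle = math.atan2(x, z)     # bearing relative to the camera axis
    return distance, angle
```

In practice the pixel `u` would come from the tracked skeleton joint or the garment-color blob centroid, and the depth value would be averaged over a small window to reduce sensor noise.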


This thesis aims to implement a human-following mobile robot equipped with a Kinect sensor, so that the robot can follow a specific person in an indoor environment. The Kinect sensor is used to detect the skeleton of the human body. The operator is defined as the person who performs a default start gesture, and the color of his or her upper garment is recorded. This garment color is set as the following target, so that the robot follows the person dressed in that color. The relative distance and heading angle to the target are calculated from the depth information and the Kinect camera's Cartesian coordinate system, and an integrated laser range finder (LRF) is used to avoid surrounding obstacles. Experimental results show that the mobile robot, driven by a closed-loop PI speed controller, can follow the target person automatically.
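As a rough illustration of the closed-loop PI speed control described above, the sketch below shows a discrete PI loop with simple anti-windup and a hypothetical mapping from the (distance, bearing) following error to differential-drive wheel commands. All gains, limits, and the 0.8 m desired following gap are invented for illustration; the thesis does not publish its controller parameters.

```python
class PIController:
    """Discrete-time PI controller with output clamping."""

    def __init__(self, kp, ki, dt, out_limit):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_limit = out_limit
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp the output and roll back the integrator when saturated
        # (simple anti-windup).
        if u > self.out_limit:
            self.integral -= error * self.dt
            u = self.out_limit
        elif u < -self.out_limit:
            self.integral -= error * self.dt
            u = -self.out_limit
        return u

def follow_command(distance, angle, v_pi, w_pi, gap=0.8):
    """Map the following error to (left, right) wheel-speed commands.

    distance, angle : target range (m) and bearing (rad, positive = right)
    gap             : hypothetical desired following distance (m)
    """
    v = v_pi.update(distance, gap)  # > 0 when the robot lags too far behind
    w = w_pi.update(angle, 0.0)     # > 0 when the target is to the right
    return v + w, v - w             # left wheel faster => robot turns right
```

A real implementation would run `update` at a fixed sampling period matching `dt` and feed the clamped outputs to the wheel motor drivers.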

Abstract (Chinese); Abstract (English); Acknowledgements; Table of Contents; List of Figures; List of Tables
Chapter 1  Introduction
  1.1 Research Objectives and Motivation
  1.2 Literature Review
  1.3 Thesis Outline
Chapter 2  Kinect Image Processing
  2.1 Introduction to Kinect
  2.2 Human Search Strategy
    2.2.1 Human Skeleton Search
    2.2.2 Preprocessing
    2.2.3 ROI Extraction
    2.2.4 ROI Application
  2.3 Angle Generation Module
Chapter 3  Mobile Robot Motion Control and Obstacle Avoidance
  3.1 Mobile Robot System Architecture
    3.1.1 Mobile Robot
    3.1.2 Laser Range Finder
  3.2 Mathematical Model of the Mobile Robot
  3.3 LRF Obstacle-Avoidance Strategy
    3.3.1 LRF Command Transceiver Module
    3.3.2 Obstacle-Avoidance Strategy
  3.4 Motion Control Module
    3.4.1 Image Velocity Module
    3.4.2 Laser Velocity Module
    3.4.3 PI Speed Controller
Chapter 4  Visual Odometry
  4.1 Camera Model and Intrinsic/Extrinsic Parameter Matrices
  4.2 Visual Odometry
  4.3 Applications of the Fundamental Matrix
    4.3.1 Epipolar Geometry
    4.3.2 Fundamental Matrix
    4.3.3 Fundamental Matrix Decomposition
Chapter 5  Experimental Results and Discussion
  5.1 HSV Histogram Comparison Experiment
  5.2 LRF Ranging Experiment
  5.3 Human-Following Experiment
  5.4 Camera Displacement Measurement Experiment
  5.5 Camera Odometry Experiment
  5.6 Comparison with Related Work
Chapter 6  Conclusions and Suggestions
  6.1 Conclusions
  6.2 Suggestions
References
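Section 5.1 above evaluates HSV histogram comparison for recognizing the operator's garment color. A minimal sketch of that idea using hue-only histograms and histogram intersection is shown below; the bin count, the choice of hue only, and the similarity measure are illustrative assumptions, not the thesis's exact settings.

```python
def hsv_histogram(pixels, bins=16):
    """Build a normalized hue histogram from (h, s, v) pixels.

    pixels : iterable of (hue, saturation, value) with hue in [0, 360).
    Saturation and value are ignored in this sketch; a real system
    would threshold out low-saturation (grayish) pixels first.
    """
    hist = [0] * bins
    for h, s, v in pixels:
        hist[int(h / 360.0 * bins) % bins] += 1
    total = sum(hist) or 1
    return [count / total for count in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

At run time, the histogram of the candidate's upper-garment region would be compared against the stored operator histogram, and the candidate accepted as the target when the similarity exceeds a chosen threshold.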

