
Graduate Student: 吳柏均 (Po-Chun Wu)
Thesis Title: 適應性人員追蹤系統開發及其應用 (Development of Adaptive People Tracking System and Its Applications)
Advisor: 郭重顯 (Chung-Hsien Kuo)
Committee Members: 蘇順豐 (Shun-Feng Su), 劉益宏 (Yi-Hung Liu), 翁慶昌 (Ching-Chang Wong)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2021
Graduation Academic Year: 109
Language: English
Number of Pages: 72
Chinese Keywords: 多人追蹤 (multi-person tracking), 姿態識別 (posture recognition), 強化式學習 (reinforcement learning), 服務型機器人 (service robot), 骨架辨識 (skeleton recognition)
English Keywords: Multi-person tracking, Posture recognition, Reinforcement learning, Service robot, OpenPose
Record Views: 188; Downloads: 0
    Image-based multi-person tracking is a challenging problem. It can be applied to mobile service robots that follow people, such as shopping-cart robots, porter robots that transport goods, food delivery robots, and guide robots at tourist attractions. Many studies have developed person-following techniques, but these techniques are limited to fixed cameras and cannot adapt to environmental changes. This study therefore proposes a real-time person-following technique for crowded places with multiple cameras, based on OpenPose and a reinforcement learning method; the approach can also track and distinguish different people.
    This study uses the OpenPose deep learning network to obtain a person's joints and skeleton, from which features of the head, clothing, and pants are extracted, scored, and used to define a person ID. A reinforcement learning method is then applied so that the identification results become more reliable. The learned output is applied to the other cameras for person identification and person following, and the Kalman filter is used to stabilize the tracked coordinates. Experimental results show that, with the proposed feature extraction and identification training, the average position detection error in trajectory tracking is 2.77 cm with a standard deviation of 2.22. In the multi-person tracking experiment, the system tracks four target persons well. In experiments with a humanoid robot and an automated guided vehicle, the robots accurately follow the target person. The system was also verified with multiple cameras and obtained satisfactory results, distinguishing and tracking the movement trajectory of each target person.
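
    The abstract describes scoring head, clothing, and pants features and combining them with weights tuned by reinforcement learning to assign a person ID. The following is a minimal sketch of that general idea under stated assumptions, not the author's implementation: the cosine-similarity scoring, the fixed weight values, the threshold, and the `match_person` helper are all illustrative.

```python
# Minimal sketch (not the thesis code): combine per-part appearance
# similarities with weights (e.g., weights that an RL procedure might tune)
# to pick the best-matching person ID, or declare a new person.
import numpy as np

def part_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two appearance feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def match_person(query: dict, gallery: dict, weights: dict, threshold: float = 0.6):
    """Return (best_id, best_score); best_id is None if no weighted score
    exceeds the threshold, i.e., the query is treated as a new person."""
    best_id, best_score = None, threshold
    for pid, parts in gallery.items():
        score = sum(weights[p] * part_similarity(query[p], parts[p])
                    for p in ("hair", "shirt", "pants"))
        if score > best_score:
            best_id, best_score = pid, score
    return best_id, best_score

# Example usage with random placeholder features.
rng = np.random.default_rng(0)
gallery = {pid: {p: rng.random(16) for p in ("hair", "shirt", "pants")}
           for pid in ("person_1", "person_2")}
query = {p: gallery["person_2"][p] + 0.05 * rng.random(16)
         for p in ("hair", "shirt", "pants")}
weights = {"hair": 0.2, "shirt": 0.5, "pants": 0.3}  # illustrative weights
print(match_person(query, gallery, weights))          # matches "person_2"
```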


    Multi-person tracking using image processing is a challenging problem and can be applied to mobile service robots that follow humans, such as shopping-cart robots, porter robots for transporting goods, food delivery robots, and guide robots at tourist attractions. Many studies have developed human-following techniques, but they are limited to a single camera and do not adapt to environmental changes. Therefore, this study develops a real-time human-following technique for crowded places with multiple cameras, using OpenPose and a reinforcement learning method. The system is designed to track the followed person and distinguish that person from others nearby.
    In this study, OpenPose detects the human body through the joints and skeleton of the target. From the detection results, features of the head, shirt, and pants are extracted and used as a personal identification (ID). The identification results are fed into a reinforcement learning procedure, and the learned output serves as a reference for person identification and tracking by the other cameras. To obtain smoother tracking results, the Kalman filter is applied. The experimental results show good system performance: the average position detection error is 2.77 cm with a standard deviation of 2.22. In the multi-person tracking experiments, the system tracks the four target subjects well. In experiments using a humanoid robot and an automated guided vehicle, both platforms follow the movement of the target person. Tests on multiple cameras also yield satisfactory results, as the system distinguishes and tracks the movement of each person involved.
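
    Both abstracts mention smoothing the tracked coordinates with a Kalman filter. Below is a minimal, self-contained sketch of a constant-velocity Kalman filter for noisy 2D position detections; the state model, `dt`, and the noise covariances are assumptions chosen for illustration, not values from the thesis.

```python
# Sketch of 2D position smoothing with a constant-velocity Kalman filter.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt: float = 1.0):
        # State: [x, y, vx, vy]; measurement: [x, y].
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0                             # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                      # motion model
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0                     # observe x, y only
        self.Q = np.eye(4) * 0.01                             # process noise (assumed)
        self.R = np.eye(2) * 1.0                              # measurement noise (assumed)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z = (x, y).
        y = np.asarray(z, dtype=float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                     # smoothed (x, y)

# Example usage on a short, noisy trajectory.
kf = ConstantVelocityKF()
for z in [(0.0, 0.0), (1.1, 0.9), (2.0, 2.2), (3.1, 2.9)]:
    print(kf.step(z))
```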

    Table of Contents
    Advisor's Recommendation
    Examination Committee Approval
    Acknowledgments
    Abstract (Chinese)
    Abstract (English)
    List of Tables
    List of Figures
    Nomenclature
    Chapter 1 Introduction
      1.1 Motivation and Purpose
      1.2 Literature Review
        1.2.1 Research on Human Posture
        1.2.2 Research on Feature Identification
        1.2.3 Related Research on Cross-border Tracking
        1.2.4 Related Research on Service Robot Following
      1.3 Organization of the Thesis
    Chapter 2 System Architecture and Operation
      2.1 System Organization
        2.1.1 Vision Device
        2.1.2 Camera Settings
      2.2 Hardware Experiment Platform
        2.2.1 Mobile Robot
        2.2.2 Humanoid Robot
    Chapter 3 Person Identification System Design and Tracking Application
      3.1 Human Body Posture Recognition
        3.1.1 VGG-19 Deep Neural Network
        3.1.2 Part Confidence Maps (PCM)
        3.1.3 Part Affinity Fields (PAF)
      3.2 Person Identification Detection System
        3.2.1 Initialize and Extract the Person Characteristic Value
        3.2.2 Color Space Conversion
        3.2.3 Dilation Operation and Erosion Operation
        3.2.4 Connected Component Labeling
        3.2.5 Hair Style Extraction
        3.2.6 Clothing Feature Extraction
        3.2.7 Deep Feature Extraction
      3.3 Store in Temporary Memory
      3.4 Reinforcement Learning - Get Best Weight
        3.4.1 Define the State Space
        3.4.2 Define the Action Space
        3.4.3 Define the Reward Function
        3.4.4 Update the Reinforcement Learning Algorithm
      3.5 Coordinate Stability - Kalman Filter
    Chapter 4 Experiments and Results
      4.1 Person Positioning Accuracy Experiment
      4.2 Multi-person Tracking (Multi-person Interleaving) Person Feature Value Identification Experiment
      4.3 Dynamic Tracking of Human Follower Robot
        4.3.1 Humanoid Robot Follower
        4.3.2 Mobile Robot Follower
      4.4 Cross-border Tracking
    Chapter 5 Conclusions and Future Works
    References
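
    Sections 3.2.2-3.2.4 of the outline above list color space conversion, dilation/erosion, and connected component labeling as steps of the feature extraction stage. The sketch below shows how such a pipeline might look with OpenCV; the HSV bounds, kernel size, and the `largest_color_blob` helper are hypothetical stand-ins, not values or code from the thesis.

```python
# Illustrative sketch of an HSV threshold + morphology + connected-component
# pipeline for isolating a dominant clothing-colored region in an image patch.
import cv2
import numpy as np

def largest_color_blob(roi_bgr: np.ndarray, lower_hsv, upper_hsv):
    """Return the mask of the largest connected region within the HSV range."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)            # color space conversion
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel)                             # remove small speckles
    mask = cv2.dilate(mask, kernel)                            # restore the main region
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:                                                 # only background found
        return np.zeros_like(mask)
    # Skip label 0 (background) and keep the largest component by area.
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8) * 255

# Example usage on a synthetic patch containing a red "shirt" block.
patch = np.zeros((80, 80, 3), np.uint8)
patch[20:60, 20:60] = (0, 0, 200)                              # red in BGR
mask = largest_color_blob(patch, (0, 100, 100), (10, 255, 255))
print(mask.sum() // 255, "pixels in the largest blob")
```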


    Full-text release date: 2024/07/10 (campus network)
    Full text not authorized for public access (off-campus network)
    Full text not authorized for public access (National Central Library: Taiwan theses and dissertations system)