Graduate Student: Shih-Sheng Chen (陳仕昇)
Thesis Title: A Moving Action Recognition System Based on Human Skeletonization (基於人體骨架辨識之動作辨識系統)
Advisor: Shanq-Jang Ruan (阮聖彰)
Committee Members: Chang-Hong Lin (林昌鴻), Chih-Yuan Yao (姚智原), Kun-Lin Tsai (蔡坤霖)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2014
Graduation Academic Year: 102 (AY 2013-2014)
Language: English
Pages: 65
Chinese Keywords: 體感 (motion-sensing), 深度攝影機 (depth camera), 動作辨識 (action recognition), 支持向量機 (support vector machine)
English Keywords: motion-sensing, depth camera, action recognition, support vector machine (SVM)

In recent years, motion-sensing technology has been developing rapidly and its applications have become increasingly broad. Without a traditional controller, people gain great convenience from this technology and can build more interesting applications. When developing motion-sensing applications, however, because people differ in body shape, skeleton, and structure, every kind of action control requires a complicated design. In this thesis, we propose an action recognition system based on skeleton information: human skeleton data are captured with a depth camera; after a coordinate transformation, feature sequences are collected by partitioning the region space for the fourteen actions to be recognized; finally, a support vector machine (SVM) is used to classify the feature sequences. Experimental results show that the proposed system can correctly detect the fourteen actions.


Nowadays, applications of motion-sensing technology are growing quickly. With motion-sensing technology, people can enjoy more convenient and interesting applications without a traditional controller. Developing motion-sensing applications, however, requires a complicated design for every kind of action control. Besides, it is difficult to design a general action recognition method because the skeleton structure differs from person to person. In this thesis, an action recognition system is proposed that uses human skeleton information obtained from an RGB-D camera. After a coordinate transformation, the region space around the body is partitioned and a feature sequence is collected for each of the fourteen target actions. Finally, a support vector machine is used to classify the feature sequences. The experimental results show that the proposed system can recognize the fourteen actions correctly.
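
To make the pipeline concrete, the following is a minimal Python sketch of the flow the abstract describes: coordinate transformation of the captured joints, region-space feature extraction, and SVM classification. It is an illustration under assumptions, not the thesis implementation; the torso-centred transform, the eight-octant region encoding, and the fixed clip length are all choices made for this example.

    # Minimal sketch of the recognition flow described in the abstract.
    # Assumptions (not from the thesis): each frame is a (J, 3) array of
    # camera-space joints, the coordinate transform re-centres joints on a
    # torso joint, the "region space" is modelled as the 8 octants around
    # the torso, and every clip has the same number of frames.
    import numpy as np
    from sklearn.svm import SVC

    TORSO = 0  # assumed index of the torso joint

    def to_body_coords(frame):
        # Translate camera-space joints so the torso is the origin.
        return frame - frame[TORSO]

    def region_code(joint):
        # Encode which of the 8 octants around the torso the joint lies in.
        x, y, z = joint
        return int(x > 0) + 2 * int(y > 0) + 4 * int(z > 0)

    def action_feature(frames):
        # One region code per joint per frame, flattened into a vector.
        codes = [[region_code(j) for j in to_body_coords(f)] for f in frames]
        return np.asarray(codes, dtype=float).ravel()

    def train_classifier(clips, labels):
        # clips: list of clips; labels: one of the 14 action names per clip.
        X = np.stack([action_feature(c) for c in clips])
        clf = SVC(kernel="rbf")  # the SVM classifier named in the abstract
        clf.fit(X, labels)
        return clf

In practice the clips would have to be resampled to a common length before stacking, and the thesis defines its own region model and feature extraction in Chapter 3.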

Recommendation Form
Committee Form
Chinese Abstract
English Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Introduction of human motion technology
  1.2 Introduction of motion detector progression
  1.3 Motivation of motion detector
  1.4 Organization of this thesis
2 Related Works
  2.1 A Review of Human Activity Analysis
    2.1.1 Space-Time Approaches
    2.1.2 Sequential Approaches
  2.2 Support Vector Machine
3 Proposed Method
  3.1 The Architecture of Proposed Method
    3.1.1 The Data Modeling flow
    3.1.2 The Posture recognition flow
    3.1.3 Section Summary
  3.2 Coordinate transform scheme
    3.2.1 Coordinate transformation technique
  3.3 Region Space Model
  3.4 Extraction of Action Feature
4 Experimental Results
  4.1 Developing platform
  4.2 Analysis of region space and action feature
  4.3 Experimental results of proposed method
5 Conclusion
References
Copyright Form

