Graduate Student: 吳治宏 (Chih-Hung Wu)
Thesis Title: 基於深度之家電用手勢控制 (Depth-Based Hand Gesture Recognition for Home Appliance Control)
Advisor: 林昌鴻 (Chang Hong Lin)
Committee Members: 阮聖彰 (Shanq-Jang Ruan), 李佳翰 (Chia-han Lee), 沈毅偉 (Yi-Wei Shen)
Degree: Master
Department: 電資學院 電子工程系 (Department of Electronic and Computer Engineering)
Publication Year: 2013
Graduation Academic Year: 101
Language: English
Number of Pages: 71
Keywords (Chinese): 深度攝影機、靜態手勢辨識、動態手勢辨識、支持向量機、家電控制
Keywords (English): Depth Cameras, Static Hand Posture Recognition, Dynamic Hand Gesture Recognition, Support Vector Machine, Home Appliance Control
Access Count: Views: 269, Downloads: 3
Abstract: Hand gestures are a natural and intuitive way to communicate with people and machines, and hand gesture recognition has a long history in the computer vision community. Since the launch of low-cost depth cameras, depth sensing has become an increasingly affordable consumer technology, creating new opportunities for home appliance control. In this thesis, we propose a dynamic hand gesture recognition system for home appliance control that uses only depth information. The proposed system consists of three main components: preprocessing, static posture recognition, and dynamic gesture recognition. In the preprocessing stage, background subtraction is used to exclude gestures not produced by the main user, and the user's hand is then detected and tracked. In the second stage, the hand region is extracted with an adaptive square; once the hand region is obtained, hand features are extracted from it and the static hand posture is classified with a support vector machine (SVM). Finally, nine dynamic hand gestures commonly used for home appliance control are detected with different methods. In the experiments, static hand posture classification was evaluated on different postures, and dynamic gesture detection was verified with two users at four positions and two depths. The results show that the proposed system accurately detects the dynamic hand gestures and is applicable to home appliance control.
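
The abstract describes a three-stage pipeline: depth-based preprocessing with background subtraction, static posture classification with an SVM, and dynamic gesture detection. The Python sketch below illustrates how the first two stages could be wired together. It is a minimal illustration only, not the thesis's implementation: the depth threshold, the fixed crop size, the histogram features, and the helper names (segment_foreground, crop_hand_region, posture_features) are assumptions introduced here, and scikit-learn's SVC stands in for the support vector machine classifier mentioned in the abstract.

# Minimal sketch (assumptions noted above), not the thesis's actual method.
import numpy as np
from sklearn.svm import SVC

def segment_foreground(depth, background, tol=30):
    # Keep pixels measurably closer than the recorded background depth image
    # (depth in mm); zero depth values are treated as invalid sensor readings.
    return (depth > 0) & (depth < background - tol)

def crop_hand_region(depth, mask, size=96):
    # Crop a square patch around the closest foreground pixel, assuming the
    # user's hand is the nearest object to the camera. The thesis uses an
    # adaptive square; a fixed size is used here for simplicity.
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    i = np.argmin(depth[ys, xs])
    cy, cx = int(ys[i]), int(xs[i])
    half = size // 2
    return depth[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]

def posture_features(patch, bins=16):
    # Toy feature vector: a normalized histogram of depths inside the hand patch.
    hist, _ = np.histogram(patch[patch > 0], bins=bins, range=(400, 1200))
    return hist / max(hist.sum(), 1)

# Training and per-frame prediction (X: feature vectors of labeled postures,
# y: posture labels), e.g.:
#   clf = SVC(kernel="rbf").fit(X, y)
#   posture = clf.predict([posture_features(crop_hand_region(depth, mask))])

The dynamic gesture stage, which combines per-frame postures, hand trajectories, and fingertip information according to the table of contents below, is omitted from this sketch.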

Table of Contents:
Chinese Abstract
Abstract
Acknowledgements
List of Contents
List of Figures
List of Tables
1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Thesis Organization
2 Related Works
  2.1 RGB cameras
  2.2 Depth cameras
3 Proposed Methods
  3.1 Preprocessing
  3.2 Static gesture recognition
    3.2.1 Extract region of hand
    3.2.2 Hand feature extraction
    3.2.3 Static hand posture classification
    3.2.4 Library for Support Vector Machine
  3.3 Dynamic gesture recognition
    3.3.1 Static hand posture based methods
    3.3.2 Hand trajectory based methods
    3.3.3 Fingertip based methods
4 Experimental Results
  4.1 Developing Platform
  4.2 Experiment results of static hand posture classification
  4.3 Experiment results of dynamic gesture recognition
  4.4 Analysis of Proposed Method
    4.4.1 Static hand posture classification analysis
    4.4.2 Dynamic gesture recognition analysis
    4.4.3 Proposed system analysis
5 Conclusions and Future Works
References


    Full-text availability: 2018/07/30 (campus network)
    Full-text availability: not authorized for public release (off-campus network)
    Full-text availability: not authorized for public release (National Central Library: Taiwan NDLTD system)