
Student: 蔡爭岳 (Cheng-Yueh Tsai)
Thesis title: 以類神經網路及3D資料建立靜態臺灣手勢辨識系統
Taiwan sign language recognition based on 3D data and neural networks
Advisor: 李永輝 (Yung-Hui Lee)
Committee members: 謝光進 (Kong-King Shieh), 紀佳芬 (Chia-Fen Chi), 王孔政 (Kung-Jeng Wang), 林久翔, 許尚華, 黃雪玲 (Sheue-Ling Hwang)
Degree: Doctorate
Department: Department of Industrial Management, College of Management
Publication year: 2008
Graduation academic year: 96 (ROC calendar)
Language: English
Pages: 99
Keywords: neural network, hand gesture recognition system, Taiwan Sign Language, VICON system, input devices
  • 中文摘要 (Chinese abstract, translated)

    The design of human-computer interaction (HCI) interfaces has become increasingly important given how heavily we now rely on computers, yet several problems remain unsolved. This study proposes a new hand gesture recognition system (HGRS) that combines the VICON motion-capture system with a back-propagation neural network (BPNN) to collect hand gesture data and perform gesture recognition. Small reflective markers were attached to each fingertip of the right hand and to the CMC3 landmark, and the VICON cameras recorded the spatial position of each marker. From the collected data, features for each hand posture were extracted and normalized; these feature values served as the inputs for training and testing the recognition system. The data were divided into two groups, one for training the system and one for testing it. The gestures collected were twenty common static Taiwan Sign Language (TSL) gestures. Ten subjects participated, five male and five female; five were assigned to the training group and the other five to the test group, and each subject repeated every gesture fifteen times. After training and testing, the correct recognition rates were 98.50% for the training set and 94.65% for the test set, averaging 96.58%. However, the system's training time was 44.9 minutes.
    To improve the system, two experiments were conducted. First, we examined how the system's internal parameters relate to its performance. We found that the number of hidden-layer neurons and the number of hidden layers both affect performance: too many or too few neurons lowers the recognition rate, and at least one hidden layer is essential. The number of training epochs affects the system's generalization ability: too many epochs overtrain the system and degrade generalization, while too few reduce the correct recognition rate. Changes in the learning rate and momentum affect the root-mean-square error (RMSE) during training. The number of features also affects performance; features derived from the distances between fingertips appear to carry more gesture-discriminating information than features derived from the distances between fingertips and CMC3. Finally, a larger training sample may benefit system performance.
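    For reference, the RMSE tracked during training is the usual root-mean-square of the output error; the sketch below gives the standard definition, with the exact averaging convention used in the thesis an assumption.

```latex
% RMSE over N training samples and K output units, where t_{ik} is the
% target and o_{ik} the network output (averaging convention assumed):
\mathrm{RMSE} = \sqrt{\frac{1}{NK}\sum_{i=1}^{N}\sum_{k=1}^{K}\bigl(t_{ik}-o_{ik}\bigr)^{2}}
```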
    To verify that these parameters genuinely have a significant effect on system performance, an analysis of variance (ANOVA) was conducted. With the correct recognition rate as the dependent variable, the number of neurons, the number of features, the neurons-by-layers interaction, and the layers-by-learning-rate interaction were statistically significant. With training time as the dependent variable, the number of neurons, the learning rate, the number of features, the number of layers, the neurons-by-layers interaction, the neurons-by-features interaction, and the layers-by-features interaction were statistically significant.
    Based on these two studies, the best system parameter values were identified. After retraining and retesting with the same data, the correct recognition rates were 98.00% for the training set and 94.40% for the test set, averaging 96.2%, and the training time dropped to 13.8 minutes.


  • Abstract

    Although human-computer interaction (HCI) is increasingly important, several problems remain unsolved. This study proposes a novel hand gesture recognition system (HGRS) that combines the VICON motion-capture system with a back-propagation neural network (BPNN). The VICON system was used to collect hand gesture data: reflective markers placed on the fingertips and the CMC3 landmark captured their 3D locations. Features were extracted from the collected data, normalized, and fed to the BPNN for recognition. The gestures used to test the system were twenty static Taiwan Sign Language (TSL) gestures. Ten subjects participated, five male and five female; five were assigned to the training set and the other five to the test set. Each subject performed each gesture fifteen times during data collection. After training and testing, the recognition rates were 98.50% (training set), 94.65% (test set), and 96.58% (average). However, the system training time was 44.9 minutes.
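    To make the pipeline concrete, here is a minimal sketch of turning one frame of 3D marker positions into a normalized feature vector. It is not the thesis's code: the marker ordering, the pairwise-distance feature set, and the max-distance normalization are assumptions consistent with the description above.

```python
import numpy as np
from itertools import combinations

# Assumed marker order: five right-hand fingertips plus the CMC3 landmark.
MARKERS = ["thumb", "index", "middle", "ring", "little", "CMC3"]

def extract_features(frame: np.ndarray) -> np.ndarray:
    """frame: (6, 3) array of 3D marker positions for one gesture sample.

    Returns the 15 pairwise Euclidean distances (10 fingertip-fingertip,
    5 fingertip-CMC3), scaled to [0, 1] by the largest distance so that
    hand size cancels out (normalization scheme assumed).
    """
    dists = np.array([np.linalg.norm(frame[i] - frame[j])
                      for i, j in combinations(range(len(MARKERS)), 2)])
    return dists / dists.max()
```

    The 15-distance count is consistent with the feature-set sizes of five, ten, and fifteen examined in Chapter 5.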
    Two studies were performed to improve the system. The first examined the relationships between system parameters and performance. Different numbers of hidden-layer neurons and different numbers of hidden layers were compared for their effects on system performance. Too many or too few neurons reduced the recognition rate, and at least one hidden layer was needed for acceptable performance. The training epoch count affects the system's generalization ability: if it is too large, the system overfits the training set and generalizes poorly, while an overly small epoch count impairs recognition. The learning rate and momentum affect the RMSE of the trained system; a higher learning rate and lower momentum decrease the RMSE. The number of features affects the recognition rate: the distances between fingertips provide more information than the distances between fingertips and CMC3. A larger training set may also increase the recognition rate.
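    The parameter effects described above can be reproduced with any back-propagation implementation. Below is a minimal sketch using scikit-learn's MLPClassifier as a stand-in for the thesis's BPNN; the placeholder data and the specific learning-rate, momentum, and epoch values are assumptions, while the 250*250 hidden-layer layout mirrors the model reported in Chapter 4.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder data standing in for the normalized marker-distance features:
# 20 gesture classes, 15 repetitions each, 15 features per sample.
X = rng.random((20 * 15, 15))
y = np.repeat(np.arange(20), 15)

# Two hidden layers of 250 neurons each (the thesis's 250*250 model);
# plain SGD with momentum approximates classic back-propagation.
bpnn = MLPClassifier(hidden_layer_sizes=(250, 250), solver="sgd",
                     learning_rate_init=0.1, momentum=0.9,  # assumed values
                     max_iter=500)  # epoch budget; too many epochs overfit
bpnn.fit(X, y)
print("recognition rate (training data):", bpnn.score(X, y))
```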
    To verify whether these parameters significantly affect the recognition rate, the second study applied ANOVA tests. It revealed that the number of neurons, the number of features, the neurons-by-layers interaction, and the layers-by-learning-rate interaction significantly affect the recognition rate. Furthermore, the number of neurons, the learning rate, the number of features, the number of layers, the neurons-by-layers interaction, the neurons-by-features interaction, and the layers-by-features interaction significantly affect the training time.
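    The second study's tests correspond to a factorial ANOVA over the trained configurations. A minimal sketch using statsmodels follows; the results file, the column names, and the model formula are assumptions chosen to match the factors named above.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical results table: one row per trained configuration, with
# factor columns (neurons, layers, learn_rate, features) and the
# measured response (rate).
runs = pd.read_csv("hgrs_runs.csv")

model = ols("rate ~ C(neurons) * C(layers) + C(learn_rate) * C(layers)"
            " + C(features)", data=runs).fit()
print(anova_lm(model, typ=2))  # F-tests for main effects and interactions
```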
    The findings of the two studies were applied to improve the system. The improved system was trained and tested on the twenty TSL gestures. Its performance was 98.00% (training set), 94.40% (test set), and 96.2% (average), and its training time fell to 13.8 minutes.

  • Contents

    中文摘要 / Abstract / 誌謝 (Acknowledgements) / Contents / List of Tables / List of Figures
    Chapter 1 Introduction
        1.1 Study motivation
        1.2 Study objectives
        1.3 Study framework
    Chapter 2 Literature review
        2.1 The taxonomy of hand gestures
        2.2 Evolution of hand gesture recognition systems
            2.2.1 Review of glove-based systems
            2.2.2 Review of opto-electronic motion capture
        2.3 Introduction and application of neural networks
        2.4 Summary
    Chapter 3 Data collection of hand gestures
        3.1 System design
        3.2 Subjects and material
        3.3 Processing
        3.4 Data collection apparatus and layout
        3.5 Feature extraction and normalization
        3.6 Summary
    Chapter 4 Building the Hand Gesture Recognition System (HGRS)
        4.1 Objective
        4.2 Architecture
        4.3 Performance indexes
        4.4 Results and discussion
    Chapter 5 Parameters affecting HGRS performance
        5.1 Objective, Task I: effects of interior parameters
        5.2 Independent variables
        5.3 Dependent variables
        5.4 Results and discussion
            5.4.1 Effect of the number of neurons and hidden layers
            5.4.2 Effect of training epochs
            5.4.3 Effect of learning rate and momentum
        5.5 Summary
        5.6 Objective, Task II: effects of exterior parameters
        5.7 Independent variables
        5.8 Dependent variables
        5.9 Results and discussion
            5.9.1 Effect of the number of features
            5.9.2 Effect of sample sizes
        5.10 Summary
    Chapter 6 ANOVA of important parameters
        6.1 Objective
        6.2 Independent variables
        6.3 Dependent variables
        6.4 Results and discussion
            6.4.1 Discussion of recognition rate
            6.4.2 Discussion of training time
        6.5 Summary
    Chapter 7 Improvement of the hand gesture recognition system
        7.1 Objective
        7.2 Architecture
        7.3 Performance
    Chapter 8 Conclusion
        8.1 Overview
        8.2 Future works
    References

    List of Tables
        Table 2-1 Hand gesture recognition employing gloves
        Table 2-2 Research on hand recognition systems using a single camera
        Table 2-3 Research on hand recognition using multiple cameras or special apparatus
        Table 4-1 Parameter settings of the system
        Table 4-2 Detailed recognition results for the 250*250 model
        Table 4-3 Comparison of recognition rates
        Table 5-1 Neural network architectures
        Table 5-2 Comparison of recognition rates between 2-layer and 1-layer models
        Table 5-3 Comparison of recognition rates between 3-layer and 1-layer models
        Table 5-4 Comparison of recognition rates between 3-layer and 2-layer models
        Table 5-5 Number of selected features
        Table 5-6 Error matrix of the 50*50*50 model with five features
        Table 5-7 Error matrix of the 50*50*50 model with ten features
        Table 5-8 Error matrix of the 50*50*50 model with fifteen features
        Table 6-1 Levels of ANOVA factors
        Table 6-2 ANOVA of average recognition rate
        Table 6-3 ANOVA of test recognition rate
        Table 6-4 Duncan multiple range test for neurons
        Table 6-5 Duncan multiple range test for features
        Table 6-6 ANOVA of training time
        Table 6-7 Duncan multiple range test for training time
        Table 7-1 Improved parameter values in the HGRS
        Table 7-2 Comparison of this study with other research
        Table 7-3 Comparison of test recognition results between the 250*250 and improved models

    List of Figures
        Figure 1-1 Study framework
        Figure 3-1 System overview
        Figure 3-2 The 20 selected TSL hand gestures
        Figure 3-3 VICON (left) and camera (right)
        Figure 3-4 Locations of palpable surface landmarks
        Figure 3-5 Laboratory layout
        Figure 3-6 Feature representations of hand gestures
        Figure 4-1 Neural network architecture
        Figure 4-2 Average feature values for "4" and "40"
        Figure 4-3 Average feature values for "借" and "龍"
        Figure 5-1 Recognition rate with one hidden layer
        Figure 5-2 Recognition rate with two hidden layers
        Figure 5-3 Recognition rate with three hidden layers
        Figure 5-4 All models sorted by performance
        Figure 5-5 All models sorted by training time
        Figure 5-6 Convergence of the training RMSE of the models
        Figure 5-7 Effects of learning rate and momentum when training the 100*100 model
        Figure 5-8 Effect of the number of features on recognition rate
        Figure 5-9 Effect of sample size on recognition rate
        Figure 6-1 Neurons-by-layers interaction: average recognition rate
        Figure 6-2 Neurons-by-layers interaction: test recognition rate
        Figure 6-3 Learning-rate-by-layers interaction: average recognition rate
        Figure 6-4 Learning-rate-by-layers interaction: test recognition rate
        Figure 6-5 Neurons-by-layers interaction: training time
        Figure 6-6 Features-by-neurons interaction: training time
        Figure 6-7 Features-by-layers interaction: training time

