
Graduate Student: Yu-Shan Lin (林鈺山)
Thesis Title: 聯結式局部特徵編碼之表情辨識 (Expression Recognition using Cascade Local Deformation Code)
Advisor: Gee-Sern Hsu (徐繼聖)
Committee Members: Yi-Ping Hung (洪一平), Ming-Sui Lee (李明穗), Jing-Ming Guo (郭景明)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar)
Language: Chinese
Number of Pages: 79
Chinese Keywords: Expression recognition; Haar-like features
Foreign Keywords: Expression Recognition
    This study extracts appearance features from two different kinds of local regions for expression recognition. The first kind consists of local regions selected by human observation (mouth corners, nasolabial folds, forehead wrinkles, etc.) and is used with supervised learning; the second kind consists of Haar-like local features selected by machine and is used with semi-supervised learning. Both are recognized through the Local Deformation Code (LDC). The LDC proceeds in three stages: (1) training detectors for the Local Deformation Regions (LDRs), (2) detecting the LDRs and encoding them into LDC codewords, and (3) decoding the LDC to recognize the expression. The experiments use the CK+ (Extended Cohn-Kanade), JAFFE (Japanese Female Facial Expression), and FERA (Facial Expression Recognition and Analysis) databases. The results show that, on the databases whose expressions are easily recognized by the human eye (CK+ and FERA), the code formed by combining the two methods above yields better recognition performance, and an expression recognition system built with the proposed method also achieves good recognition performance.
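    To make the three-stage LDC procedure above concrete, the following is a minimal sketch in Python. The class and function names, the binary codeword representation, and the Hamming-distance decoding rule are illustrative assumptions, not the implementation described in the thesis.

    import numpy as np

    # Seven predefined expression classes, as assumed from the abstract.
    EXPRESSIONS = ["anger", "contempt", "disgust", "fear",
                   "happiness", "sadness", "surprise"]

    class ThresholdDetector:
        """Toy stand-in for a trained LDR detector (stage 1).

        The thesis trains detectors for local deformation regions such as the
        mouth corners or nasolabial folds; here a detector simply thresholds a
        single precomputed response value.
        """
        def __init__(self, threshold):
            self.threshold = threshold

        def fires(self, response):
            return response >= self.threshold

    def encode_ldc(responses, detectors):
        """Stage 2: run every LDR detector and pack the outcomes into a binary codeword."""
        return np.array([int(d.fires(r)) for d, r in zip(detectors, responses)], dtype=int)

    def decode_expression(codeword, codebook):
        """Stage 3: decode by the nearest expression codeword in Hamming distance."""
        distances = {expr: int(np.sum(codeword != cw)) for expr, cw in codebook.items()}
        return min(distances, key=distances.get)

    # Usage with toy data: three LDR detectors and an arbitrary codebook.
    detectors = [ThresholdDetector(0.5), ThresholdDetector(0.3), ThresholdDetector(0.7)]
    codebook = {expr: np.array([(i >> b) & 1 for b in range(3)])
                for i, expr in enumerate(EXPRESSIONS)}
    code = encode_ldc([0.9, 0.1, 0.8], detectors)   # -> array([1, 0, 1])
    print(decode_expression(code, codebook))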


    An appearance-based coding scheme, called Cascade Local Deformation Code (CLDC), is proposed for expression recognition. CLDC has two component codes, the Human Observable Code (HOC) and the Haar-like Feature Code (HFC). The HOC encodes the local deformation regions caused by facial muscle contractions observable to humans, and the HFC encodes the Haar-like features selected by an AdaBoost algorithm. Given a training set, one first selects the observable local deformation regions and trains an HOC detector that encodes the local deformation regions into HOC codewords according to seven predefined expressions. The training set is also used to extract Haar-like features and encode them into HFC codewords for the seven expressions. The combination of HOC and HFC gives the CLDC, which is shown to outperform either component code in the decoding phase for expression recognition on disjoint testing sets. Experiments on the CK+, JAFFE, and the latest FERA databases show that the performance of the CLDC is competitive with state-of-the-art approaches.
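    The HFC component above relies on AdaBoost-based selection of Haar-like features. The snippet below is a rough sketch of that selection step using scikit-learn decision stumps as weak learners; the precomputed Haar-like response matrix, the function name, and the use of scikit-learn are assumptions made for illustration and do not reproduce the thesis implementation.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    def select_haar_features(haar_responses, expression_labels, n_features=50):
        """Fit AdaBoost with depth-1 stumps over precomputed Haar-like responses.

        Each stump splits on a single column of the response matrix, so the set
        of split features gives the indices of the selected Haar-like features.
        """
        booster = AdaBoostClassifier(
            # scikit-learn >= 1.2 uses the `estimator` argument (older versions: `base_estimator`)
            estimator=DecisionTreeClassifier(max_depth=1),
            n_estimators=n_features,
        )
        booster.fit(haar_responses, expression_labels)
        selected = sorted({int(stump.tree_.feature[0])
                           for stump in booster.estimators_
                           if stump.tree_.node_count > 1})
        return booster, selected

    # Usage with random toy data: 200 faces, 1000 candidate Haar-like responses, 7 classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1000))
    y = rng.integers(0, 7, size=200)
    model, picked = select_haar_features(X, y, n_features=20)
    print(len(picked), "Haar-like features selected")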

    Abstract (Chinese) I
    Abstract (English) II
    Acknowledgements III
    List of Figures VI
    List of Tables VIII
    Chapter 1  Introduction 1
      1.1  Background and Motivation 1
      1.2  Method Overview 3
      1.3  Contributions 5
      1.4  Thesis Organization 6
    Chapter 2  Related Work 7
      2.1  Action Units and Related Expression Recognition 7
      2.2  Expression Recognition with Geometric and Appearance Features 9
        2.2.1  Geometric Features 9
        2.2.2  Appearance Features 11
    Chapter 3  Local Deformation Coding 13
      3.1  Training Local Deformation Regions with Supervised Learning 13
        3.1.1  Human-Selected Local Regions with Supervised Learning 13
        3.1.2  Machine-Selected Haar-like Local Features with Semi-Supervised Learning 17
      3.2  Expression Recognition by Decoding the LDC 18
      3.3  Expression Recognition by Decoding the LDC 19
    Chapter 4  Experimental Design and Results 22
      4.1  Experimental Databases 22
      4.2  Sample Specifications 23
      4.3  Experimental Design and Results 24
        4.3.1  Parameter Experiments on CK+ 25
        4.3.2  Parameter Optimization for Expression Recognition on JAFFE 31
        4.3.3  Parameter Optimization for Expression Recognition on FERA 38
        4.3.4  Mixed-Database Tests 46
    Chapter 5  Practical System Application 48
      5.1  System Architecture 48
      5.2  Expression System Tests 49
    Chapter 6  Conclusions and Future Directions 50
    References 51
    Appendix 55
      A  Expression Recognition using Gabor Features Extracted from the Mouth Region 55
      B  Flowchart of Face Detection and Eye Detection 60
      C  Gamma Correction and Difference of Gaussian Filtering 61
      D  Support Vector Machine (SVM) 62
      E  Integral Image 64

    [1] P. Lucey, J. F. Cohn, J. Saragih, Z. Ambadar, and I. Matthews, “The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, pp. 94–101, Jun. 2010.
    [2] M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, “Coding facial expressions with Gabor wavelets,” in Proc. Third IEEE Int. Conf. Automatic Face and Gesture Recognition, pp. 200–205, Apr. 1998.
    [3] M. Pantic, M. Valstar, R. Rademaker, and L. Maat, “Web-based database for facial expression analysis,” in Proc. Int. Conf. Multimedia and Expo (ICME05), Jul. 2005.
    [4] L. Yin, X. Wei, Y. Sun, J. Wang, and M. Rosato, “A 3D facial expression database for facial behavior research,” in Proc. Int. Conf. Automatic Face and Gesture Recognition, pp. 211–216, Apr. 2006.
    [5] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale, “A high-resolution 3D dynamic facial expression database,” in Proc. Int. Conf. Automatic Face and Gesture Recognition, pp. 1–6, Sept. 2008.
    [6] W. Gao, B. Cao, S. Shan, X. Chen, D. Zhou, X. Zhang, and D. Zhao, “The CAS-PEAL large-scale Chinese face database and baseline evaluations,” IEEE Trans. Systems, Man, and Cybernetics (Part A), Jan. 2008.
    [7] P. Ekman and W.V. Friesen, The Facial Action Coding System: A Technique for The Measurement of Facial Movement, San Francisco: Consulting Psychologists Press, 1978.
    [8] Y. Tian, T. Kanade, and J. Cohn, “Recognizing action units for facial expression analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23(2), pp. 97–115, Feb. 2001.
    [9] M. Pantic and I. Patras, “Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences,” IEEE Trans. Systems, Man and Cybernetics-Part B, vol. 36(2), pp. 433–449, Apr. 2006.
    [10] I. Kotsia, S. Zafeiriou, and I. Pitas, “Novel multiclass classifiers based on the minimization of the within-class variance,” IEEE Trans. Neural Networks, vol. 20, pp. 14–34, Jan. 2009.
    [11] A. B. Ashraf, S. Lucey, and T. Chen, “Reinterpreting the application of gabor filters as a manipulation of the margin in linear support vector machines,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32(7), pp. 1335–1341, Jul. 2010.
    [12] Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu, “Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron,” in Proc. Int. Conf. Automatic Face and Gesture Recognition, pp. 454–459, Apr. 1998.
    [13] F. Dornaika and F. Davoine, “Simultaneous facial action tracking and expression recognition using a particle filter,” in Proc. Int. Conf. Computer Vision, vol. 2, pp. 1733–1738, Oct. 2005.
    [14] I. Kotsia and I. Pitas, “Facial expression recognition in image sequences using geometric deformation features and support vector machines,” IEEE Trans. Image Processing, vol. 16, pp. 172–187, Jan. 2007.
    [15] N. Vretos, N. Nikolaidis, and I. Pitas, “A model-based facial expression recognition algorithm using principal components analysis,” in Proc. Int. Conf. Image Processing, pp. 3301–3304, Nov. 2009.
    [16] C. Shan, S. Gong, and P. McOwan, “Facial expression recognition based on local binary patterns: A comprehensive study,” Image and Vision Computing, vol. 27, pp. 803–816, May 2009.
    [17] P. Yang, Q. Liu, and D. N. Metaxas, “RankBoost with l1 regularization for facial expression recognition and intensity estimation,” in Proc. Int. Conf. Computer Vision, pp. 1018–1025, Sept.-Oct. 2009.
    [18] W. Gu, C. Xiang, Y. V. Venkatesh, D. Huang, and H. Lin, “Facial expression recognition using radial encoding of local Gabor features and classifier synthesis,” Pattern Recognition, vol. 45(1), pp. 80–91, Jan. 2012.
    [19] L. Ding and A.M. Martinez, “Features versus context: An approach for precise and detailed detection and delineation of faces and facial features,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32(11), pp. 2022–2038, Nov. 2010.
    [20] X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Trans. Image Processing, vol. 19, pp. 1635– 1650, Jun. 2010.
    [21] W. Zheng, X. Zhou, C. Zou, and L. Zhao, “Facial expression recognition using kernel canonical correlation analysis (KCCA),” IEEE Trans. Neural Networks, vol. 17(1), pp. 233–238, Jan. 2006.
    [22] M. Kyperountas, A. Tefas, and I. Pitas, “Salient feature and reliable classifier selection for facial expression classification,” Pattern Recognition, vol. 43(3), pp. 972–986, Mar. 2010.
    [23] L. H. He, C. R. Zou, L. Zhao, and D. Hu, “An enhanced LBP feature based on facial expression recognition,” in Proc. 27th Annual Int. Conf. Medicine and Biology, pp. 3300–3303, Jan. 2006.
    [24] D. H. Kim, S. U. Jung and M. J. Chung, “Extension of cascaded simple feature based face detection to facial expression recognition,” Pattern Recognition Letters, vol.29, pp. 1621-1631, Aug. 2008.
    [25] P. Viola and M. Jones, “Robust real-time face detection,” Int. J. Computer Vision, vol. 57, no. 2, pp. 137–154, May 2004.
    [26] P. Wang, F. Barrett, E. Martin, M. Milonova, R. E. Gur, R. C. Gur, C. Kohler, and R. Verma, “Automated video-based facial expression analysis of neuropsychiatric disorders,” Journal of Neuroscience Methods, vol. 168, pp. 224–238, Feb. 2008.
    [27] W. Sun and Q. Ruan, “Two-dimension PCA for facial expression recognition,” in Proc. Int. Conf. Signal Processing, vol.3, Nov. 2006.
