
Graduate Student: 姚振傑 (Chen-Chieh Yao)
Thesis Title: 基於臉部特徵點之混合分類式欺騙檢測系統 (A Hybrid Deception Recognition System Based on Facial Landmarks)
Advisor: 郭景明 (Jing-Ming Guo)
Committee Members: 宋啟嘉 (Chi-Chia Sun), 王乃堅 (Nai-Jian Wang), 夏至賢 (Chih-Hsien Hsia), 劉雲夫 (Yun-Fu Liu)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2017
Academic Year of Graduation: 105 (ROC calendar)
Language: Chinese
Number of Pages: 107
Chinese Keywords: 欺騙檢測, 臉部行為, 視覺線索, 隨機森林, 最小均方濾波
Foreign Keywords: Deception detection, Facial behavior, Visual clues, Random Forest, Least mean squares

In video-surveillance applications, deception detection based on facial features is a challenging and important problem. This thesis proposes a hybrid-classification deception detection system based on facial landmarks, applied to distinguishing deception from truth.
The method first applies a Random Forest classifier to extract the facial landmarks. For landmark extraction, the splitting rule and feature extraction of the Random Forest are improved so that samples are clustered more accurately and the method resists external factors such as lighting and viewing angle.
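The node-splitting rule at the heart of a Random Forest can be illustrated with a minimal sketch; this is a generic Gini-impurity split in plain NumPy, not the improved split rule developed in the thesis:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum of squared class frequencies."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    """Pick the threshold on one feature that minimizes the
    weighted Gini impurity of the two resulting child nodes."""
    best_t, best_score = None, float("inf")
    for t in np.unique(feature):
        left, right = labels[feature <= t], labels[feature > t]
        if len(left) == 0 or len(right) == 0:
            continue  # degenerate split, skip
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

A full forest repeats this at every node over random feature subsets and aggregates many such trees by voting.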
For feature extraction, the facial landmarks are used to analyze three kinds of features and their combinations: facial action units, facial color information, and iris-movement information. To further strengthen the deception detection method, this thesis trains least-mean-square filters to improve the robustness of the extracted features. Finally, the combination of the pre-trained least-mean-square filters and a Support Vector Machine detects deception and truth more accurately.
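The least-mean-square filter mentioned above follows a simple canonical update rule, w ← w + μ·e·x. The sketch below is a textbook LMS adaptation loop in plain NumPy with hypothetical filter order and step size, not the exact filters trained in the thesis:

```python
import numpy as np

def lms_train(x, d, order=4, mu=0.05):
    """Adapt FIR filter weights w so that the filtered input x
    tracks the desired signal d, using the LMS update w += mu*e*x."""
    w = np.zeros(order)
    errors = []
    for n in range(order, len(x)):
        window = x[n - order:n][::-1]   # most recent samples first
        y = w @ window                  # current filter output
        e = d[n] - y                    # instantaneous error
        w += mu * e * window            # LMS weight update
        errors.append(e)
    return w, np.array(errors)
```

With a unit-power input, the update is stable for μ well below 2/(order × input power), and the error shrinks as the weights converge.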
For the experiments, this thesis evaluates the method on the Real-Life database and the self-collected MSP-YTD database, comparing against prior techniques. Despite uncontrolled factors in the videos, such as illumination, head pose, and facial occlusion, the results show that the proposed algorithm achieves good accuracy on both databases, and can therefore be applied in real-life settings.


Facial deception detection is a challenging problem in the automatic inspection of surveillance videos. In this thesis, we propose a novel algorithm for differentiating deception from truth based on visual clues. A Random Forest classifier is applied to track the facial landmark points, which are utilized to analyze the facial action units based on the movement of the facial feature points. In addition, biological and geometrical features are also considered, and sequential forward floating selection (SFFS) is integrated to select the best feature combinations. The proposed method employs least-mean-square filters to significantly improve the robustness of the extracted features. To verify the extracted features for deception and truth identification, the pre-trained least-mean-square filters and a Support Vector Machine (SVM) are utilized. Experimental results demonstrate that, despite uncontrolled factors in the videos such as illumination, head pose, and facial occlusion, the proposed method consistently achieves promising performance compared with former schemes.
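The SFFS procedure named in the abstract can be sketched as follows; this is a generic implementation with a hypothetical `score` function supplied by the caller, not the thesis's exact feature-evaluation criterion:

```python
def sffs(features, score, k):
    """Sequential forward floating selection: greedily add the feature
    that most improves the score, then conditionally drop any selected
    feature whose removal improves the score (the 'floating' step)."""
    selected = []
    while len(selected) < k:
        # forward step: add the single best remaining feature
        candidates = [f for f in features if f not in selected]
        best = max(candidates, key=lambda f: score(selected + [f]))
        selected.append(best)
        # backward (floating) step: drop features while doing so helps
        improved = True
        while improved and len(selected) > 1:
            improved = False
            for f in list(selected):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
                    break
    return selected
```

The backward step is what distinguishes SFFS from plain sequential forward selection: a feature that looked useful early on can be discarded once better combinations appear.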

Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Figures and Tables
Chapter 1 Introduction
  1.1 Research Background and Motivation
  1.2 Thesis Organization
Chapter 2 Literature Review
  2.1 Feature Extraction
    2.1.1 Emotion Recognition
    2.1.2 Facial Action Coding System
  2.2 Feature Selection Algorithms
    2.2.1 Dimensionality Reduction
    2.2.2 Sequential Forward Selection (SFS)
    2.2.3 Sequential Backward Selection (SBS)
    2.2.4 Sequential Forward Floating Selection (SFFS)
  2.3 Facial Landmark Extraction
  2.4 Deception Detection
Chapter 3 Facial Deception Detection
  3.1 System Architecture
  3.2 Preprocessing
    3.2.1 Face Detection
    3.2.2 Region of Interest (ROI) Extraction
  3.3 Feature Extraction
    3.3.1 Biological Features
    3.3.2 Geometric Features
    3.3.3 Facial Action Units
  3.4 Support Vector Machine
    3.4.1 Linearly Separable
    3.4.2 Linearly Non-separable
  3.5 Least-Mean-Square Filter
Chapter 4 Experimental Results
  4.1 Parameter Optimization
  4.2 Databases
  4.3 Comparison with Other Methods
  4.4 Time Complexity
Chapter 5 Conclusion and Future Work
References

