Graduate Student: Ming-Che Tsai (蔡明哲)
Thesis Title: Facial Expression Recognition Based on Deep Learning Applied to Subjective Emotions (基於深度學習之臉部表情識別應用於主觀情緒)
Advisor: Chiuhsiang Joe Lin (林久翔)
Committee Members: Kung-Jeng Wang (王孔政); Yu-Chung Tsao (曹譽鐘)
Degree: Master
Department: College of Management - Department of Industrial Management
Year of Publication: 2023
Graduation Academic Year: 111
Language: Chinese
Pages: 55
Chinese Keywords: facial expression
Foreign Keywords: HRNet
    With today's rapid pace of technological change, face-detection techniques continue to improve, and recognizing the expression state of a face, building on face recognition, has become a focus of recent research. Facial emotion recognition is non-invasive: it captures emotional information from people's facial expressions without requiring additional physiological sensors or devices. This makes the technology easy to apply in real-life settings such as sentiment analysis, market research, security monitoring, teaching environments, and health monitoring.
    This study uses a two-stage facial landmark algorithm, HRNet (High-Resolution Network) followed by a GCN (Graph Convolutional Network), and compares its predictions against the subjective emotions that participants reported in questionnaires. Fifty participants were recruited; each watched videos from eight emotion categories and filled out a questionnaire after each video. During the experiment, the participants' facial expressions were recorded on camera to form a facial dataset for training the prediction model, and the results obtained under different classification schemes were examined.
    The results show that the two-stage facial landmark algorithm achieved 82% facial emotion recognition accuracy when tested on a public face dataset. Using the preset video labels of the "New Asian Culture Standardized Emotional Film Database" as classification labels, the model reached 47.2% accuracy on the participants' facial expressions; using the participants' own subjective emotions as labels, it reached 60.1%. Because the model classifies emotion from facial landmark points, it is not affected by a participant's gender or skin color. We hope that the two-stage facial landmark algorithm presented here contributes to emotion recognition from facial expressions.
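    To make the two-stage pipeline concrete, here is a minimal sketch of how landmark-based emotion classification could be wired together: a landmark-detection stage (HRNet in the thesis, stood in for below by random coordinates) feeding a small graph convolutional classifier over the landmark graph. All names, layer sizes, and the chain adjacency are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a two-stage landmark -> GCN emotion classifier.
# Stage 1 (an HRNet landmark detector in the thesis) is stood in for by
# random (x, y) coordinates; everything here is an illustrative assumption.
import torch
import torch.nn as nn

N_LANDMARKS = 68  # assumption: a 68-point scheme as in the 300-W dataset
N_CLASSES = 8     # the study uses eight emotion categories


def chain_adjacency(n: int) -> torch.Tensor:
    """Toy adjacency: self-loops plus links between consecutive landmarks,
    row-normalized. A real system would encode the face's actual topology
    (jawline, brows, eyes, nose, mouth)."""
    a = torch.eye(n)
    idx = torch.arange(n - 1)
    a[idx, idx + 1] = 1.0
    a[idx + 1, idx] = 1.0
    return a / a.sum(dim=1, keepdim=True)


class GCNLayer(nn.Module):
    """One graph convolution: propagate features along edges, then transform."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, nodes, features); adj: (nodes, nodes)
        return torch.relu(self.lin(adj @ x))


class LandmarkGCN(nn.Module):
    """Stage 2: classify an emotion from a graph of landmark coordinates."""

    def __init__(self):
        super().__init__()
        self.register_buffer("adj", chain_adjacency(N_LANDMARKS))
        self.gcn1 = GCNLayer(2, 32)  # each node carries an (x, y) coordinate
        self.gcn2 = GCNLayer(32, 32)
        self.head = nn.Linear(32, N_CLASSES)

    def forward(self, landmarks: torch.Tensor) -> torch.Tensor:
        h = self.gcn1(landmarks, self.adj)
        h = self.gcn2(h, self.adj)
        return self.head(h.mean(dim=1))  # mean-pool nodes, then classify


# Stage 1 stand-in: an HRNet detector would map each face image to a
# (68, 2) array of landmark coordinates; random points let the sketch run.
landmarks = torch.rand(4, N_LANDMARKS, 2)
logits = LandmarkGCN()(landmarks)
print(logits.shape)  # torch.Size([4, 8]) -- one logit per emotion class
```

    A design note on the sketch: classifying landmark coordinates rather than raw pixels is what makes the approach insensitive to skin color, since pixel values never reach the classifier.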

    Abstract (Chinese) i
    Abstract (English) ii
    Acknowledgements iii
    Table of Contents iv
    List of Figures vi
    List of Tables viii
    Chapter 1 Introduction 1
      1.1 Research Background and Motivation 1
      1.2 Research Objectives 2
      1.3 Research Process and Framework 3
    Chapter 2 Literature Review 5
      2.1 Facial Recognition and Its Applications 5
      2.2 Classification of Facial Emotions 6
      2.3 Emotion Elicitation 7
      2.4 Facial Recognition Algorithms 8
        2.4.1 VGG 8
        2.4.2 HRNet 8
        2.4.3 GCN 9
      2.5 Cultural Differences in Emotion Recognition Research 9
      2.6 Datasets for Facial Emotion Recognition 10
    Chapter 3 Research Methods 13
      3.1 Research Design 13
      3.2 Participants 13
      3.3 Experimental Equipment 13
        3.3.1 Desktop Computer 17
        3.3.2 Face Crop Jet 17
      3.4 Experimental Tasks and Procedure 18
      3.5 Data Processing and Analysis Methods 20
    Chapter 4 Results and Analysis 28
      4.1 Model Prediction Results 29
      4.2 Classification Based on Subjective Emotions 31
    Chapter 5 Conclusions and Suggestions 34
      5.1 Analysis of Experimental Results 35
      5.2 Future Development and Suggestions 36
    References 38
    Appendix 1 Informed Consent Form 41
    Appendix 2 Emotional Self-Assessment Scale 44

    Anthony, A. A., & Patil, C. M. (2023). Speech Emotion Recognition Systems: A Comprehensive Review on Different Methodologies. Wireless Personal Communications. https://doi.org/10.1007/s11277-023-10296-5
    Atabansi, C. C., Chen, T., Cao, R., & Xu, X. (2021). Transfer Learning Technique with VGG-16 for Near-Infrared Facial Expression Recognition. Journal of Physics: Conference Series, 1873(1). https://doi.org/10.1088/1742-6596/1873/1/012033
    Cheng, C.-F., & Lin, C. J. (2023). Building a Low-Cost Wireless Biofeedback Solution: Applying Design Science Research Methodology. Sensors, 23(6), 2920. https://doi.org/10.3390/s23062920
    Cuzzocrea, F., Gugliandolo, M. C., Cannavò, M., & Liga, F. (2023). Emotion recognition in individuals wearing facemasks: A preliminary analysis of age-related differences. Current Psychology, 1–4. https://doi.org/10.1007/s12144-023-04239-3
    Deng, Y., Yang, M., & Zhou, R. (2017). A new standardized emotional film database for Asian culture. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01941
    Dong, C., Wang, R., & Hang, Y. (2021). Facial expression recognition based on improved VGG convolutional neural network. Journal of Physics: Conference Series, 2083(3). https://doi.org/10.1088/1742-6596/2083/3/032030
    Ekman, P. (1992). Facial Expressions of Emotion: New Findings, New Questions. Psychological Science, 3(1), 34–38. https://doi.org/10.1111/j.1467-9280.1992.tb00253.x
    Elfenbein, H. A., Beaupré, M., Lévesque, M., & Hess, U. (2007). Toward a dialect theory: Cultural differences in the expression and recognition of posed facial expressions. Emotion, 7(1), 131–146. https://doi.org/10.1037/1528-3542.7.1.131
    Gross, J. J., & Levenson, R. W. (1995). Emotion Elicitation using Films. Cognition and Emotion, 9(1), 87–108. https://doi.org/10.1080/02699939508408966
    Huang, J., Zhu, Z., & Huang, G. (2019). Multi-Stage HRNet: Multiple Stage High-Resolution Network for Human Pose Estimation. arXiv:1910.05901. http://arxiv.org/abs/1910.05901
    Liu, D., Zhang, H., & Zhou, P. (2020). Video-based facial expression recognition using graph convolutional networks. Proceedings - International Conference on Pattern Recognition, 4198–4205. https://doi.org/10.1109/ICPR48806.2021.9413094
    Long, H., Peluso, N., Baker, C. I., Japee, S., & Taubert, J. (2023). A database of heterogeneous faces for studying naturalistic expressions. Scientific Reports, 13(1), 5383. https://doi.org/10.1038/s41598-023-32659-5
    Markus, H. R., Cross, S., Fiske, A., Gilligan, C., Givon, T., Kanagawa, C., Kihlstrom, J., & Miller, J. (2020). Culture and self. Handbook of Cultural Sociology, 98(2), 247–256. https://doi.org/10.4324/9780203891377-32
    Matsumoto, D., & Hwang, H. S. (2012). Culture and emotion: The integration of biological and cultural contributions. Journal of Cross-Cultural Psychology, 43(1), 91–118. https://doi.org/10.1177/0022022111420147
    Pahl, J., Rieger, I., Möller, A., Wittenberg, T., & Schmid, U. (2022). Female, white, 27? Bias evaluation on data and algorithms for affect recognition in faces. ACM International Conference Proceeding Series, 973–987. https://doi.org/10.1145/3531146.3533159
    Pettersson, I., Lachner, F., Frison, A. K., Riener, A., & Butz, A. (2018). A Bermuda triangle? A review of method application and triangulation in user experience evaluation. Conference on Human Factors in Computing Systems - Proceedings, 1–16. https://doi.org/10.1145/3173574.3174035
    Sagonas, C., Antonakos, E., Tzimiropoulos, G., Zafeiriou, S., & Pantic, M. (2016). 300 Faces In-The-Wild Challenge: Database and results. Image and Vision Computing, 47, 3–18. https://doi.org/10.1016/j.imavis.2016.01.002
    Salunke, V. V. (2017). A New Approach for Automatic Face Emotion Recognition and Classification Based on Deep Networks.
    Seibert, P. S., & Ellis, H. C. (1991). A convenient self-referencing mood induction procedure. Bulletin of the Psychonomic Society, 29(2), 121–124. https://doi.org/10.3758/BF03335211
    Shehu, H. A., Browne, W. N., & Eisenbarth, H. (2022). An anti-attack method for emotion categorization from images. Applied Soft Computing, 128, 109456. https://doi.org/10.1016/j.asoc.2022.109456
    Song, B. C., & Kim, D. H. (2021). Hidden Emotion Detection using Multi-modal Signals. 1–7. https://doi.org/10.1145/3411763.3451721
    Tao, J., Tan, T., & Picard, R. W. (Eds.). (2005). Affective computing and intelligent interaction: First International Conference, ACII 2005, Beijing, China, October 22–24, 2005, proceedings. Lecture Notes in Computer Science, Vol. 3784. Springer.
    Teixeira, A., Aguiar, E. De, Souza, A. F. De, & Oliveira-Santos, T. (2017). Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order. Pattern Recognition, 61, 610–628. https://doi.org/10.1016/j.patcog.2016.07.026
    Thüring, M., & Mahlke, S. (2007). Usability, aesthetics and emotions in human-technology interaction. International Journal of Psychology, 42(4), 253–264. https://doi.org/10.1080/00207590701396674
    Tsai, J. L., Simeonova, D. I., & Watanabe, J. T. (2004). Somatic and social: Chinese Americans talk about emotion. Personality and Social Psychology Bulletin, 30(9), 1226–1238. https://doi.org/10.1177/0146167204264014
    Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., Liu, W., & Xiao, B. (2020). Deep High-Resolution Representation Learning for Visual Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
    Westermann, R., Spies, K., Stahl, G., & Hesse, F. W. (1996). Relative effectiveness and validity of mood induction procedures: A meta-analysis. European Journal of Social Psychology, 26(4), 557–580. https://doi.org/10.1002/(SICI)1099-0992(199607)26:4<557::AID-EJSP769>3.0.CO;2-4
    Winyangkun, T., Vanitchanant, N., Chouvatut, V., & Panyangam, B. (2023). Real-Time Detection and Classification of Facial Emotions. 15th International Conference on Knowledge and Smart Technology, KST 2023. https://doi.org/10.1109/KST57286.2023.10086866
    Zhang, W. K., & Kang, M. J. (2019). Factors affecting the use of facial-recognition payment: An example of Chinese consumers. IEEE Access, 7, 154360–154374. https://doi.org/10.1109/ACCESS.2019.2927705

    Full text release date: 2028/06/30 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan thesis and dissertation system)