
Graduate Student: 張冕資 (Mian-Zih Jhang)
Thesis Title: 使用歌詞以及階層群集分析方法的華語流行歌曲情緒辨識
(An Emotion Recognition Method for Mandarin Popular Song Using Lyrics and Hierarchical Clustering)
Advisors: 楊朝龍 (Chao-Lung Yang), 林承哲 (Cheng-Jhe Lin)
Committee Members: 楊朝龍 (Chao-Lung Yang), 林承哲 (Cheng-Jhe Lin), 黃俊堯 (Chun-Yao Huang)
Degree: Master
Department: College of Management - Department of Industrial Management
Year of Publication: 2017
Graduation Academic Year: 105
Language: Chinese
Pages: 56
Chinese Keywords: 音樂情緒辨識 (Music Emotion Recognition), 文字探勘 (Text Mining), 華語流行音樂 (Mandarin Popular Music)
Foreign Keywords: Music Emotion Recognition, Text Mining, Mandarin Popular Music
  • In recent years, the rapid growth of the Internet and the spread of mobile devices have not only created enormous business opportunities in the digital music market but also generated strong demand for Music Information Retrieval (MIR) technology. If music could be retrieved by emotion, the completeness and usability of music retrieval systems would improve significantly. However, existing music emotion retrieval systems still leave much to be desired: either their emotion categories are incomplete, or they fail to reflect the multiple emotions of a song. This study therefore aims to build a music emotion recognition system that provides a song's curve of emotional change, its emotion components, and its main emotion. The study first performs textual emotion extraction on the individual sentences of Mandarin lyrics using the valence and arousal dimensions of the circumplex model, yielding the curve of emotional change for the whole song. Hierarchical clustering is then applied to group the sentence emotions, yielding the song's emotion components. Finally, the percentage of each emotion in the song is computed from how much of the song's text describes that emotion. To validate the accuracy of the method, this study collected the annual Top 5 most-played songs from the KKBOX yearly charts and built a human-annotated song emotion dataset, MandoPop60. For every song in MandoPop60, the output of the proposed method was compared against the human annotations as an error in valence-arousal space. The study found that certain song characteristics have the most significant influence on the accuracy of song emotion recognition. The results can serve as a reference for building lyrics-based emotion recognition systems.


    Recently, the rapid development of the Internet and the growing availability of mobile devices have brought the digital music industry not only unprecedented commercial opportunities but also strong demand for Music Information Retrieval (MIR) techniques. Researchers have tried to extract emotion terms from music based on MIR technology; however, because existing systems use incomplete emotion categories and do not present the multiple emotions of a song to users, there is room for improvement. This study develops a novel MIR system that extracts one emotion term per sentence of a Mandarin song's lyrics using the circumplex model, which locates each emotion term in valence-arousal space. From the extracted emotion terms, a curve of emotional change describing the song can be constructed. In this research, the lyrics were first analyzed by sentiment analysis to obtain the curve of emotional change. Then, hierarchical clustering was applied to group the emotions of a song into emotion components. Last, based on the extracted emotion components, the proportion of each emotion in the song was calculated, and the main emotion component was identified from these proportions. To validate the accuracy of the method, the annual Top 5 most-played songs on KKBOX over the past 12 years were collected as a dataset, denoted MandoPop60, and the emotions of each song were annotated by human annotators. By comparing the emotion terms extracted by the proposed MIR method with the human annotations, the study found that a few song characteristics significantly influence the accuracy of music emotion recognition.
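    The pipeline described in the abstract (per-sentence valence-arousal scores, hierarchical clustering into emotion components, and proportion-based selection of the main emotion) can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual implementation: the hand-made sentence scores, the Ward linkage, the cluster count of three, and the use of SciPy are all assumptions made for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-sentence (valence, arousal) scores for one song,
# each dimension in [-1, 1]; in the thesis these would come from
# sentiment analysis of individual lyric sentences.
sentence_va = np.array([
    [0.6, 0.4], [0.5, 0.5], [0.7, 0.3],   # upbeat sentences
    [-0.4, -0.2], [-0.5, -0.3],           # sad sentences
    [0.1, 0.8], [0.2, 0.7],               # excited sentences
])

# Agglomerative (hierarchical) clustering of the sentence emotions.
Z = linkage(sentence_va, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

# Each cluster is one emotion component: its centroid is its position
# in valence-arousal space, and its share of sentences approximates the
# proportion of the song devoted to that emotion.
components = {}
for k in np.unique(labels):
    members = sentence_va[labels == k]
    components[int(k)] = {
        "centroid": members.mean(axis=0),
        "proportion": len(members) / len(sentence_va),
    }

# The main emotion is the component with the largest proportion.
main = max(components.values(), key=lambda c: c["proportion"])
print(round(main["proportion"], 3))  # → 0.429 (3 of 7 sentences)
```

    Ward linkage is chosen here only because it tends to produce compact clusters in a low-dimensional Euclidean space such as the valence-arousal plane; the thesis's own linkage criterion and cluster-count selection may differ.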

    Acknowledgments
    Chinese Abstract
    Abstract
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1. Introduction
      1.1 Research Background and Motivation
      1.2 Research Objectives
      1.3 Research Framework
    Chapter 2. Literature Review
      2.1 Emotion Representation Models
        2.1.1 Discrete Categorical Models
        2.1.2 Continuous Dimensional Models
        2.1.3 Conversion between Discrete and Continuous Models
      2.2 Popular Music
        2.2.1 Chinese Popular Music
        2.2.2 Song Structure of Popular Music
      2.3 Music Emotion Recognition
        2.3.1 Sentiment Analysis
        2.3.2 Levels of Sentiment Analysis
        2.3.3 Music Emotion Variation Detection
      2.4 Hierarchical Clustering
    Chapter 3. Research Method
      3.1 System Architecture
      3.2 Data Collection
      3.3 Data Preprocessing
        3.3.1 Emotion Corpus Preprocessing
        3.3.2 Lyrics Preprocessing
      3.4 Sentence-Level Emotion Analysis of Lyrics
      3.5 Emotion Component and Main Emotion Analysis
    Chapter 4. Experimental Results
      4.1 Construction of the MandoPop60 Mandarin Popular Music Emotion Dataset
      4.2 Effect of Chinese Variety on Sentiment Analysis Results
      4.3 Sentence-Level Emotion Analysis of Lyrics
      4.4 Emotion Component Analysis and Main Emotion Prediction
    Chapter 5. Conclusions and Future Work
      5.1 Conclusions
      5.2 Future Work
    References


    Full-text release date: 2022/07/13 (campus network)
    Full-text release date: not authorized for public release (off-campus network)
    Full-text release date: not authorized for public release (National Central Library: Taiwan NDLTD system)