
Student: Li-Hsuan Hsu (許立璇)
Thesis Title: English Music Recommendation System Based on Emotion and Scene (以情緒與場景為基礎的英文音樂推薦系統)
Advisor: Chiun-chieh Hsu (徐俊傑)
Committee Members: Yue-li Wang (王有禮), Cheng-Huang Hung (洪政煌)
Degree: Master
Department: Department of Information Management, College of Management
Year of Publication: 2021
Graduation Academic Year: 109 (2020–2021)
Language: Chinese
Pages: 56
Keywords (Chinese): 推薦系統、音樂推薦、特徵擷取、多標籤分類器
Keywords (English): Recommendation System, Music Recommender, Feature Extraction, Multi-label Classification
Views: 208; Downloads: 0
With the rapid development of the Internet, the streaming music market has grown as well, making all kinds of music available to us at any time. Among such a large number of songs, however, helping users find the songs they are interested in at a low search cost is a difficult challenge, and music recommendation systems play an important role here.

This study proposes an English song recommendation system that analyzes the emotions and scenes expressed in lyrics, using the ConceptNet semantic network and the NRC emotion lexicon to extract emotion words and scene words from the lyrics, respectively. A song is first regarded as a distribution over multiple emotion topics, and a latent Dirichlet allocation model is used to obtain the emotion-topic probability distribution as the song's emotion feature. To capture the mood transitions that frequently occur in lyrics, a labeling strategy based on lyrics structure is proposed, in which the important chorus sections and the remaining sections are trained separately. Meanwhile, TF-IDF term weighting is used to obtain the scene features of the lyrics, and a multi-label classifier is used to find candidate songs that share the same scene. Finally, the similarity between the emotion features and the scene features is computed to produce a recommendation list for the target song.

Experiments show that, compared with the current method, the proposed recommendation system raises the precision of the system suitability evaluation by 6.1% to 20% over different numbers of recommended songs, and obtains a preference ratio above 23% in the user system-preference experiment.
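
As a rough illustration of the emotion-word extraction step described in the abstract above, the following Python sketch looks up lyric tokens in the NRC Word-Emotion Association Lexicon (EmoLex). The lexicon file name and the naive whitespace tokenizer are assumptions for illustration only, not the preprocessing pipeline actually used in the thesis.

```python
from collections import Counter, defaultdict

def load_nrc_lexicon(path="NRC-Emotion-Lexicon-Wordlevel-v0.92.txt"):
    """Load the NRC EmoLex file (tab-separated: word, emotion, 0/1 flag).
    The file name is an assumption; point it at your local copy."""
    word_emotions = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) != 3:
                continue
            word, emotion, flag = parts
            # Keep only the eight basic emotions, not the positive/negative polarity rows.
            if flag == "1" and emotion not in ("positive", "negative"):
                word_emotions[word].add(emotion)
    return word_emotions

def extract_emotion_words(lyrics, word_emotions):
    """Return the emotion words found in the lyrics and a count per emotion."""
    tokens = lyrics.lower().split()  # naive tokenization, for illustration only
    emotion_words = [t for t in tokens if t in word_emotions]
    emotion_counts = Counter(e for t in emotion_words for e in word_emotions[t])
    return emotion_words, emotion_counts

if __name__ == "__main__":
    lexicon = load_nrc_lexicon()
    words, counts = extract_emotion_words(
        "i cry alone in the dark but i smile for you", lexicon)
    print(words, counts)
```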


Due to the rapid growth of the Internet, the streaming music market has expanded steadily, allowing us to obtain plenty of songs at any time. Nevertheless, among this huge amount of music, helping users obtain the songs they are interested in at a lower search cost has become a difficult challenge, and music recommendation systems play an important role in this issue.

In this thesis, we propose an English music recommendation system that analyzes the emotion and scene context of lyrics. The ConceptNet semantic network and the NRC emotion lexicon are used to extract the emotion terms and the scene terms of the lyrics, respectively. We regard each song as a distribution over multiple emotion topics and train a Labeled LDA model to obtain this distribution as the song's emotion feature. Furthermore, to better capture the mood transitions often found in lyrics, we propose a labeling strategy based on lyrics structure that trains the chorus and the other paragraphs separately. Meanwhile, TF-IDF term weighting is applied to each song to obtain the scene feature of its lyrics, and a multi-label classifier is then used to find candidate songs with the same scene. Finally, the similarity between the emotion features and the scene features is calculated to produce the recommendation list for the target song.

The experiments show that, for different numbers of recommended songs, our method improves the precision of the system suitability evaluation by 6.1% to 20% compared with the current method, and in the user system-preference experiment the proposed system achieves a preference ratio above 23%.
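
The scene-feature and multi-label classification steps described above could be prototyped with scikit-learn roughly as in the sketch below. The scene labels, the toy lyric strings, and the choice of logistic regression as the base classifier are assumptions for illustration, not the exact configuration used in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy training data: lyrics text and the scene labels attached to each song (illustrative).
lyrics = [
    "dancing all night under neon lights in the city",
    "driving down the highway with the windows down",
    "alone in my room watching the rain on the glass",
    "we raise our glasses and sing along at the party",
]
scenes = [["party", "night"], ["road trip"], ["rainy day"], ["party"]]

# TF-IDF scene features: each song becomes a weighted bag-of-words vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(lyrics)

# Multi-label scene classifier: one binary classifier per scene label.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(scenes)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Predict scene labels for a new (target) song, then keep training songs
# that share at least one predicted scene as candidate songs.
target = ["tonight we dance until the morning light"]
target_scenes = set(mlb.inverse_transform(clf.predict(vectorizer.transform(target)))[0])
candidates = [i for i, s in enumerate(scenes) if target_scenes & set(s)]
print(target_scenes, candidates)
```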

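The final step, ranking candidate songs by the similarity of their emotion and scene features to those of the target song, could look roughly like the following sketch. The weighting parameter alpha and the random stand-in vectors are purely illustrative; they are not the tuned parameter or the actual Labeled LDA topic distributions and TF-IDF scene vectors from the thesis.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def recommend(target_emotion, target_scene, cand_emotion, cand_scene,
              alpha=0.5, top_n=5):
    """Rank candidate songs by a weighted sum of emotion and scene similarity.

    target_emotion / target_scene: 1-D feature vectors of the target song.
    cand_emotion / cand_scene: 2-D arrays, one row per candidate song.
    alpha: weight between emotion and scene similarity (illustrative value,
           not the one selected in the thesis's parameter experiment).
    """
    emo_sim = cosine_similarity(target_emotion.reshape(1, -1), cand_emotion)[0]
    scn_sim = cosine_similarity(target_scene.reshape(1, -1), cand_scene)[0]
    score = alpha * emo_sim + (1 - alpha) * scn_sim
    return np.argsort(score)[::-1][:top_n]  # indices of the best candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in features: 8 emotion topics and 20 scene terms for 50 candidate songs.
    cand_emotion = rng.random((50, 8))
    cand_scene = rng.random((50, 20))
    target_emotion, target_scene = rng.random(8), rng.random(20)
    print(recommend(target_emotion, target_scene, cand_emotion, cand_scene))
```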
Table of Contents:
Chinese Abstract
English Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
  1.1 Background
  1.2 Motivation and Objectives
  1.3 Thesis Organization
Chapter 2: Literature Review
  2.1 Related Work
    2.1.1 Studies on Lyrics Analysis
    2.1.2 Music Recommendation Systems Based on Lyric Emotion Analysis
  2.2 Labeled Latent Dirichlet Allocation
  2.3 Multi-label Classifiers
  2.4 Lexicons
  2.5 Lyrics Structure
Chapter 3: System Architecture
  3.1 System Overview
  3.2 Song Collection and Lyrics Preprocessing
    3.2.1 Song Collection
    3.2.2 Lyrics Preprocessing
  3.3 Emotion Feature Generation
    3.3.1 Extracting Emotion Words
    3.3.2 Emotion Topic Model
    3.3.3 Generating Emotion Features
  3.4 Scene Feature Generation
    3.4.1 Extracting Scene Words
    3.4.2 Generating Scene Features
  3.5 Scene Classification
    3.5.1 Scene Types
    3.5.2 Scene Classification and Candidate Lyrics
  3.6 Song Similarity Computation
Chapter 4: Experimental Results and Analysis
  4.1 Song Dataset
  4.2 Experiment 1: Multi-label Classifier
    4.2.1 Method
    4.2.2 Evaluation Metrics
    4.2.3 Results
  4.3 Experiment 2: Recommendation Score Parameters
    4.3.1 Method
    4.3.2 Evaluation Metrics
    4.3.3 Results
  4.4 Experiment 3: System Suitability Evaluation
    4.4.1 Method
    4.4.2 Evaluation Metrics
    4.4.3 Results
  4.5 Experiment 4: System Preference
    4.5.1 Method
    4.5.2 Results
Chapter 5: Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
References


Full-text release date: 2026/09/30 (campus network)
Full-text release date: not authorized for public access (off-campus network)
Full-text release date: not authorized for public access (National Central Library: Taiwan thesis system)