
Author: Kuo-Yu Chu (朱國瑀)
Thesis Title: A Study of Generating Reviews of Different Categories via Generative Adversarial Nets (生成式對抗網路於產生不同類別評論之研究)
Advisor: Cheng-Kang Chen (陳正綱)
Committee: Yuan-Cheng Lai (賴源正), Shih-Chao Cha
Degree: Master
Department: College of Management - Department of Information Management
Thesis Publication Year: 2020
Graduation Academic Year: 108
Language: Chinese
Pages: 40
Keywords (in Chinese): 網路評論 (online reviews), 生成對抗網路 (generative adversarial networks), 機器學習 (machine learning)
Reference times: Clicks: 301, Downloads: 0

  The internet has become an indispensable part of modern life and a major channel through which many people obtain information. Many consumers check store or product reviews online before making a purchase, and customer reviews strongly influence both merchants and other consumers. For customers who have already purchased a product, however, writing a review takes considerable time, or they may not know how to express their feelings. This thesis aims to help consumers write down their opinions, so that merchants and other potential consumers can understand what they think of the purchased products.
  To achieve this goal, this thesis uses generative adversarial networks to generate reviews for different categories or purposes, producing fluent reviews that reflect consumers' opinions. The proposed model is based on the SeqGAN (sequence generative adversarial network) architecture and adds a classification component to the discriminator, training the generator to produce reviews for different categories. Experiments on real Chinese and English datasets show that this approach can indeed generate reviews that match a specified category.

The internet plays an important role in our lives and has changed the way people shop. Purchasing goods and services online has become common practice, and many shoppers consult customer reviews before making a purchase. Customer reviews are important, yet many customers do not leave them. To help customers express their opinions, we use a generative adversarial network (GAN) to automatically generate customer reviews of different categories. Experimental results on three datasets show that our model improves on the baseline and can generate reviews in specific categories.
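The setup described above (a SeqGAN-style generator trained by policy gradient against a discriminator that also judges category) can be illustrated with a toy sketch. The following is a minimal pure-Python illustration, not the thesis's actual model: the vocabulary, the `ON_TOPIC` reward table, and all function names are invented for demonstration, a fixed hand-written reward stands in for a learned discriminator, and a single-token "review" stands in for a full generated sequence.

```python
import math
import random

random.seed(0)

VOCAB = ["good", "bad", "tasty", "slow", "friendly"]
CATEGORIES = ["food", "service"]

# Toy "generator": per-category unigram logits over the vocabulary.
logits = {c: [0.0] * len(VOCAB) for c in CATEGORIES}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(category):
    """Sample a token index from the category-conditioned distribution."""
    probs = softmax(logits[category])
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Hand-written stand-in for the discriminator's category-aware reward:
# 1.0 when the generated token fits the target category, else 0.0.
ON_TOPIC = {"food": {"tasty"}, "service": {"slow", "friendly"}}

def reward(category, token_idx):
    return 1.0 if VOCAB[token_idx] in ON_TOPIC[category] else 0.0

def policy_gradient_step(category, lr=0.5):
    """One REINFORCE update: scale the log-prob gradient by the reward."""
    idx = sample_token(category)
    r = reward(category, idx)
    probs = softmax(logits[category])
    for i in range(len(VOCAB)):
        # d/d logit_i of log p(idx) = 1[i == idx] - p_i
        grad = (1.0 if i == idx else 0.0) - probs[i]
        logits[category][i] += lr * r * grad
    return r

for _ in range(500):
    for c in CATEGORIES:
        policy_gradient_step(c)

# After training, rewarded (on-topic) tokens dominate each category.
food_probs = softmax(logits["food"])
print(VOCAB[food_probs.index(max(food_probs))])
```

The essential point mirrors the abstract: the generator never sees category labels directly; it only receives a scalar reward from a category-aware critic, and the policy-gradient updates shift probability mass toward category-consistent output.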

Table of Contents
Chapter 1 Introduction
  1.1 Research Background
  1.2 Research Motivation
  1.3 Research Objectives
  1.4 Thesis Organization
Chapter 2 Literature Review
  2.1 Artificial Neural Networks
  2.2 Artificial Intelligence and Machine Learning
  2.3 Activation Functions
  2.4 Recurrent Neural Networks
  2.5 Convolutional Neural Networks
  2.6 Generative Adversarial Networks
  2.7 Optimizers
Chapter 3 Proposed Model
  3.1 Data Processing
  3.2 Generator
  3.3 Discriminator
  3.4 Adversarial Training
Chapter 4 Experiments
  4.1 Experimental Environment
  4.2 Datasets
  4.3 Hyperparameter Settings
  4.4 Evaluation Metrics
  4.5 Experimental Results
Chapter 5 Conclusion
References


Full text public date: 2025/07/28 (Intranet public)
Full text public date: This full text is not authorized to be published. (Internet public)
Full text public date: This full text is not authorized to be published. (National library)