
Student: Karina Dita Mandasari
Thesis Title: Predicting Website Complexity Based on Physiological Signals Using Convolutional Neural Network-Based Method
Advisor: Chiuhsiang Joe Lin (林久翔)
Committee Members: Yu-Chung Tsao (曹譽鐘), Shi-Woei Lin (林希偉)
Degree: Master
Department: Department of Industrial Management, School of Management
Year of Publication: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 92
Keywords: Website Complexity, Website Usability, Human Physiological Signals, Convolutional Neural Network, Time Series Classification, Statistical Analysis
    With the increasing use of the internet, e-commerce has grown more rapidly, and people prefer online information to paper, which creates an opportunity for companies to develop and improve their web pages. One of the factors affecting user satisfaction is the visual complexity of a web page, so the evaluation of website complexity is becoming increasingly important, especially with respect to user perception. To assess it objectively using physiological signals, this study compares the performance measures of several algorithms, employing the state-of-the-art convolutional neural network (CNN) because it is very powerful for feature extraction.
    Applications of CNNs in the time series domain include speech recognition, financial trading prediction, and human activity recognition. These successful applications motivated this study to use time series data from human physiological signals to improve the evaluation of website complexity with CNN-based algorithms. Human physiological signals collected from websites at three complexity levels (low, medium, and high) were fed to the CNNs for learning, and several parameters of the CNN algorithms were tuned to obtain better performance, such as the kernel size, the number of convolutional layers, the number of filters, and the time series length. Finally, the performance of the CNN algorithms was evaluated with metrics such as accuracy, precision, recall, F1-score, loss, and AUC.
    Because only one dataset was available, this study used user-based repeated k-fold cross-validation to replicate the time series data. To compare several CNN algorithms, one-way ANOVA and the Kruskal-Wallis H test were used for statistical analysis. The results show that CNNs can be used to predict website complexity, and that there are statistically significant differences in training time and loss among the four algorithms (Dilated CNN, Depthwise Separable CNN, LeNet, and ResNet). LeNet, the simplest algorithm, achieved better performance in terms of loss; because it has the fewest parameters, it also had the shortest training time among the algorithms. In addition, smaller kernel sizes work better in the CNN algorithms for extracting important information from time series data. The findings provide website managers and designers with prediction models for evaluating websites in terms of visual complexity in order to achieve high user satisfaction.


    The growth of e-commerce has accelerated with the increasing use of the internet, and Taiwanese consumers prefer webrooming to showrooming. This presents an opportunity for companies to develop and improve their web pages. One of the aspects affecting user satisfaction is the visual complexity of a web page, so the evaluation of website complexity becomes increasingly important, especially with regard to user perception. To assess it objectively using physiological signals, several algorithms are compared in terms of performance measures. The state-of-the-art method, the Convolutional Neural Network (CNN), is used in this study because it is very powerful for feature extraction.
    Various applications of CNNs in the time series domain can be found in speech recognition, financial trading prediction, and human activity recognition. These successful applications motivate this study to use time series data from human physiological signals to improve on subjective evaluation of website complexity with CNN-based algorithms. The algorithms are used to predict website complexity at three levels (low, medium, and high) from human physiological signals. Several parameters of the CNN algorithms are adjusted to obtain better performance, such as the kernel size, the number of convolutional layers, the number of filters, and the time series length. Performance metrics such as accuracy, precision, recall, F1-score, loss, and AUC are used to evaluate the algorithms' performance.
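To make the tunable pieces concrete, the following is a minimal sketch of a 1-D CNN classifier for three-class physiological time series, written in TensorFlow/Keras. It is not the thesis's actual architecture: the series length, channel count, and default hyperparameter values are illustrative assumptions, and only the exposed arguments (kernel size, number of convolution layers, number of filters, input length) correspond to the parameters varied in the study.

```python
# Minimal 1-D CNN sketch for three-class time series classification.
# Assumes TensorFlow/Keras; shapes and defaults are illustrative only.
from tensorflow.keras import layers, models

def build_cnn(series_length=1000, n_channels=1, n_classes=3,
              n_conv_layers=2, n_filters=32, kernel_size=3):
    """Stacked Conv1D blocks followed by a softmax classification head."""
    inputs = layers.Input(shape=(series_length, n_channels))
    x = inputs
    for _ in range(n_conv_layers):
        # Each block: temporal convolution + downsampling along the time axis.
        x = layers.Conv1D(n_filters, kernel_size, padding="same",
                          activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.GlobalAveragePooling1D()(x)  # collapse the time axis
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # low / medium / high
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: a variant with a small kernel and three convolution layers.
model = build_cnn(kernel_size=3, n_conv_layers=3, n_filters=64)
model.summary()
```

In this spirit, each architecture compared in the thesis (Dilated CNN, Depthwise Separable CNN, LeNet, ResNet) can be treated as a different builder function evaluated under the same training and evaluation protocol.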
    This study uses user-based repeated k-fold cross-validation to replicate the time series data, since only one dataset is available. To compare the algorithms, one-way ANOVA and the Kruskal-Wallis H test are used for statistical analysis. The results show that CNNs can be used to predict website complexity. Moreover, there are statistically significant differences in training time and loss between the four algorithms: Dilated CNN, Depthwise Separable CNN, LeNet, and ResNet. LeNet, as the simplest algorithm, yields better performance in terms of loss; it also has the shortest training time because it has the fewest parameters. In addition, a small kernel size works better in the CNN algorithms for extracting important information from time series data. The research results provide website managers and designers with prediction models for evaluating websites in terms of visual complexity in order to achieve high user satisfaction.
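As a hedged illustration of the comparison step only, the sketch below feeds made-up per-fold accuracies (placeholders, not the thesis's results) for the four algorithms into SciPy's one-way ANOVA and Kruskal-Wallis H test. In the actual workflow these scores would come from the user-based repeated k-fold procedure, for example by grouping splits by participant so that no user's signals appear in both training and test folds.

```python
# Statistical comparison sketch with placeholder scores (not real results).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder per-fold accuracies for the four algorithms
# (e.g. 5 folds x 3 repeats = 15 scores each); purely illustrative numbers.
scores = {
    "Dilated CNN":             rng.normal(0.80, 0.03, 15),
    "Depthwise Separable CNN": rng.normal(0.81, 0.03, 15),
    "LeNet":                   rng.normal(0.82, 0.02, 15),
    "ResNet":                  rng.normal(0.80, 0.04, 15),
}
groups = list(scores.values())

# One-way ANOVA: do the four algorithms have the same mean score?
f_stat, p_anova = stats.f_oneway(*groups)

# Kruskal-Wallis H test: rank-based alternative when normality is doubtful.
h_stat, p_kruskal = stats.kruskal(*groups)

print(f"ANOVA:          F = {f_stat:.3f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kruskal:.4f}")
```

The same pattern applies to any per-fold metric reported in the thesis (training time, loss, F1-score, AUC): collect one value per fold per algorithm, then run the parametric and non-parametric tests side by side.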

COVER
摘要 (Abstract in Chinese)
ABSTRACT
ACKNOWLEDGMENT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF APPENDIXES
CHAPTER 1 INTRODUCTION
    1.1 Background and Motivation
    1.2 Research Statement
    1.3 Objectives
    1.4 Limitations
    1.5 Organization of Thesis
CHAPTER 2 LITERATURE REVIEW
    2.1 Artificial Intelligence
    2.2 Deep Learning
    2.3 Convolutional Neural Network
    2.4 Time Series Application
        2.4.1 Time Series Clustering
        2.4.2 Time Series Prediction
        2.4.3 Time Series Classification
    2.5 Statistical Comparison of Multiple Algorithms
    2.6 Research Gap
CHAPTER 3 METHODOLOGY
    3.1 Dataset
    3.2 Data Pre-processing
    3.3 Data Preparation
    3.4 Model Architectures
        3.4.1 Dilated Convolutional Neural Network
        3.4.2 Depthwise Separable Convolutional Neural Network
        3.4.3 LeNet
        3.4.4 Residual Network (ResNet)
    3.5 Non-Linearities Hyperparameters
    3.6 Model Implementation and Training
    3.7 Model Evaluation
    3.8 Statistical Analysis
CHAPTER 4 RESULTS AND DISCUSSION
    4.1 Variable Selection
    4.2 Model Evaluation Results
    4.3 Performance Comparison Among Algorithms
    4.4 Discussion
CHAPTER 5 CONCLUSION AND FUTURE RESEARCH
    5.1 Conclusion
    5.2 Future Research Suggestions
REFERENCES
APPENDIX


Full-Text Release Date: 2025/07/04 (campus network)
Full-Text Release Date: not authorized for public release (off-campus network)
Full-Text Release Date: not authorized for public release (National Central Library: Taiwan Dissertations and Theses System)