
Author: Sheng-Yuan Yang (楊聲遠)
Thesis Title: Deep Learning Models for Time Series Classification (深度學習於時間序列分類之研究)
Advisor: Fu-Kwun Wang (王福琨)
Committee Members: Chao Ou-Yang (歐陽超), Wen-Yih Lee (李文義)
Degree: Master
Department: Department of Industrial Management, College of Management
Publication Year: 2019
Academic Year of Graduation: 107
Language: English
Pages: 50
Keywords: Deep learning, Time series classification, Convolutional neural network, Generative adversarial network
Statistics: 393 views, 0 downloads
Time series classification is a popular and challenging research problem in the fields of data mining and machine learning, with a wide range of real-world applications. However, most related studies rely on traditional algorithms or conventional machine learning classifiers, and comparatively few apply deep learning methods. In this thesis, we therefore propose three squeeze-and-excitation network (SE-Net) models, benchmark them on 30 datasets from the UCR time series archive using three evaluation measures, and compare their results with 11 previously published approaches. The experimental results show that all of the proposed models perform well; among the 14 methods compared, SE-DenseNet achieves the best MPCE score. To analyze the results further, we adopt statistical measures to examine how the classification behavior of the 14 methods is related, and we exploit the global average pooling (GAP) layers in our models together with the class activation mapping (CAM) technique to visualize the regions on which the models base their decisions. Finally, we apply an auxiliary classifier generative adversarial network (AC-GAN) to the training set of the Earthquakes dataset and use the generated data both as a stand-alone synthetic training set and for data augmentation. With SE-DenseNet as the training model and the test set held fixed, the augmented training set yields a test accuracy of 76.26%, higher than training on the original training set (70.5%) or on the purely synthetic set (74.82%), and the AUC score also improves slightly.
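The SE-Net models referred to above augment convolutional feature maps with squeeze-and-excitation blocks that reweight channels. The following is a minimal Keras-style sketch of such a block for 1D time-series features; the filter count, kernel size, and reduction ratio are illustrative assumptions rather than the thesis's actual settings.

```python
from tensorflow.keras import layers

def se_block(x, reduction_ratio=16):
    """Recalibrate the channels of a (batch, time, channels) feature map."""
    channels = x.shape[-1]
    # Squeeze: average each channel over the time axis.
    s = layers.GlobalAveragePooling1D()(x)
    # Excitation: a two-layer bottleneck yields per-channel weights in (0, 1).
    s = layers.Dense(channels // reduction_ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    # Rescale: broadcast the weights back along the time axis.
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])

# Hypothetical usage inside a 1D convolutional stack:
inputs = layers.Input(shape=(None, 1))                        # univariate series
h = layers.Conv1D(64, 8, padding="same", activation="relu")(inputs)
h = se_block(h)                                               # recalibrated features
```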
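MPCE, the score by which the models are ranked above, is commonly computed by dividing each benchmark dataset's error rate by its number of classes and averaging the quotients across datasets; the short sketch below assumes that definition.

```python
import numpy as np

def mpce(error_rates, class_counts):
    """error_rates[k] and class_counts[k] describe the k-th benchmark dataset."""
    pce = np.asarray(error_rates, dtype=float) / np.asarray(class_counts)
    return pce.mean()

# Worked example with two hypothetical datasets: a 10% error rate on a
# 2-class dataset and a 30% error rate on a 5-class dataset give PCE
# values of 0.05 and 0.06, so MPCE = 0.055.
print(mpce([0.10, 0.30], [2, 5]))   # 0.055
```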
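The visualization step combines the GAP layer with CAM: the softmax weights of a chosen class form a weighted sum over the last convolutional feature maps, giving a per-time-step contribution score. The sketch below shows one way to compute this for a Keras model whose output Dense layer directly follows GAP; the layer names are placeholders, not the thesis's actual architecture.

```python
import numpy as np
from tensorflow.keras.models import Model

def compute_cam(model, series, class_index,
                last_conv_name="last_conv", output_name="output"):
    """Per-time-step contribution of a univariate `series` to `class_index`."""
    # Feature maps of the last convolution: shape (time, channels).
    feature_model = Model(model.input, model.get_layer(last_conv_name).output)
    feature_maps = feature_model.predict(series[np.newaxis, :, np.newaxis])[0]
    # The softmax kernel has shape (channels, n_classes); take the class column.
    class_weights = model.get_layer(output_name).get_weights()[0][:, class_index]
    # CAM(t) = sum_k w_{k,c} * A_k(t): a weighted sum over channels.
    return feature_maps @ class_weights
```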
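For the Earthquakes experiment, synthetic series sampled from a trained AC-GAN generator are used either on their own or appended to the real training set before the classifier is retrained. The rough sketch below assumes a Keras-style generator conditioned on a noise vector and a class label; `generator`, `latent_dim`, and the per-class sample count are placeholders, not the thesis's actual settings.

```python
import numpy as np

def augment_with_acgan(x_train, y_train, generator, latent_dim, n_per_class):
    """Append AC-GAN samples of every class to the real training data."""
    xs, ys = [x_train], [y_train]
    for label in np.unique(y_train):
        noise = np.random.normal(size=(n_per_class, latent_dim))
        labels = np.full((n_per_class, 1), label)
        # An AC-GAN generator takes both a noise vector and a class label;
        # its output is assumed to have the same shape as the real series.
        fake = generator.predict([noise, labels])
        xs.append(fake)
        ys.append(labels.ravel())
    return np.concatenate(xs), np.concatenate(ys)

# The augmented pair returned here would then replace (x_train, y_train)
# when the SE-DenseNet classifier is retrained.
```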

Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1. Introduction
  1.1. Research Background
  1.2. Motivation
  1.3. Research Objective
  1.4. Thesis Structure
Chapter 2. Literature Review
  2.1. Convolutional Neural Network
    2.1.1. Brief Introduction
    2.1.2. Convolution Layer
    2.1.3. Pooling Layer
    2.1.4. Fully Connected Layer
  2.2. Residual Neural Network
  2.3. Inception
  2.4. Densely Connected Neural Network
  2.5. Class Activation Mapping
  2.6. Generative Adversarial Network
  2.7. Long Short-Term Memory
Chapter 3. Research Methodology
  3.1. Research Framework
  3.2. Time Series Classification with CNN
  3.3. Squeeze-and-Excitation Block
  3.4. Network Architectures
  3.5. Auxiliary Classifier Generative Adversarial Network
Chapter 4. Experiment Results
  4.1. Descriptions of Datasets and Compared Methods
  4.2. Experiment Settings
  4.3. Evaluation Measures
  4.4. Analyses of Comparison Results
  4.5. Visualizations
  4.6. Analyses of Synthetic Data
    4.6.1. Introduction and Preliminary Preparation
    4.6.2. Settings of AC-GAN
    4.6.3. Testing Results
Chapter 5. Conclusion and Future Works
References


Full text release date: 2024/07/24 (campus network)
Full text release date: 2024/07/24 (off-campus network)
Full text release date: 2024/07/24 (National Central Library: Taiwan NDLTD system)