
Graduate student: Cheng-Han Li (李承翰)
Thesis title: Independent Recurrent Neural Network with Feature Selection for Remaining Useful Life Prediction for Bearing (具特徵選擇之獨立遞歸神經網路於軸承剩餘使用壽命預測之研究)
Advisor: Ren-Jieh Kuo (郭人介)
Committee members: Chao Ou-Yang (歐陽超), Shi-Woei Lin (林希偉)
Degree: Master
Department: College of Management - Department of Industrial Management
Year of publication: 2019
Academic year of graduation: 107 (2018-2019)
Language: English
Pages: 56
Chinese keywords: 預測性維修、剩餘使用壽命、特徵選擇、深度學習、遞歸神經網路、獨立遞歸神經網路、基因演算法
English keywords: Predictive maintenance, Remaining useful life, Feature selection, Deep learning, Recurrent neural network, Independent recurrent neural network, Genetic algorithm
  • How to properly plan equipment maintenance has always been an important issue on a production line. From preventive maintenance to predictive maintenance, the goal is to reduce the probability of unexpected production stoppages, thereby lowering costs and preventing industrial accidents. This study focuses on predictive maintenance. For feature extraction, 12 time-domain, 5 frequency-domain, and 8 time-frequency-domain features are used; for feature selection, a genetic algorithm picks the most suitable features from all candidates as the model's input; and an independent recurrent neural network is used to assess the equipment's health condition. This study combines a metaheuristic algorithm with deep learning techniques and applies the result to predicting the remaining useful life of ball bearings.
    To validate the proposed integrated prediction model, this study uses the public dataset provided by the IEEE 2012 PHM Data Challenge competition and compares the results with four methods: the competition winner Sutrisno et al. (2012), Guo et al. (2017), a recurrent neural network, and an independent recurrent neural network. The experiments show that, compared with the other four methods, the metaheuristic-based independent recurrent neural network achieves better prediction results.
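The genetic algorithm's role in feature selection can be pictured as a search over binary masks on the candidate features. The sketch below is a minimal illustration only, not the thesis's exact procedure: the truncation selection, one-point crossover, mutation rate, and fitness function are all assumptions (in the thesis, fitness would be derived from the IndRNN's prediction error on validation data).

```python
import random

def ga_feature_selection(features, fitness, n_gen=30, pop_size=20, p_mut=0.05):
    """Illustrative GA-based feature selection: each chromosome is a binary
    mask over the candidate features, and fitness(mask) returns a score to
    maximize (e.g. negative validation error of the downstream RUL model).
    This is a hypothetical simplification of the thesis procedure."""
    n = len(features)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return [f for f, keep in zip(features, best) if keep]
```

The returned subset would then be fed to the prediction model; keeping the elite half unchanged each generation preserves the best masks found so far.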


    Planning equipment maintenance is a major issue for any production line. From preventive maintenance to predictive maintenance, the main purpose is to reduce cost by lowering the chance of unexpected shutdowns. Thus, this study proposes a genetic algorithm-based independent recurrent neural network (GA-based IndRNN), a deep learning technique, and applies it to predicting the remaining useful life of ball bearings from vibration signals. In total, 25 features extracted from three domains are considered: 12 features in the time domain, 5 in the frequency domain, and 8 in the time-frequency domain. For feature selection, a genetic algorithm (GA) is utilized to choose the most suitable feature subset for the model.
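The time-domain features are statistical descriptors computed directly on the raw vibration signal. The exact 12 features used in the thesis are not listed on this page, so the sketch below computes a few representative and commonly used ones (RMS, peak, crest factor, kurtosis) purely as an illustration:

```python
import numpy as np

def time_domain_features(x):
    """Compute a few common time-domain descriptors of a vibration signal.
    Illustrative examples only; the thesis uses 12 time-domain features
    whose exact definitions are not given in this record."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))            # root mean square (signal energy)
    peak = np.max(np.abs(x))                  # peak amplitude
    crest = peak / rms                        # crest factor: peak relative to RMS
    mean, std = x.mean(), x.std()
    kurtosis = np.mean((x - mean) ** 4) / std ** 4  # impulsiveness indicator
    return {"rms": rms, "peak": peak, "crest": crest, "kurtosis": kurtosis}
```

Kurtosis and crest factor are popular in bearing prognostics because impulsive defect-induced spikes raise both well before overall signal energy changes.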
    To validate the proposed method, this study uses the dataset provided by the IEEE 2012 PHM Data Challenge competition. The results of the proposed method are compared with four methods: the winner of the competition (Sutrisno et al., 2012), Guo et al. (2017), RNN, and IndRNN. The experimental results indicate that the GA-based IndRNN performs better than the other methods.
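The defining idea of IndRNN (Li et al., 2018) is that each hidden unit keeps a single scalar recurrent weight instead of sharing a full recurrent matrix, i.e. h_t = relu(W x_t + u * h_{t-1} + b) with element-wise recurrence. A minimal NumPy sketch of one step follows; the weights here are arbitrary illustrations, not the trained GA-based model from the thesis:

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN step (Li et al., 2018): each hidden unit has an
    independent scalar recurrent weight, so the update is
    h_t = relu(W @ x_t + u * h_prev + b). The element-wise recurrence
    u * h_prev replaces the full recurrent matrix of a vanilla RNN,
    which eases gradient flow over long sequences."""
    return np.maximum(0.0, W @ x_t + u * h_prev + b)

# Toy run over a short random sequence.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3
W = rng.standard_normal((n_hidden, n_in)) * 0.1
u = np.full(n_hidden, 0.9)   # per-unit recurrent weights
b = np.zeros(n_hidden)
h = np.zeros(n_hidden)
for t in range(5):
    h = indrnn_step(rng.standard_normal(n_in), h, W, u, b)
```

Because each unit's recurrence is a scalar multiplication, the contribution of u to the gradient is easy to bound, which is what lets IndRNNs be stacked deeper and trained on longer sequences than vanilla RNNs.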

    CONTENTS
    摘要 (Chinese Abstract)
    ABSTRACT
    致謝 (Acknowledgements)
    CONTENTS
    LIST OF TABLES
    LIST OF FIGURES
    CHAPTER 1 INTRODUCTION
      1.1 Research Background
      1.2 Research Objectives
      1.3 Research Scope and Constraints
      1.4 Thesis Organization
    CHAPTER 2 LITERATURE REVIEW
      2.1 Deep Learning
      2.2 Recurrent Neural Network (RNN)
        2.2.1 Long Short-term Memory (LSTM)
        2.2.2 Bidirectional Recurrent Neural Network (BRNN)
        2.2.3 Gated Recurrent Unit (GRU)
        2.2.4 Independent Recurrent Neural Network (IndRNN)
      2.3 Genetic Algorithm (GA)
      2.4 Predictive Maintenance
    CHAPTER 3 RESEARCH METHODOLOGY
      3.1 Methodology Framework
      3.2 Objective Function
      3.3 Feature Extraction
      3.4 Feature Selection
      3.5 Genetic Algorithm-based IndRNN Algorithm
    CHAPTER 4 EXPERIMENTAL RESULTS
      4.1 Dataset
      4.2 Parameter Setting
      4.3 Computational Results
      4.4 Feature Selection
      4.5 Statistical Hypothesis
    CHAPTER 5 CONCLUSIONS AND FUTURE RESEARCH
      5.1 Conclusions
      5.2 Contributions
      5.3 Future Research
    REFERENCES
    Appendix A FEATURE SELECTION METHODS
    Appendix B COMPUTATIONAL RESULTS

    REFERENCES
    Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157-166.
    Bezdek, J. C., Boggavarapu, S., Hall, L. O., & Bensaid, A. (1994, June). Genetic algorithm guided clustering. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence (pp. 34-39). IEEE.
    Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
    Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
    Davis, J., Edgar, T., Porter, J., Bernaden, J., & Sarli, M. (2012). Smart manufacturing, manufacturing intelligence and demand-dynamic performance. Computers & Chemical Engineering, 47, 145-156.
    Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211.
    Gebraeel, N., Lawley, M., Liu, R., & Parmeshwaran, V. (2004). Residual life predictions from vibration-based degradation signals: a neural network approach. IEEE Transactions on Industrial Electronics, 51(3), 694-700.
    Gers, F. A., Schmidhuber, J., & Cummins, F. (1999). Learning to forget: Continual prediction with LSTM.
    Gers, F. A., & Schmidhuber, J. (2000). Recurrent nets that time and count. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium (Vol. 3, pp. 189-194). IEEE.
    Gertsbakh, I. (2013). Reliability theory: with applications to preventive maintenance. Springer.
    Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.
    Graves, A., & Schmidhuber, J. (2005). Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6), 602-610.
    Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H., & Schmidhuber, J. (2008). A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 855-868.
    Graves, A., Mohamed, A. R., & Hinton, G. (2013, May). Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 6645-6649). IEEE.
    Grall, A., Dieulle, L., Bérenguer, C., & Roussignol, M. (2002). Continuous-time predictive-maintenance scheduling for a deteriorating system. IEEE Transactions on Reliability, 51(2), 141-150.
    Guo, L., Li, N., Jia, F., Lei, Y., & Lin, J. (2017). A recurrent neural network based health indicator for remaining useful life prediction of bearings. Neurocomputing, 240, 98-109.
    Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527-1554.
    Holland, J. H. (1992). Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT press.
    Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
    Hochreiter, S., Bengio, Y., Frasconi, P., & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies.
    Jin, X., Sun, Y., Que, Z., Wang, Y., & Chow, T. W. (2016). Anomaly detection and fault prognosis for bearings. IEEE Transactions on Instrumentation and Measurement, 65(9), 2046-2054.
    Kalchbrenner, N., & Blunsom, P. (2013). Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (pp. 1700-1709).
    Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
    LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
    Li, S., Li, W., Cook, C., Zhu, C., & Gao, Y. (2018). Independently recurrent neural network (IndRNN): Building a longer and deeper RNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5457-5466).
    Lipton, Z. C., Berkowitz, J., & Elkan, C. (2015). A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019.
    Liu, H., & Motoda, H. (2012). Feature selection for knowledge discovery and data mining (Vol. 454). Springer Science & Business Media.
    Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., & Alsaadi, F. E. (2017). A survey of deep neural network architectures and their applications. Neurocomputing, 234, 11-26.
    Maulik, U., & Bandyopadhyay, S. (2000). Genetic algorithm-based clustering technique. Pattern Recognition, 33(9), 1455-1465.
    Mikolov, T. (2012). Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 80.
    Mobley, R. K. (2002). An introduction to predictive maintenance. Elsevier.
    Murthy, C. A., & Chowdhury, N. (1996). In search of optimal clusters using genetic algorithms. Pattern Recognition Letters, 17(8), 825-832.
    Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10) (pp. 807-814).
    Nectoux, P., Gouriveau, R., Medjaher, K., Ramasso, E., Chebel-Morello, B., Zerhouni, N., & Varnier, C. (2012, June). PRONOSTIA: An experimental platform for bearings accelerated degradation tests. In IEEE International Conference on Prognostics and Health Management, PHM'12. (pp. 1-8). IEEE Catalog Number: CPF12PHM-CDR.
    Ólafsson, S. (2006). Metaheuristics. Handbooks in operations research and management science, 13, 633-654.
    Park, K. S. (1988). Optimal continuous-wear limit replacement under periodic inspections. IEEE Transactions on Reliability, 37(1), 97-102.
    Park, K. S. (1988). Optimal wear-limit replacement with wear-dependent failures. IEEE Transactions on Reliability, 37(3), 293-294.
    Salakhutdinov, R., & Larochelle, H. (2010, March). Efficient learning of deep Boltzmann machines. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 693-700).
    Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.
    Schuster, M., & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11), 2673-2681.
    Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.
    Such, F. P., Madhavan, V., Conti, E., Lehman, J., Stanley, K. O., & Clune, J. (2017). Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567.
    Susto, G. A., Schirru, A., Pampuri, S., McLoone, S., & Beghi, A. (2014). Machine learning for predictive maintenance: A multiple classifier approach. IEEE Transactions on Industrial Informatics, 11(3), 812-820.
    Sutrisno, E., Oh, H., Vasan, A. S. S., & Pecht, M. (2012, June). Estimation of remaining useful life of ball bearings using data driven methodologies. In 2012 IEEE Conference on Prognostics and Health Management (pp. 1-7). IEEE.
    Tsao, E. C. K., Bezdek, J. C., & Pal, N. R. (1994). Fuzzy Kohonen clustering networks. Pattern Recognition, 27(5), 757-764.
    Unler, A., & Murat, A. (2010). A discrete particle swarm optimization method for feature selection in binary classification problems. European Journal of Operational Research, 206(3), 528-539.
    Wang, X., Yang, J., Teng, X., Xia, W., & Jensen, R. (2007). Feature selection based on rough sets and particle swarm optimization. Pattern Recognition Letters, 28(4), 459-471.
    Werbos, P. J. (1990). Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10), 1550-1560.
    Yang, J., & Honavar, V. (1998). Feature subset selection using a genetic algorithm. In Feature extraction, construction and selection (pp. 117-136). Springer, Boston, MA.
    Yang, X. S. (2010). Nature-inspired metaheuristic algorithms. Luniver press.
    Yusta, S. C. (2009). Different metaheuristic strategies to solve the feature selection problem. Pattern Recognition Letters, 30(5), 525-534.

    Full-text release date: 2024/07/16 (campus network)
    Full-text release date: 2024/07/16 (off-campus network)
    Full-text release date: 2024/07/16 (National Central Library: Taiwan NDLTD system)