
Graduate Student: 邱作娜 (Ni Putu Novita Puspa Dewi)
Thesis Title: Forecasting Diesel Power Backup using FOA Optimized GRNN and RNN with Autoencoder
Advisor: 呂永和 (Yungho Leu)
Committee Members: 楊維寧 (Wei-Ning Yang)
陳雲岫 (Yun-Shiow Chen)
Azhari SN
Khabib Mustofa
Mardhani Riasetiawan
Degree: Master
Department: College of Management - Department of Information Management
Year of Publication: 2019
Graduation Academic Year: 107
Language: English
Pages: 105
Keywords: Electricity forecasting, Gaussian Process, FOAGRNN, Autoencoder, GRU, LSTM
Hits: 200; Downloads: 0
Abstract: In a developing country like Indonesia, maintaining a high-quality electricity supply is important to the country's continuous development. This study employs advanced deep learning techniques to forecast the diesel backup power output of a power plant in Indonesia. Because hyper-parameter optimization in deep learning is time-consuming, we used Bayesian optimization with a Gaussian Process (GP) and the Fruit Fly Optimization Algorithm (FOA) to optimize the hyper-parameter settings. We compared the performance of a General Regression Neural Network (GRNN) optimized by FOA against Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models optimized by GP. To improve prediction accuracy, we used an LSTM autoencoder to encode the input sequence; with the encoded input, both the LSTM and the GRU offered significantly higher prediction accuracy than their counterparts without input encoding. The experimental results showed that the GRU with the autoencoder and Gaussian Process optimization yielded the smallest prediction error of all the compared models.
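The FOAGRNN idea in the abstract can be illustrated with a minimal sketch: a GRNN is Gaussian-kernel regression with a single smoothing parameter sigma, and FOA searches for the sigma that minimizes validation error by letting "fruit flies" scatter around a swarm location, scoring each candidate by its smell (here, the inverse of RMSE), and moving the swarm to the best fly. The function names, search ranges, and toy sine data below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """GRNN output: Gaussian-kernel weighted average of training targets."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def foa_optimize_sigma(x_train, y_train, x_val, y_val,
                       n_flies=20, n_iter=40, seed=0):
    """FOA: flies fly randomly around the swarm; fitness is validation RMSE."""
    rng = np.random.default_rng(seed)
    swarm = np.array([1.0, 1.0])              # swarm (X, Y) location
    best_sigma, best_rmse = 1.0, np.inf
    for _ in range(n_iter):
        # random flight around the current swarm location
        flies = swarm + rng.uniform(-0.5, 0.5, size=(n_flies, 2))
        # distance to the origin maps each fly to a candidate sigma
        dist = np.linalg.norm(flies, axis=1)
        sigmas = 1.0 / np.clip(dist, 1e-6, None)
        rmses = np.array([
            np.sqrt(np.mean((grnn_predict(x_train, y_train, x_val, s) - y_val) ** 2))
            for s in sigmas])
        i = rmses.argmin()
        if rmses[i] < best_rmse:
            best_rmse, best_sigma = rmses[i], sigmas[i]
            swarm = flies[i]                  # swarm moves to the best fly
    return best_sigma, best_rmse

# toy usage: fit a noisy sine, holding out every other point for validation
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 6, 80))
y = np.sin(x) + rng.normal(0, 0.1, 80)
sigma, rmse = foa_optimize_sigma(x[::2], y[::2], x[1::2], y[1::2])
print(sigma, rmse)
```

The 1/distance mapping from a fly's position to a candidate sigma follows the usual FOA formulation; any positive monotone mapping would serve the same role.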



ABSTRACT
ACKNOWLEDGEMENT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF APPENDIX
Chapter 1. Introduction
Chapter 2. Literature Review
  2.1. Related Work
  2.2. Theoretical Foundation
    2.2.1. Fruit Fly Optimization Algorithm
    2.2.2. Bayesian Optimization using Gaussian Process
    2.2.3. General Regression Neural Networks
    2.2.4. Long Short-Term Memory
    2.2.5. Gated Recurrent Unit
Chapter 3. Methods
  3.1. Dataset Description
  3.2. Data Preprocessing
    3.2.1. Calculation of Line to Line Voltage
    3.2.2. Min-Max Normalization
  3.3. FOAGRNN
  3.4. RNN Models
    3.4.1. LSTM and GRU Model
    3.4.2. Autoencoder
    3.4.3. RNNs Optimization Process
    3.4.4. Activation Function
Chapter 4. Experimental Result
  4.1. Experimental Setup
    4.1.1. FOAGRNN and RNN Models
    4.1.2. SARIMA Model
  4.2. Experimental Results
    4.2.1. Forecasting Performance on FOAGRNN
    4.2.2. Forecasting Performance on RNN Models
    4.2.3. Performance Comparisons of All Models
    4.2.4. Performance Comparisons on Other Cases
    4.2.5. Final Comparison with the Traditional Model
Chapter 5. Conclusions and Future Research
  5.1. Result Summary
  5.2. Limitation
  5.3. Future Research
REFERENCES
APPENDIX
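The two preprocessing steps named in Chapter 3 (Sections 3.2.1 and 3.2.2) can be sketched as follows. The √3 relation between line-to-line and phase voltage holds for a balanced three-phase system; the sample voltages are illustrative assumptions, not values from the thesis dataset.

```python
import numpy as np

def line_to_line_voltage(v_phase):
    """Balanced three-phase system: V_LL = sqrt(3) * V_LN (line-to-neutral)."""
    return np.sqrt(3.0) * v_phase

def min_max_normalize(x):
    """Scale values to [0, 1], the usual range before training an RNN."""
    xmin, xmax = x.min(), x.max()
    return (x - xmin) / (xmax - xmin)

v = np.array([220.0, 225.0, 230.0])        # illustrative phase voltages (V)
print(line_to_line_voltage(v).round(1))    # ≈ [381.1 389.7 398.4]
print(min_max_normalize(v))                # → [0.  0.5 1. ]
```

Min-max scaling must be fit on the training split only and then applied to the validation and test splits, so that no information about the test distribution leaks into training.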

