
Student: Jhih-Min Liou (柳志民)
Thesis Title: A Preliminary Study on Time Series Simulation using Neural Networks (類神經網路應用於歷時模擬之初步研究)
Advisor: Rwey-Hua Cherng (陳瑞華)
Committee Members: Van Jeng (鄭蘩), Chu-Chieh Jay Lin (林主潔), Ching-Tung Huang (黃慶東)
Degree: Master
Department: Department of Civil and Construction Engineering, College of Engineering
Year of Publication: 2005
Graduating Academic Year: 93 (2004-2005)
Language: Chinese
Number of Pages: 146
Keywords (Chinese): time series simulation; neural networks
Keywords (English): time series
    Traditional approaches to time-series simulation usually involve complicated mathematical operations. This study instead lets neural networks learn to simulate near-normally distributed time-series data containing uncertainty, offering an alternative approach to time-series simulation. First, a replicator neural network (RNN) fitted to near-normally distributed time-series data is built using a combination of learning algorithms and an optimized network architecture, yielding a compression network (RNNl), a decompression network (RNNu), and the compressed time-series data. Next, a back-propagation neural network (BNN) and a stochastic neural network (SNN) are trained on the compressed data to simulate the compressed time series. RNNl, BNN, and RNNu are then combined into a time-series generator neural network (TGNN1); likewise, RNNl, SNN, and RNNu form a second generator (TGNN2). The two generator networks serve as time-series generators for near-normally distributed data. Feeding one time series repeatedly to the trained TGNN1 produces multiple time series, while feeding one time series to the trained TGNN2 yields at least as many time series as there are input vectors. TGNN2 produces time series with the same frequency content as the input, and the generated data reproduce the first four statistical moments and the one-sided spectral density function with good accuracy. The simulation accuracy of TGNN1 is not as good as that of TGNN2.
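
    The compression stage can be pictured with a small numerical sketch. The following Python/NumPy code is an illustration written for this summary, not the thesis's implementation: the layer sizes, the tanh activation, the learning rate, and the synthetic data are all assumptions. It trains a minimal replicator (autoencoder-style) network and then splits it into a compression half and a decompression half, playing the roles of RNNl and RNNu.

# Illustrative sketch only: a minimal replicator network trained with plain
# gradient descent, then split into a compression half ("RNNl") and a
# decompression half ("RNNu"). All settings here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "near-normal" windows: 200 samples, 32 points each. Pure noise is
# not really compressible; real near-normal records would have temporal structure.
X = rng.normal(0.0, 1.0, size=(200, 32))

n_in, n_hid = X.shape[1], 8             # compress 32-point windows to 8 values
W1 = rng.normal(0, 0.1, (n_in, n_hid))  # encoder weights ("RNNl")
b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in))  # decoder weights ("RNNu")
b2 = np.zeros(n_in)
lr = 0.01

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)            # compressed representation
    Y = H @ W2 + b2                     # reconstruction
    err = Y - X                         # replicator target is the input itself
    # Backpropagate the mean-squared reconstruction error.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def compress(x):                        # the "RNNl" half
    return np.tanh(x @ W1 + b1)

def decompress(h):                      # the "RNNu" half
    return h @ W2 + b2

Z = compress(X)                         # compressed time-series data
print("reconstruction MSE:", np.mean((decompress(Z) - X) ** 2))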


    In the past, engineers have used many different methods to model time series. These methods typically rely on complicated nonlinear mathematical models whose many parameters must be established through dedicated procedures. When neural networks are used to generate time series, no complicated mathematical model or equation needs to be defined. This thesis applies a set of neural network techniques and mechanisms, providing an alternative time-series modeling method, to generate specific time-series data, taking the simulation of near-normal time series as the example and presenting a two-stage approach. In the first stage, a trained replicator neural network is used as a data-compression tool: it compresses the vectors of discrete time-series data into vectors of much smaller dimension. In the second stage, a trained stochastic neural network or a trained back-propagation neural network learns to relate the compressed time-series data to other compressed time-series data. The replicator neural network is then combined with the back-propagation neural network or the stochastic neural network, trained to associate one time series with another, to form TGNN1 and TGNN2. These two time-series generator neural networks serve as near-normal time-series generators. Finally, TGNN1 and TGNN2 are used to simulate time-series data and the results are analyzed.
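
    How the trained pieces are assembled into a generator can likewise be sketched. In the following illustrative Python/NumPy fragment (again not the thesis's implementation), random stand-in weight matrices take the places of the trained RNNl, BNN/SNN, and RNNu; a deterministic pass mimics the TGNN1-style composition, while Gaussian noise added to the compressed vector mimics the stochastic units that let a TGNN2-style generator produce many different time series from one input.

# Illustrative sketch only: composing a compression half, a mapping network, and
# a decompression half into a time-series generator. The weight matrices below
# are random placeholders; in the thesis they would be the trained RNNl, the
# trained BNN or SNN, and the trained RNNu.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 32, 8

W_enc = rng.normal(0, 0.1, (n_in, n_hid))   # compression half   ("RNNl")
W_map = rng.normal(0, 0.1, (n_hid, n_hid))  # mapping network    ("BNN"/"SNN")
W_dec = rng.normal(0, 0.1, (n_hid, n_in))   # decompression half ("RNNu")

def generate(x, stochastic=False, noise_scale=0.1):
    """Map one time-series window to a simulated window.

    stochastic=False mimics the deterministic TGNN1-style composition;
    stochastic=True perturbs the compressed vector, so repeated calls on the
    same input yield different outputs, mimicking a TGNN2-style generator.
    """
    z = np.tanh(x @ W_enc)                  # compress the input window
    z = np.tanh(z @ W_map)                  # relate compressed data to compressed data
    if stochastic:
        z = z + rng.normal(0.0, noise_scale, z.shape)
    return z @ W_dec                        # decompress back to a full window

seed_window = rng.normal(0.0, 1.0, n_in)    # one input time-series window
samples = [generate(seed_window, stochastic=True) for _ in range(5)]
print(np.std(samples, axis=0).mean())       # nonzero: repeated calls differ

    Because the perturbation acts in the compressed space, each call returns a different full-length window, while the decompression half keeps the outputs consistent with the representation learned during compression.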

    List of Tables
    List of Figures
    Chapter 1  Introduction
      1.1  Motivation
      1.2  Organization of the Thesis
    Chapter 2  Power Spectral Density Analysis
      2.1  Introduction
      2.2  Random Processes and Spectra
      2.3  Simulation of Artificial Time Series
      2.4  Spectral Estimation Methods
        2.4.1  Equations of the Parametric Methods
        2.4.2  Properties of the Parametric Methods
        2.4.3  The Burg Estimation Method
        2.4.4  Optimal Model Order
    Chapter 3  Review of the Neural Network Literature
      3.1  Introduction
        3.1.1  Overview of Neural Networks
        3.1.2  Development of Neural Network Research
      3.2  Back-Propagation Neural Networks
        3.2.1  Multilayer Feedforward Neural Networks
        3.2.2  Learning Algorithms and Determination of an Adaptive Network Architecture
          3.2.2.1  Quick Back-Propagation Learning Algorithm
          3.2.2.2  Locally Adaptive Learning-Rate Algorithm
          3.2.2.3  Consistent Connection-Weight Update Method
          3.2.2.4  Determination of the Optimal Network Architecture
      3.3  Stochastic Neural Networks
        3.3.1  Introduction
        3.3.2  Types of Stochastic Neural Networks
          3.3.2.1  The Boltzmann Machine
          3.3.2.2  The Gaussian Machine
          3.3.2.3  The Stochastic Neuron S2
    Chapter 4  Construction of the Time-Series Generator Neural Networks
      4.1  Introduction
      4.2  Compressing Time-Series Data with a Replicator Neural Network
        4.2.1  Theory of the Replicator Neural Network
        4.2.2  Architecture of the Replicator Neural Network
      4.3  Simulating Compressed Time-Series Data with a Back-Propagation Neural Network
      4.4  Simulating Compressed Time-Series Data with a Stochastic Neural Network
      4.5  Construction of the Time-Series Generator Neural Networks
    Chapter 5  Simulation of Univariate Near-Normally Distributed Time Series
      5.1  Introduction
      5.2  Example Data
      5.3  Compression of the Near-Normally Distributed Time-Series Data
        5.3.1  Replicator Neural Network Architecture
        5.3.2  Replicator Neural Network Training
        5.3.3  Replicator Neural Network Performance
          5.3.3.1  Verification with the Training Data
          5.3.3.2  Verification with the Test Data
      5.4  TGNN1 Simulation of Near-Normally Distributed Time Series
        5.4.1  TGNN1 Architecture
        5.4.2  TGNN1 Training and Performance
      5.5  TGNN2 Simulation of Near-Normally Distributed Time Series
        5.5.1  TGNN2 Architecture
        5.5.2  TGNN2 Training
        5.5.3  TGNN2 Performance
          5.5.3.1  Verification with the Training Data
          5.5.3.2  Verification with the Test Data
    Chapter 6  Conclusions and Suggestions
      6.1  Conclusions
      6.2  Suggestions
    References

    Ackley, D. H., Hinton, G. E. and Sejnowski, T. J. (1985), A Learning Algorithm for Boltzmann Machines, Cognitive Science 9, 147-169.

    Akaike, H. (1970), Statistical Predictor Identification, Ann. Inst. Statist. Math., 22, 203-217.

    Akaike, H. (1974), A New Look at the Statistical Model Identification, IEEE Trans. Autom. Control, AC19, 716-723, Dec.

    Akiyama, Y., Yamashita, A., Kajiura, M. and Aiso, H. (1989), Combinatorial Optimization with Gaussian machines, IEEE IJCNN 1, 533-540.

    Ash, T. (1989), Dynamic Node Creation in Backpropagation Networks, ICS Report 8901, Institute for Cognitive Science, University of California, San Diego, La Jolla.

    Burg, J. P. (1975), Maximum Entropy Spectral Analysis, Ph.D. dissertation, Stanford University, May.

    Cichocki, A. and Unbehauen, R. (1993), Neural networks for optimization and signal processing, John Wiley & Sons, Chichester.

    Fahlman, S. E. (1988), Faster-learning Variations on Back-propagation: An Empirical Study, Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, Los Altos, CA, 38-51.

    Gelenbe, E. (1989), Random Neural Networks with Negative and Positive Signals and Product Form Solution, Neural Computation 1, 502-510.

    Ghaboussi, J. (1993), An Overview of the Potential Applications of Neural Networks in Civil Engineering, Proceedings, ASCE Structures Congress ’93, Irvine, California.

    Ghaboussi, J. (1994), Some Applications of Neural Networks in Structural Engineering, Proceedings, Structures Congress, ASCE, Atlanta, GA.

    Ghaboussi, J., Banan, M. R. and Florom, R. L. (1994), Application of Neural Networks in Acoustic Wayside Fault Detection in Railway Engineering, Proceedings of World Congress on Railway Research, Paris, France.

    Ghaboussi, J., Garrett, J. H., Jr. and Wu, X. (1990), Material Modeling with Neural Networks, Proceedings of the International Conference on Numerical Methods in Engineering: Theory and Application, Swansea, U. K., 701-717.

    Ghaboussi, J., Garrett, J. H., Jr. and Wu, X. (1990), Knowledge-based Modeling of Material Behavior with Neural Networks, Journal of Engineering Mechanics Division, ASCE 117(1), 132-153.

    Ghaboussi, J. and Lin, C. C. J. (1998), New method of Generating Spectrum Compatible Accelerograms using Neural Networks, Earthquake Engng. Struct. Dyn. 27(4), 377-396.

    Grossberg, S. (1976), Adaptive Pattern Classification and Universal Recoding: Part 1. Parallel Development and Coding of Neural Feature Detectors, Biological Cybernetics 23, 121-134.

    Hecht-Nielsen, R. (1995), Replicator Neural Networks for Optimal Source Coding, Science 269, 1860-1863.

    Hecht-Nielsen, R. (1996), Data Manifolds, Natural Coordinates, Replicator Neural Networks, and Optimal Source Coding, ICONIP-96.

    Hertz, J., Krogh, A. and Palmer, R. G. (1991), Introduction to the Theory of Neural Computation, Addison-Wesley, Redwood City, California.

    Hinton, G. E. and Sejnowski, T. J. (1983), Optimal Perceptual Inference, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 448-453.

    Hinton, G. E., Sejnowski, T. J. and Ackley, D. H. (1984), Boltzmann Machines: Constraint Satisfaction Networks That Learn, Technical Report No. CMU-CS-84-110, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

    Hinton, G. E. and Sejnowski, T. J. (1986), Learning and Relearning in Boltzmann Machines, Parallel Distributed Processing, Vol. 1, Chap. 7.

    Hopfield, J. J. (1982), Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proceedings of the National Academy of Sciences 79, 2554-2558.

    Hornik, K., Stinchcombe, M. and White, H. (1989), Multilayer Feedforward Networks Are Universal Approximators, Neural Networks 2, 359-366.

    Jacobs, R. A. (1988), Increased Rates of Convergence Through Learning Rate Adaptation, Neural Networks 1, 295-307.

    Joghataie, A., Ghaboussi, J. and Wu, X. (1995), Learning and architecture determination through automatic node generation, International Conference on Artificial Neural Networks in Engineering, ANNIE ’95, St. Louis, Missouri.

    Kesten, H. (1958), Accelerated Stochastic Approximation, Annals of Mathematical Statistics 29, 41-59.

    Kohonen, T. (1974), An Adaptive Associative Memory Principle, IEEE Transactions on Computers C-23, 444-445.

    Kohonen, T. (1989), Self-Organization and Associative Memory (3rd ed.), Berlin: Springer-Verlag.

    Lin, C. C. J. (1999), A Neural Network Based Methodology for Generating Spectrum Compatible Earthquake Accelerograms, Ph.D. thesis, Dept. of Civil and Environmental Engineering, Univ. of Illinois at Urbana-Champaign, Urbana, Illinois.

    Lin, C. C. J. and Ghaboussi, J. (1997a), A New Method of Generating Artificial Earthquake Accelerograms Using Neural Networks, ICCCBE VII, Seoul, Korea.

    Lin, C. C. J. and Ghaboussi, J. (1997b), Replicator Neural Network in Generating Artificial Earthquake Accelerograms, Proc., ANNIE '97, St. Louis, Missouri, 377-396.

    Lin, C. C. J. and Ghaboussi, J. (1999), Stochastic Neural Networks in Generating Multiple Artificial Earthquake Accelerograms, Proc., ANNIE '99, St. Louis, Missouri, 1061-1066.

    More, A. and Deo, M. C. (2003), Forecasting Wind with Neural Networks, Marine Structures 16(1), 35-49.

    Parzen, E. (1976), An Approach to Time Series Modeling and Forecasting Illustrated by Hourly Electricity Demands, Tech. Rep. 37, Statistical Science Division, State University of New York, Jan.

    Plaut, D., Nowlan, S. and Hinton, G. (1986), Experiments on Learning by Back Propagation, Technical Report CMU-CS-86-126, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

    Rissanen, J. (1978), Modeling by Shortest Data Description, Automatica, 14, 465-471.

    Rumelhart, D. E. and Zipser, D. (1985), Feature Discovery by Competitive Learning, Cognitive Science 9, 75-112.

    Rumelhart, D. E. and McClelland, J. L. (Eds.) (1986), Parallel Distributed Processing: Explorations in the Microstructure of Cognition; Vol. 1: Foundations, The MIT Press, Cambridge, MA.

    Rumelhart, D. E., Hinton, G. E. and McClelland, J. L. (1986a), A General Framework for Parallel Distributed Processing, Parallel Distributed Processing, Vol. 1: Foundations, The MIT Press, MA.

    Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986b), Learning Internal Representations by Error Propagation, Parallel Distributed Processing, Vol. 1: Foundations, The MIT Press, MA.

    Saridis, G. N. (1970), Learning Applied to Successive Approximation Algorithms, IEEE Transactions on Systems Science and Cybernetics, SSC-6, 97-103.

    Sfetsos, A. (1999), A Comparison of Various Forecasting Techniques Applied to Mean Hourly Wind Speed Time Series, Renewable Energy 21(1), Sep., 23-35.

    Shawe-Taylor, J., Jeavons, P. and van Daalen, M. (1991), Probabilistic Bit Stream Neural Chip: Theory, Connection Science 3(3), 317-328.

    Silva, W. J. (1991), Global Characteristics and Site Geometry, Proceeding: NSF/EPRI Workshop on Dynamic Soil Properties and Site Characterization. Electric Power Research Institute, EPRI NP-7337.

    Specht, D. F. (1990), Probabilistic Neural Networks, Neural Networks 3, 109-118.

    Kay, S. M. (1988), Modern Spectral Estimation: Theory and Application.

    Wu, X. (1991), Neural Network-Based Material Modeling, Ph.D. Thesis, University of Illinois at Urbana-Champaign, Urbana, Illinois.

    Wu, X. and Ghaboussi, J. (1995), Neural Network Based Material Modeling, Civil Engineering Studies, SRS 599, University of Illinois, Urbana, Illinois.

    Yang, C. Y. (1985), Random Vibration of Structures.

    陳鶴修 (1998), 台灣地區風速頻譜之初步研究 (A Preliminary Study of Wind Speed Spectra in Taiwan), Master's thesis, Graduate Institute of Construction Engineering, National Taiwan University of Science and Technology, advised by Rwey-Hua Cherng.

    張斐章, 張麗秋 and 黃浩倫 (2003), 類神經網路理論與實務 (Neural Networks: Theory and Practice), 東華書局, Sep. 2003.

    葉怡成 (2000), 類神經網路模式應用與實作 (Neural Network Models: Applications and Implementation), 儒林圖書, Jul. 2000.

    黃俊銘 (2001), 數值方法-使用Matlab程式語言 (Numerical Methods Using the Matlab Programming Language), 全華科技圖書, Oct. 2001.

    Full-Text Availability: Not authorized for public release (campus network, off-campus network, and the National Central Library's Taiwan thesis and dissertation system).