
Graduate Student: 林家強 (Chia-Chiang Lin)
Thesis Title: 倒傳遞類神經網路學習演算法之變動寬度動量項設計及輸出類型於分類問題之研究
(Study on Variable-Width Momentum and Output Types in the Backpropagation Learning Algorithm in Neural Networks for Classification Problems)
Advisor: 蘇順豐 (Shun-Feng Su)
Committee Members: 姚立德, 張志永, 王文智, 楊谷洋
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2005
Graduation Academic Year: 93 (ROC calendar)
Language: English
Number of Pages: 45
Chinese Keywords: 類神經 (neural networks), 倒傳遞 (backpropagation), 動量 (momentum)
Foreign-Language Keywords: momentum, neural, backpropagation
  • Two important issues in training neural networks with the backpropagation learning algorithm are the convergence rate and the generalization ability. Premature saturation is one of the major causes of slow convergence in backpropagation. Adding a momentum term to the algorithm is a common way to speed up convergence, but the momentum term is also linked to the occurrence of premature saturation. Premature saturation causes the error to stall at its current value, which reduces the learning efficiency of the network. Many researchers have worked on this problem, and its detailed mechanisms and conditions have been thoroughly analyzed. On the other hand, the generalization ability of a neural network determines the quality of the network after learning. In this thesis, a momentum term with a new structure, the variable-width momentum, is proposed; it is designed to prevent premature saturation while preserving the advantages of the original momentum term. Experiments on classification problems confirm its superiority in convergence rate. Furthermore, we investigate the relative generalization abilities of the bipolar and the unipolar sigmoid functions when used as activation functions in the backpropagation algorithm. We observe that these two functions implicitly carry a voting concept, and the experimental classification error rates agree with this observation.


    Two frequently considered issues for neural networks trained by the backpropagation (BP) algorithm are the convergence rate and the generalization ability. The slow convergence rate is a severe drawback of BP, and premature saturation is the main reason for it. Adding a momentum term to BP is a common technique to speed up convergence, but it may also trigger premature saturation. Premature saturation causes the error to be trapped at its current value and decreases the learning efficiency of neural networks. Much effort has been devoted to this problem, and its detailed mechanisms and conditions have been fully analyzed. In this thesis, a new structure of momentum, the variable-width momentum, is designed to prevent premature saturation while maintaining the advantages of momentum. The simulation results illustrate the superiority of the proposed variable-width momentum in convergence rate. Moreover, we discuss the generalization abilities obtained when the bipolar sigmoid function or the unipolar sigmoid function is used as the activation function of BP on classification problems. The bipolar sigmoid function carries the notions of "for" and "against", which can be used for voting. In our simulation results, we found that this approach can achieve better classification ability than using the unipolar sigmoid function.
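    For readers unfamiliar with the momentum term the abstract builds on, the sketch below shows the classical fixed-coefficient momentum update and the saturation mechanism it interacts with. This is a minimal illustration of the standard formulation, not the thesis's proposed variable-width momentum (whose update rule is defined in Chapter 3); the toy objective, learning rate, and momentum coefficient are illustrative choices, not taken from the thesis.

    ```python
    import numpy as np

    def sigmoid(x):
        # Unipolar sigmoid. Its derivative s * (1 - s) tends to 0 as s -> 0 or 1,
        # which is why saturated units barely learn (premature saturation).
        return 1.0 / (1.0 + np.exp(-x))

    def descend(grad_fn, w0, lr=0.05, mu=0.0, steps=50):
        # Gradient descent with the classical momentum term:
        #   dw(t) = -lr * grad(t) + mu * dw(t-1)
        w = np.asarray(w0, dtype=float)
        dw = np.zeros_like(w)
        for _ in range(steps):
            dw = -lr * grad_fn(w) + mu * dw
            w = w + dw
        return w

    # Toy objective E(w) = 0.5 * ||w||^2, so grad E = w (not from the thesis).
    grad = lambda w: w
    w_plain = descend(grad, [4.0, -3.0], mu=0.0)  # plain gradient descent
    w_mom = descend(grad, [4.0, -3.0], mu=0.6)    # momentum: much faster here
    print(np.linalg.norm(w_plain), np.linalg.norm(w_mom))

    # Saturation mechanism: the sigmoid derivative vanishes at the tails.
    s = sigmoid(np.array([0.0, 5.0, 10.0]))
    print(s * (1 - s))  # ~[0.25, 6.6e-3, 4.5e-5]
    ```

    The momentum term accelerates convergence by accumulating past weight changes, but the same accumulated step can push unit inputs deep into the sigmoid tails, which is the link between momentum and premature saturation discussed above.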

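    The two output types compared in the thesis are standard activation functions, and their definitions below are well known. The voting reading of the bipolar output follows the abstract's description; the three-class example values are hypothetical, not from the thesis's experiments.

    ```python
    import numpy as np

    def unipolar(x):
        # Maps to (0, 1): a unit can signal "how much", but has no natural
        # negative, "against" output.
        return 1.0 / (1.0 + np.exp(-x))

    def bipolar(x):
        # Maps to (-1, 1); algebraically equal to tanh(x / 2). The sign of
        # the output can be read as a vote "for" (+) or "against" (-).
        return 2.0 / (1.0 + np.exp(-x)) - 1.0

    # Hypothetical 3-class output layer: with bipolar outputs, each unit
    # casts a signed vote and the class with the strongest "for" vote wins.
    net = np.array([0.8, -1.5, 0.3])  # illustrative pre-activations
    votes = bipolar(net)
    print(votes, "-> predicted class", int(np.argmax(votes)))
    ```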
    摘要 (Abstract in Chinese) i
    Abstract ii
    誌謝 (Acknowledgements) iii
    Contents iv
    List of Tables vi
    List of Figures vii
    Chapter 1 Introduction
        1.1 Artificial Intelligence 1
        1.2 Motivation and Contribution 2
        1.3 Organization of the Thesis 3
    Chapter 2 Introduction of Artificial Neural Network and Backpropagation Algorithm
        2.1 Introduction of Artificial Neural Network 4
        2.2 Introduction of Backpropagation Algorithm 6
        2.3 Introduction of Momentum Term in the Backpropagation Algorithm 10
        2.4 Preparations of the Backpropagation Algorithm 11
        2.5 Problems of the Backpropagation Algorithm 13
        2.6 Phenomenon of Premature Saturation 15
        2.7 Four Necessary Conditions and Three Stages of Premature Saturation 17
    Chapter 3 Variable-Width Momentum and Investigation of Output Type in the Backpropagation Algorithm
        3.1 The Relationship between Momentum and Premature Saturation 20
        3.2 Introduction of Variable-Width Momentum 21
        3.3 Algorithm of Variable-Width Momentum 22
        3.4 Analysis of Variable-Width Momentum 24
        3.5 Investigation of Output Type in the Backpropagation Algorithm 26
    Chapter 4 Simulation Results 29
    Chapter 5 Conclusions and Future Work 41
    References 42
    作者簡介 (About the Author) 45


    Full-Text Release Date: Not authorized for public access (off-campus network)
    Full-Text Release Date: Not authorized for public access (National Central Library: Taiwan NDLTD system)