
Author: Li-Ching Lien (連儷晴)
Thesis Title: A Study on Combining GA and CNN to Search for Optimized Filters for Image Classification (以遺傳演算法結合卷積神經網路搜尋最佳卷積核作影像分類之研究)
Advisor: Chuan-Kai Yang (楊傳凱)
Committee: Bor-Shen Lin (林伯慎), Yuan-Cheng Lai (賴源正)
Degree: Master
Department: Department of Information Management, School of Management
Thesis Publication Year: 2020
Graduation Academic Year: 108
Language: Chinese
Pages: 125
Keywords: Machine Learning, Genetic Algorithm, Convolutional Neural Networks, Filter, Optimization, Two-dimensional Search, Image Classification
    When a convolutional neural network (CNN) is used for image classification, the convolution filters play a decisive role in extracting image features. This thesis proposes a method that combines a genetic algorithm (GA) with a CNN to search, for different image datasets, for the filters with the best feature-extraction ability for image classification.

    The experiments first use the GA to generate individuals encoding filters of different shapes and sizes. A fitness function built on the CNN then evaluates each individual's filter by its classification accuracy on the image data. Filters are selected according to fitness, and crossover and mutation are applied repeatedly so that the population evolves toward an optimal filter. The best filter model found so far is saved automatically, and the process ends once the stopping condition is met.

    On the experimental image databases, the optimal filters found by combining the GA with the CNN indeed outperform the square filters traditionally chosen by hand from experience. The results show that both the shape and the area of a filter affect the CNN's recognition accuracy. They also show that, combined with the GA, the CNN can find filters with better feature-extraction ability for different image databases, raising its image-recognition accuracy. For more complex images or larger datasets, the benefit of optimizing the filters with this GA-CNN method is even clearer: it saves the time of tuning parameters by hand and effectively improves the CNN's recognition accuracy.
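    The search loop described in the abstract can be sketched in plain Python. Note this is an illustrative sketch, not the thesis's implementation (the thesis evaluates each individual by actually training a CNN and measuring its validation accuracy, and builds on the DEAP library); here `cnn_fitness` is a hypothetical stand-in proxy so the GA mechanics can run on their own, and all names and parameters are assumptions.

    ```python
    # Sketch of the GA search over filter shapes described in the abstract.
    # Each individual is a filter shape (height, width); fitness in the thesis
    # is the validation accuracy of a CNN trained with that filter.
    import random

    random.seed(0)

    def cnn_fitness(shape):
        """Stand-in for training a CNN with an (h, w) filter and returning
        its classification accuracy. Hypothetical proxy: pretend a 3x5
        filter is optimal for this data."""
        h, w = shape
        return 1.0 / (1.0 + abs(h - 3) + abs(w - 5))

    def evolve(pop_size=20, generations=30, max_side=9):
        # Generate individuals encoding filters of different shapes and sizes.
        pop = [(random.randint(1, max_side), random.randint(1, max_side))
               for _ in range(pop_size)]
        best = max(pop, key=cnn_fitness)
        for _ in range(generations):
            # Selection: keep the fitter half of the population.
            pop.sort(key=cnn_fitness, reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = (a[0], b[1])            # crossover: recombine dimensions
                if random.random() < 0.2:        # mutation: nudge one side
                    child = (max(1, child[0] + random.choice((-1, 1))), child[1])
                children.append(child)
            pop = parents + children
            # Save the best filter found so far, as the thesis's loop does.
            gen_best = max(pop, key=cnn_fitness)
            if cnn_fitness(gen_best) > cnn_fitness(best):
                best = gen_best
        return best  # stopping condition here: a fixed generation budget

    print(evolve())
    ```

    Swapping the proxy for a real train-and-evaluate routine recovers the structure of the thesis's method; in practice each fitness call is then a full CNN training run, which is why the GA population and generation counts must stay small.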

    Table of Contents:
    Approval Certificate
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Definitions and Terminology
    Chapter 1  Introduction
        1.1  Motivation and Objectives
        1.2  Organization
    Chapter 2  Literature Review
        2.1  Related Work
        2.2  Genetic Algorithms
        2.3  Convolutional Neural Networks
            2.3.1  LeNet-5
            2.3.2  AlexNet
            2.3.3  GoogLeNet
        2.4  Related Methods Combined with Optimization Algorithms
            2.4.1  GA-based selection and parameter optimization for an SVM underwater target classifier
            2.4.2  Parameter optimization of deep-learning models using PSO
            2.4.3  An evolutionary algorithm for classifier ensembles in automated machine learning
    Chapter 3  Research Principles and Methods
        3.1  Preliminaries
        3.2  Genetic Algorithm
            3.2.1  GA workflow and description
            3.2.2  GA library used in this thesis
        3.3  Convolutional Neural Network
            3.3.1  CNN workflow
        3.4  Combining GA and CNN to search for optimal filters for image classification
            3.4.1  Method settings and workflow
            3.4.2  Algorithm settings
    Chapter 4  Experimental Results and Discussion
        4.1  Experimental environment
        4.2  Definitions used in the experiments
            4.2.1  Image-related definitions
            4.2.2  Workflow-related definitions
        4.3  Experiment 1: GA search for optimal CNN filters for handwritten-digit classification (MNIST)
            4.3.1  GA-searched CNN filters on MNIST
            4.3.2  Accuracy of rule-of-thumb filters at various sizes (square filters, linearly scaled)
            4.3.3  Accuracy of filters with the same aspect ratio but different areas (rectangular filters, linearly scaled)
            4.3.4  Accuracy of filters with the same area but different aspect ratios (rectangular filters, varying height and width)
            4.3.5  1-D vs. 2-D filters: CNN accuracy in Experiment 1
            4.3.6  Summary of Experiment 1
        4.4  Experiment 2: GA search for optimal CNN filters for ten-class color-image classification (CIFAR10)
            4.4.1  GA-searched CNN filters on CIFAR10
            4.4.2  Accuracy of rule-of-thumb filters at various sizes (square filters, linearly scaled)
            4.4.3  Accuracy of filters with the same aspect ratio but different areas (rectangular filters, linearly scaled)
            4.4.4  Accuracy of filters with the same area but different aspect ratios (rectangular filters, varying height and width)
            4.4.5  Runtime analysis for Experiment 2
            4.4.6  1-D vs. 2-D filters: CNN accuracy in Experiment 2
            4.4.7  Summary of Experiment 2
        4.5  Experiment 3: GA search for optimal CNN filters for five-class color-image classification (Caltech 101)
            4.5.1  GA-searched CNN filters on five Caltech 101 classes
            4.5.2  Accuracy of rule-of-thumb filters at various sizes (square filters, proportionally scaled)
            4.5.3  Accuracy of filters with the same aspect ratio but different areas (rectangular filters, proportionally scaled)
            4.5.4  Accuracy of filters with the same area but different aspect ratios
            4.5.5  1-D vs. 2-D filters: CNN accuracy in Experiment 3
            4.5.6  Summary of Experiment 3
    Chapter 5  Conclusions and Future Work
    References


    Full text public date: 2025/07/05 (Intranet)
    Full text not authorized for publication (Internet)
    Full text not authorized for publication (National Library)