Student: Chieh-Sheng Lin (林頡昇)
Thesis Title: A Defect Detection Model Based on Multi-Task Learning and Deep Image Semantic Segmentation for Machined Metal Parts
Advisor: Chin-Shyurng Fahn (范欽雄)
Oral Examination Committee: Sheng-Jyh Wang (王聖智), Jun-Wei Hsieh (謝君偉), Huei-Wen Ferng (馮輝文)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 74
Keywords: machined metal parts, defect detection, image semantic segmentation, multi-task learning, deep learning, Intersection over Union of defect areas
Views: 254; Downloads: 0
    Defect detection is critical for industrial production lines. If a defective product slips past the quality control department and reaches the market, at best it harms consumer perception and triggers customer complaints; at worst it can cause product explosions or fires, endangering life and property and damaging the company's reputation. Screening out defective parts before products leave the factory is therefore a major challenge for industrial production lines.
    Early defect screening was done manually, but because human inspectors tire and lose focus, the rate of missed defects was high. In recent years, machine-performed automatic defect detection has been introduced into industrial production lines at a growing rate; because machines do not fatigue and can run around the clock, manual inspection is gradually being phased out. Automatic defect detection initially relied on traditional image processing, in which defect image features were defined by hand and then passed to a machine for computation and classification. With the development of artificial intelligence, more and more machine learning methods have been proposed in which the machine learns defect features by itself and classifies them automatically; these methods further improve the accuracy of detecting defective parts and have become a mainstream approach today.
    This thesis proposes a defect detection model based on artificial-intelligence image recognition. Our main goal is to determine whether a machined metal part has defects and, at the same time, to locate them, so we adopt image semantic segmentation. The model builds on a classic encoder-decoder architecture: a residual network deepens the encoder to strengthen its feature extraction ability without network degradation; feature maps on selected paths in the encoder are copied to the later decoder for use during upsampling, supplying the deeper layers with rich shallow-layer features for image reconstruction; finally, multi-task learning is incorporated, with two decoders letting two related tasks learn jointly through our defect detection model while sharing the weights of the common encoder. The main task is segmentation of the defect regions, and the auxiliary task is segmentation of the body of the machined metal part; training the two tasks together through multi-task learning raises the segmentation precision of the main defect-region task.
    In our experiments, we use images of machined metal parts from a real industrial production line as the detection targets. The proposed method effectively locates defect regions on machined metal parts: in the evaluation of the Intersection over Union (IoU) of defect areas, our model exceeds the classic encoder-decoder architecture by 5.7%, reaching a defect-area IoU of 86.3%, and using multi-task learning raises the defect-area IoU by 2.8% over not using it, demonstrating its effectiveness. The experimental results show that the proposed defect detection model detects defects on machined metal parts well on a real production line.


    Defect detection is very important for industrial production lines. If defective products are not caught by the quality control department and reach the market, they may, at best, harm consumer perception and trigger customer complaints, or, at worst, cause product explosions and fires, endangering life and property and damaging the company's reputation. Screening out defective parts before products leave the factory is therefore a big challenge for industrial production lines.
    In the early days, defects were screened manually; because inspectors become fatigued and inattentive, the rate of missed defects was high. In recent years, however, the proportion of automatic, machine-performed defect detection introduced into industrial production lines has gradually increased. Because machines do not fatigue and can operate 24 hours a day, manual inspection is gradually being phased out. Automatic defect detection initially used traditional image processing methods, in which the characteristics of defect images are defined by hand and then fed to a machine for computation and classification. With the development of artificial intelligence technology, more and more machine learning methods have been proposed in which the machine learns defect features by itself and classifies defects automatically; these methods further improve the accuracy of detecting defective parts and have become a major mainstream approach today.
    This thesis proposes a defect detection model using artificial-intelligence image recognition. Our main goal is to determine whether there are defects on machined metal parts and to locate the defect positions, so we use image semantic segmentation as the main defect detection method. Our model is based on a classic encoder-decoder architecture. We use residual networks to deepen the encoder and to improve its feature extraction ability without degradation problems. At the same time, feature maps on specific paths are copied from the encoder to the decoder for upsampling, providing the richer features of the shallower layers so that the deeper network can reconstruct the image. Finally, we incorporate multi-task learning, using two decoders so that two related tasks learn together through our defect detection model while sharing the weight parameters of the common encoder. The main task is image segmentation of the defect area, and the auxiliary task is image segmentation of the body of the machined metal part; the two tasks are trained together through multi-task learning, which effectively improves the defect-area segmentation accuracy of the main task.
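    The shared-encoder, two-decoder arrangement described above can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the thesis's actual network: the class names, layer widths, and single skip connection are assumptions chosen for brevity, while the structural idea (residual-style encoder features copied to each decoder, two task heads sharing one encoder) follows the text.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Downsampling path shared by both tasks; returns the bottleneck
    plus one shallow feature map used as a skip connection."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

    def forward(self, x):
        skip = self.block1(x)                    # shallow, spatially rich features
        bottleneck = self.block2(self.pool(skip))
        return bottleneck, skip

class Decoder(nn.Module):
    """Upsampling path; merges the copied skip features before refining."""
    def __init__(self):
        super().__init__()
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.refine = nn.Conv2d(32, 1, 3, padding=1)  # 16 upsampled + 16 skip channels

    def forward(self, bottleneck, skip):
        x = self.up(bottleneck)
        x = torch.cat([x, skip], dim=1)          # feature merge (U-Net-style copy)
        return self.refine(x)                    # per-pixel mask logits

class MultiTaskSegNet(nn.Module):
    """One shared encoder, two task-specific decoders."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.defect_head = Decoder()             # main task: defect-area mask
        self.part_head = Decoder()               # auxiliary task: part-body mask

    def forward(self, x):
        bottleneck, skip = self.encoder(x)
        return self.defect_head(bottleneck, skip), self.part_head(bottleneck, skip)

net = MultiTaskSegNet()
defect_logits, part_logits = net(torch.randn(1, 1, 64, 64))
```

Training such a model would minimize a weighted sum of the two per-pixel segmentation losses (e.g. L = L_defect + λ·L_part), so gradients from both tasks update the shared encoder; the exact loss weighting used in the thesis is not stated in this abstract.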
    In our experiments, we use images of machined metal parts from a real industrial production line as the defect detection objects. Our proposed defect detection model can effectively locate the defect areas on machined metal parts. In the evaluation of the Intersection over Union (IoU) of defect areas, our defect detection model achieves 86.3%, which is 5.7% higher than the classic encoder-decoder architecture, and with multi-task learning its defect-area IoU is 2.8% higher than without it, showing the effectiveness of multi-task learning. The experimental results indicate that the proposed defect detection model has good defect detection ability for machined metal parts on real production lines.
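    The Intersection over Union figures quoted above compare a predicted defect mask against its ground-truth annotation. A minimal NumPy sketch of the metric (the function name and the convention that two empty masks score 1.0 are assumptions; the thesis's exact aggregation over the test set is not stated in this abstract):

```python
import numpy as np

def defect_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU of two binary masks: |A ∩ B| / |A ∪ B|."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union

# Toy 4x4 example: intersection is 1 pixel, union is 3 pixels.
pred = np.zeros((4, 4), dtype=np.uint8)
gt = np.zeros((4, 4), dtype=np.uint8)
pred[0, 0:2] = 1
gt[0, 1:3] = 1
print(defect_iou(pred, gt))  # → 0.3333333333333333
```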

    Chinese Abstract
    Abstract
    Acknowledgments
    Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Overview
      1.2 Motivation
      1.3 System Description
      1.4 Organization of Thesis
    Chapter 2 Related Work
      2.1 Traditional Image Processing Techniques
      2.2 Machine Learning Techniques
    Chapter 3 Image Semantic Segmentation
      3.1 Encoder
        3.1.1 Convolution Layer
        3.1.2 Deep Residual Network
      3.2 Decoder
        3.2.1 Interpolation
        3.2.2 Transposed Convolution
      3.3 Feature Merge
    Chapter 4 Multi-Task Network for Image Semantic Segmentation
      4.1 Multi-Task Learning
      4.2 Our Defect Detection Model
    Chapter 5 Experimental Results and Discussions
      5.1 Experimental Setup
        5.1.1 Introduction of Machined Metal Parts
        5.1.2 Image Acquisition
        5.1.3 Image Annotation
        5.1.4 Data Processing
        5.1.5 Experimental Environment
      5.2 Defect Detection Results of Machined Metal Parts
    Chapter 6 Conclusions and Future Works
      6.1 Conclusions
      6.2 Future Works
    References


    Full-Text Release Date: 2025/08/18 (campus network)
    Full-Text Release Date: 2030/08/18 (off-campus network)
    Full-Text Release Date: 2030/08/18 (National Central Library: Taiwan NDLTD system)