
Graduate Student: Han-Wei Wu (吳漢威)
Thesis Title: Study of Machine Learning based Automatic Defect Inspection and Application for Additive Manufacturing Object (基於機器學習之自動化瑕疵檢測及應用於積層製造物件研究)
Advisor: Ming-Jong Tsai (蔡明忠)
Committee Members: Min-Fan Lee (李敏凡), Wei-chen Lee (李維楨), Jian-Xian Lin (林建憲)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Year of Publication: 2022
Academic Year of Graduation: 110 (ROC calendar; 2021-2022)
Language: Chinese
Pages: 102
Keywords: Machine Learning, Additive Manufacturing, Automatic Optical Inspection, Defect Detection, YOLOv5
Abstract:
    Zero defects is the ultimate goal of total quality control (TQC) in manufacturing, and combining AI techniques with traditional automatic optical inspection (AOI) is a current trend in the industry. The purpose of this study is to develop machine learning based automatic defect inspection and apply it to additive manufacturing objects. First, a defect image database is established to support defect identification and classification. To acquire defect image data quickly and automatically, an imaging system that automatically captures multi-angle images is employed; it uses two cameras at different viewing angles to take top and side images of an object under a controllable forward light source. With this system, a total of 1,440 images were obtained from five additive manufacturing samples covering five defect types. The dataset is divided in a 64%:16%:20% ratio into training, validation, and test sets, so 922 training images serve as the model input.
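    As a quick check on these counts, the split arithmetic works out as 1440 × 0.64 = 921.6 ≈ 922 training images, 1440 × 0.16 = 230.4 ≈ 230 validation images, and the remaining 288 images (exactly 20%) for testing. A minimal Python sketch of such a split follows; the file names, fixed seed, and shuffling strategy are illustrative assumptions, not details taken from the thesis.

        # Illustrative 64%:16%:20% split of the 1,440-image dataset.
        # File names, seed, and shuffling are assumptions for illustration.
        import random

        images = [f"img_{i:04d}.jpg" for i in range(1440)]
        random.seed(42)        # fixed seed so the split is reproducible
        random.shuffle(images)

        n_train = round(len(images) * 0.64)   # 921.6 -> 922
        n_val = round(len(images) * 0.16)     # 230.4 -> 230
        train = images[:n_train]
        val = images[n_train:n_train + n_val]
        test = images[n_train + n_val:]       # remaining 288 images (20%)

        print(len(train), len(val), len(test))  # 922 230 288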
    This study uses YOLOv5 to build the defect detection models and, with 100 training epochs, compares the performance of different YOLOv5 model versions. The defect detection targets objects of the same type, and a two-stage approach splits the inspection between two models. The first-stage model distinguishes three classes: normal, semi-finished, and defective. The second stage classifies objects labeled defective in the first stage into four defect classes: hole, curled edge, break, and stain. Compared with a single model, the two-stage model improves defect classification precision by nearly 10%, and the mean average precision (mAP) on the validation set reaches almost 90%. On an inspection test of 80 images, the average inference time is only 14 ms, so the approach can be applied to real-time defect detection in the future.
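    To make the two-stage pipeline concrete, a minimal inference sketch is given below. The weight file names (stage1.pt, stage2.pt) and the class-name strings are hypothetical; only the two-stage flow itself reflects the thesis, with the models loaded through the standard Ultralytics YOLOv5 torch.hub interface.

        # Sketch of two-stage defect inspection with two YOLOv5 models.
        # Weight files and class names below are assumptions for illustration.
        import torch

        # Stage 1 distinguishes: normal / semi-finished / defective
        stage1 = torch.hub.load("ultralytics/yolov5", "custom", path="stage1.pt")
        # Stage 2 distinguishes: hole / curled edge / break / stain
        stage2 = torch.hub.load("ultralytics/yolov5", "custom", path="stage2.pt")

        def inspect(image_path):
            """Run stage 1; only objects flagged defective go to stage 2."""
            r1 = stage1(image_path)
            # Each row of results.xyxy[0] is [x1, y1, x2, y2, conf, class]
            labels = {stage1.names[int(c)] for c in r1.xyxy[0][:, 5]}
            if "defective" in labels:
                r2 = stage2(image_path)
                return {stage2.names[int(c)] for c in r2.xyxy[0][:, 5]}
            return labels

        print(inspect("sample.jpg"))  # e.g. {'normal'} or {'hole'}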

    Acknowledgments
    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1  Preface
        1.2  Research Motivation and Objectives
        1.3  Research Methods
        1.4  Thesis Organization
    Chapter 2  Literature Review and Related Technologies
        2.1  Literature Review
        2.2  Defect Inspection
        2.3  Automatic Optical Inspection (AOI)
        2.4  Image Processing
            2.4.1  Color Models
            2.4.2  Grayscale Conversion and Binarization
        2.5  Additive Manufacturing
        2.6  Machine Learning
            2.6.1  Model Evaluation
        2.7  Object Detection
            2.7.1  Evaluation Metrics
            2.7.2  YOLOv5
    Chapter 3  System Architecture and Research Methods
        3.1  Experimental Methods
        3.2  Experimental Hardware Architecture
            3.2.1  Automatic Image Acquisition System Hardware
            3.2.2  Automatic Image Acquisition System Software Control
        3.3  Program Architecture
            3.3.1  Automatic Image Acquisition System
            3.3.2  Image Processing
            3.3.3  Model Training and Construction
        3.4  Classification of 3D Printing Defects
    Chapter 4  Experimental Results and Discussion
        4.1  Camera Resolution
            4.1.1  Field of View
            4.1.2  Viewing Angle Calculation and Camera Calibration
        4.2  Light Source Intensity Tests
            4.2.1  Grayscale Distribution under Different Light Intensities
        4.3  Automatic Image Acquisition Results
        4.4  Image Processing Results
        4.5  Discussion of YOLOv5 Experimental Results
            4.5.1  Single-Model Classification
            4.5.2  Two-Stage Model Classification
    Chapter 5  Conclusions and Future Work
        5.1  Conclusions
        5.2  Future Work
    References


    Full-Text Release Date: 2024/08/28 (campus network)
    Full-Text Release Date: 2027/08/28 (off-campus network)
    Full-Text Release Date: 2027/08/28 (National Central Library: Taiwan NDLTD system)