
Student: 葉晴蓉 (Ching-Rong Yeh)
Thesis Title: 基於YOLO之FDM雙色漸層物件智慧瑕疵檢測研究
Study of Intelligent Defect Detection of Dual-color Gradient FDM Object Based on YOLO
Advisor: 蔡明忠 (Ming-Jong Tsai)
Committee Members: 李敏凡 (Min-Fan Lee), 張俊隆 (Chun-Lung Chang)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Year of Publication: 2023
Graduation Academic Year: 111 (2022-2023)
Language: Chinese
Number of Pages: 80
Chinese Keywords: YOLO、積層製造、熔融沉積成型、瑕疵檢測、曲面、雙色漸層
English Keywords: YOLO, Additive Manufacturing, Fused Deposition Modeling, Defect Detection, Curved Surfaces, Dual-color Gradient
    Quality inspection is a critical step in manufacturing, and additive manufacturing has been developing rapidly in the industry because of its ability to produce complex parts; among its processes, Fused Deposition Modeling (FDM) is the most widely used at both the consumer and enterprise level. In recent years, deep learning has moved into smart manufacturing, where traditional automated optical inspection is combined with artificial intelligence for multi-class defect detection, and image annotation is used to train models for recognition, improving the accuracy of object detection. The goal of this study is to apply YOLOv5 to intelligent defect detection of dual-color gradient FDM objects with curved surfaces. Because the uneven geometry of such objects and the incompatibility of their materials cause defects such as uneven color that are difficult to detect with conventional methods, using an intelligent defect detection system for quality inspection and classification during production allows problems to be found early and reduces labor and material costs. This study first establishes an image database: to acquire training data quickly and observe objects from multiple angles, an automated imaging system is used that includes two cameras with different viewing angles and a controllable light source illuminating the object surface. Images were captured from nine FDM objects, yielding a dataset of 1914 images, which was split 80%/20% into a training set (1531 images) and a validation set (383 images); additional samples were used as a test set to evaluate the model under realistic conditions. The YOLOv5 model is adopted, and the performance of its different versions (s, m, l, x) is compared after training for 100 epochs. Defect detection is performed on nine different samples covering six classes for training and detection: normal, warping, cracking, foreign objects, gaps, and uneven color. On the validation set, the model's average performance exceeds 90% and its mAP@0.5 also reaches 90%; evaluation on the test set likewise meets expectations, and the system is expected to be applied to real-time inspection scenarios in the future.
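
    The paragraph above specifies a 1914-image dataset split 80%/20% into 1531 training and 383 validation images. The following is a minimal Python sketch of such a split, assuming the captured images and their YOLO-format label files sit in flat "images/" and "labels/" directories; all paths and file extensions are illustrative assumptions, not details taken from the thesis.

    # Minimal sketch of the 80/20 train/validation split described in the abstract.
    # Assumptions: images/ holds the 1914 captured .jpg files, labels/ holds
    # YOLO-format .txt annotations with matching filenames; paths are illustrative.
    import random
    import shutil
    from pathlib import Path

    random.seed(0)  # fixed seed so the split is reproducible

    images = sorted(Path("images").glob("*.jpg"))
    random.shuffle(images)

    n_train = int(len(images) * 0.8)  # 1914 images -> 1531 train / 383 val
    splits = {"train": images[:n_train], "val": images[n_train:]}

    for split, files in splits.items():
        (Path("dataset/images") / split).mkdir(parents=True, exist_ok=True)
        (Path("dataset/labels") / split).mkdir(parents=True, exist_ok=True)
        for img in files:
            shutil.copy(img, Path("dataset/images") / split / img.name)
            label = Path("labels") / (img.stem + ".txt")
            if label.exists():  # copy the matching annotation if present
                shutil.copy(label, Path("dataset/labels") / split / label.name)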


    In the manufacturing industry, quality inspection is a critical process, and additive manufacturing (AM) has developed rapidly owing to its capability of producing complex parts. Among AM processes, Fused Deposition Modeling (FDM) is the most widely used by consumers and businesses. In recent years, deep learning has been applied to enhance quality inspection in manufacturing by combining traditional AOI with AI, particularly for multi-class defect detection. This study aims to develop an intelligent defect detection system for FDM dual-color gradient objects based on YOLOv5. Traditional inspection methods may struggle to detect defects such as uneven color caused by the curved shape or material incompatibility of dual-color gradient objects. By integrating an intelligent defect detection system into the production process, quality checks and classification can be performed to detect and address issues promptly, thus reducing labor and material costs. In this study, an image database is established with an automated imaging system equipped with two cameras and controllable lighting to acquire data quickly and enable multi-angle observation. A total of 1914 images are obtained from nine FDM objects and split into training (80%, 1531 images) and validation (20%, 383 images) sets; a separate test set is used to evaluate the model's performance. The YOLOv5 model is utilized, and the performance of different versions (s, m, l, x) is compared after training for 100 epochs. Defects are detected in nine different samples across six categories: good, curling, break, foreign, gap, and uneven. The model achieves an average performance and an mAP@0.5 of over 90% on the validation set. Furthermore, the model is evaluated on the test set to assess its suitability for real-time defect detection in the future.
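
    To make the training and comparison step concrete, the sketch below shows how the four YOLOv5 variants (s, m, l, x) could each be trained for 100 epochs and then evaluated on the validation set with the standard train.py and val.py scripts from the ultralytics/yolov5 repository. The dataset file name fdm_defects.yaml, the image size, and the batch size are assumptions for illustration; the abstract does not specify them.

    # Sketch: train and validate YOLOv5 s/m/l/x for 100 epochs each, assuming the
    # ultralytics/yolov5 repository is cloned and fdm_defects.yaml points to the
    # train/val folders and lists the six classes
    # (good, curling, break, foreign, gap, uneven).
    import subprocess

    for variant in ["s", "m", "l", "x"]:
        run_name = f"fdm_yolov5{variant}"
        # standard YOLOv5 training entry point; image/batch sizes are illustrative
        subprocess.run([
            "python", "train.py",
            "--data", "fdm_defects.yaml",
            "--weights", f"yolov5{variant}.pt",
            "--epochs", "100",
            "--img", "640",
            "--batch-size", "16",
            "--name", run_name,
        ], check=True)
        # report precision, recall, and mAP@0.5 on the validation set
        subprocess.run([
            "python", "val.py",
            "--data", "fdm_defects.yaml",
            "--weights", f"runs/train/{run_name}/weights/best.pt",
        ], check=True)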

    Acknowledgments; Chinese Abstract; Abstract; Table of Contents; List of Figures; List of Tables
    Chapter 1 Introduction: 1.1 Preface; 1.2 Research Motivation and Objectives; 1.3 Research Methods; 1.4 Thesis Organization
    Chapter 2 Literature Review and Related Technologies: 2.1 Literature Review; 2.2 Additive Manufacturing; 2.3 Defect Detection; 2.4 Automated Optical Inspection; 2.5 Image Processing; 2.6 Deep Learning Model Evaluation; 2.7 Introduction to YOLO
    Chapter 3 System Architecture and Research Methods: 3.1 Experimental Methods; 3.2 Automated Imaging System Mechanism; 3.3 Image Processing; 3.4 YOLOv5 Model Training; 3.5 FDM 3D Printing Defect Classification
    Chapter 4 Experimental Results and Discussion: 4.1 Automated Imaging Results; 4.2 Image Processing Results; 4.3 YOLOv5 Model Training Results
    Chapter 5 Conclusions and Future Prospects: 5.1 Conclusions; 5.2 Future Prospects
    References


    Full text public release date: 2026/08/16 (off-campus network)
    Full text public release date: 2026/08/16 (National Central Library: Taiwan NDLTD system)