
Author: Hao Fan-Chiang (范姜皓)
Thesis Title: A Convolutional Neural Network for AOI Defect Classification (AOI瑕疵影像深度學習卷積神經網路分類模型之研究)
Advisor: Kung-Jeng Wang (王孔政)
Committee Members: Chao Ou-Yang (歐陽超), Yi-Yung Chen (陳怡永)
Degree: Master
Department: Department of Industrial Management, School of Management
Year of Publication: 2019
Graduation Academic Year: 107 (ROC calendar)
Language: Chinese
Number of Pages: 45
Keywords: defect classification, deep learning, convolutional neural network, automated optical inspection
    Abstract: The miniaturization of electronic products and extremely strict yield requirements make Automated Optical Inspection (AOI) prone to over-screening as its sensitivity is raised, which often leads to false defect calls and higher manual re-inspection costs. Targeting the defects flagged by AOI, this study designs a four-stage experiment based on convolutional neural networks (CNN): defect images are classified and learned by CNNs to build an AOI defect recognition model and mechanism, with the aim of reducing the AOI false-call rate and the cost of manual re-inspection. Electronic products are inspected with two of the AOI system's cameras (CAM1 and CAM2), and four CNN models are built. (1) Good/defective recognition CNN: parts are classified as good or defective; the final recognition accuracy reaches 97.99% for CAM1 and 90.15% for CAM2. (2) Defect-category classification CNN: CAM1 distinguishes good, class_A, and class_I, while CAM2 distinguishes good, class_L, and class_N, for six classes in total; the final accuracy reaches 89.97% for CAM1 and 86.67% for CAM2. (3) Multiple-defect classification CNN: the upper and lower specification limits of each inspected feature are used as the basis for classifying and grading defects, and multiple defects are observed on a single product; the final accuracy reaches 82.42% for CAM1 and 76.94% for CAM2. (4) Fine-grained defect recognition CNN: a Faster R-CNN is trained to recognize detailed connector defect images, and the experiments verify that it identifies both the defect category and the location of local defect details, reaching 73.30% accuracy for CAM1 and 81.52% for CAM2. In summary, beyond fine-grained defect classification, this study supports quality-improvement engineering in the electronics industry, helps AOI inspection raise its recognition accuracy, and reduces AOI over-screening.
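
    The record does not reproduce the thesis's implementation. As a rough illustration of the first experiment (the good/defective classifier), the sketch below shows what such a binary CNN might look like in TensorFlow/Keras, which the thesis cites (Abadi et al., 2016). The 128x128 grayscale input size, the layer widths, and the function name build_go_nogo_cnn are assumptions for illustration, not details taken from the thesis.

        # Minimal sketch of a good-vs-defective AOI classifier (illustrative only;
        # input size and architecture are assumed, not taken from the thesis).
        from tensorflow.keras import layers, models

        def build_go_nogo_cnn(input_shape=(128, 128, 1)):
            """Small binary CNN: outputs P(defective) for one AOI camera image."""
            model = models.Sequential([
                layers.Input(shape=input_shape),
                layers.Conv2D(32, 3, padding="same", activation="relu"),
                layers.MaxPooling2D(),
                layers.Conv2D(64, 3, padding="same", activation="relu"),
                layers.MaxPooling2D(),
                layers.Conv2D(128, 3, padding="same", activation="relu"),
                layers.MaxPooling2D(),
                layers.Flatten(),
                layers.Dense(128, activation="relu"),
                layers.Dropout(0.5),                    # dropout regularization (Srivastava et al., 2014)
                layers.Dense(1, activation="sigmoid"),  # 0 = good, 1 = defective
            ])
            model.compile(optimizer="adam",
                          loss="binary_crossentropy",
                          metrics=["accuracy"])
            return model

        if __name__ == "__main__":
            model = build_go_nogo_cnn()
            model.summary()
            # Training would follow the usual Keras pattern, e.g.:
            # model.fit(train_images, train_labels, validation_data=(val_images, val_labels))

    Since the abstract reports separate accuracies per camera, presumably one such model would be trained for each of CAM1 and CAM2; the multi-class experiments (2) and (3) would replace the sigmoid output with a softmax over the defect classes.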

    Table of Contents:
    Abstract
    摘要 (Chinese abstract)
    Table of Contents
    List of Tables
    List of Figures
    1. Background
    2. Literature Review
       2.1 AOI System Architecture and Defect Classification
       2.2 Convolutional Neural Networks
       2.3 Faster Region CNN
    3. Research Method
       3.1 AOI System Architecture
       3.2 Data Types
          (1) Data type (Y): machine judgment
          (2) Data type (Y): manual judgment
       3.3 CNN Models
          (1) Good/defective classification CNN model
          (2) Defect-type classification CNN model
          (3) Multiple-defect classification CNN model
          (4) Faster R-CNN model for multiple-defect classification and defect localization
    4. Discussion
       4.1 Experiment 1: Go/No-Go CNN model results
       4.2 Experiment 2: defect-classification CNN model results
       4.3 Experiment 3: multiple-defect classification CNN model results
       4.4 Experiment 4: multiple-defect classification with Faster R-CNN model results
    5. Conclusion
    References
    Appendix: Output data for experiments in Chapter 4

    Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Kudlur, M. (2016). Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265-283).
    Cainap, C., Qin, S., Huang, W. T., Chung, I. J., Pan, H., Cheng, Y., ... & Gorbunova, V. (2015). Linifanib versus Sorafenib in patients with advanced hepatocellular carcinoma: results of a randomized phase III trial. Journal of Clinical Oncology, 33(2), 172.
    Cheng, J., Wu, J., Leng, C., Wang, Y., & Hu, Q. (2017). Quantized CNN: a unified approach to accelerate and compress convolutional networks. IEEE Transactions on Neural Networks and Learning Systems, (99), 1-14.
    Cootes, T. F., Edwards, G. J., & Taylor, C. J. (2001). Active appearance models. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6), 681-685.
    Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 1, pp. 886-893). IEEE Computer Society.
    Felzenszwalb, P. F., & Huttenlocher, D. P. (2004). Efficient graph-based image segmentation. International Journal of Computer Vision, 59(2), 167-181.
    Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 580-587).
    He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
    Iwahori, Y., Takada, Y., Shiina, T., Adachi, Y., Bhuyan, M. K., & Kijsirikul, B. (2018). Defect Classification of Electronic Board Using Dense SIFT and CNN. Procedia Computer Science, 126, 1673-1682.
    Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
    LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
    Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440).
    Luo, W., Li, Y., Urtasun, R., & Zemel, R. (2016). Understanding the effective receptive field in deep convolutional neural networks. In Advances in neural information processing systems (pp. 4898-4906).
    Mrazova, I., & Kukacka, M. (2008, July). Hybrid convolutional neural networks. In 2008 6th IEEE International Conference on Industrial Informatics (pp. 469-474). IEEE.
    Rau, H., & Wu, C. H. (2005). Automatic optical inspection for detecting defects on printed circuit board inner layers. The International Journal of Advanced Manufacturing Technology, 25(9-10), 940-946.
    Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91-99).
    Ren, S., He, K., Girshick, R., Zhang, X., & Sun, J. (2016). Object detection networks on convolutional feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(7), 1476-1481.
    Roh, B., Yoon, C., Ryu, Y., & Oh, C. (2001). A neural network approach to defect classification on printed circuit boards. Journal of the Japan Society for Precision Engineering, 67(10), 1621-1626.
    Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
    Soukup, D., Bodenhofer, U., Mittendorfer-Holzer, M., & Mayer, K. (2009). Semi-automatic identification of print layers from a sequence of sample images: A case study from banknote print inspection. Image and Vision Computing, 27(8), 989-998.
    Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.
    Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. A. (2017, February). Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence.
    Wang, W. C., Chen, S. L., Chen, L. B., & Chang, W. J. (2016). A machine vision based automatic optical inspection system for measuring drilling quality of printed circuit boards. IEEE Access, 5, 10817-10833.
    Wei, X., Yang, Z., Liu, Y., Wei, D., Jia, L., & Li, Y. (2019). Railway track fastener defect detection based on image processing and deep learning techniques: A comparative study. Engineering Applications of Artificial Intelligence, 80, 66-81.
    Zhang, M., Wu, J., Lin, H., Yuan, P., & Song, Y. (2017). The application of one-class classifier based on CNN in image defect detection. Procedia Computer Science, 114, 341-348.

    Full text available from 2024/07/03 (campus network)
    Full text not authorized for public release (off-campus network)
    Full text not authorized for public release (National Central Library: Taiwan NDLTD system)