
Graduate Student: Tai-Li Shen (沈泰利)
Thesis Title: Clustering and On-The-Fly Classification for Defect Detection (基於聚類分析與即時分類之瑕疵檢測)
Advisor: Gee-Sern Hsu (徐繼聖)
Committee Members: Liang-Kuang Chen (陳亮光), I-Tsyuen Chang (張以全)
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of Publication: 2021
Academic Year of Graduation: 109 (2020-2021)
Language: Chinese
Pages: 87
Chinese Keywords: 聚類分析 (clustering analysis), 瑕疵檢測 (defect detection)
Foreign Keywords: Clustering, Defect Detection
  • We propose a method for recognizing unseen defects that combines clustering analysis of normal samples with on-the-fly classification. The method consists of two phases: a learning phase and a testing phase. In the learning phase, given a set of normal samples, we extract their features with a pretrained ResNet-101 network, which demonstrated superb performance in the ImageNet 2015 competition. The extracted features are then grouped by the K-means clustering algorithm, and a threshold is initialized for each cluster to judge whether an image contains a possible defect. In the testing phase, once the thresholds have collected a sufficient number of samples with distinctive defect features, those samples form a defect candidate cluster; as more samples are tested, the thresholds of the normal clusters and the defect candidate cluster are updated, so that performance improves over time. We verified the performance of the method on the MVTec AD dataset.


    We propose an approach that combines normal feature clustering and on-the-fly classification for unseen pattern recognition in defect detection. Our approach is divided into two phases, a learning phase and a testing phase. In the learning phase, we first extract the image features of normal data using a pretrained ResNet-101, which demonstrated superb performance in the ImageNet 2015 competition. The extracted features are clustered by K-means, and the thresholds for determining possible defects are initially postulated. In the testing phase, the initially postulated thresholds are adjusted when a sufficient number of testing features show patterns distinct from the normal clusters. These distinctive testing features form defect candidate clusters. As more testing features are processed, the thresholds and the defect candidate clusters are adjusted and verified on the fly, so that the performance for detecting unseen defect patterns improves over time. We have verified the performance of our approach on the MVTec AD dataset.
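    The two-phase pipeline above can be sketched in a few lines. The sketch below is illustrative only: random vectors stand in for ResNet-101 features, K-means is implemented directly (Lloyd's algorithm), and the threshold rule (mean plus three standard deviations of within-cluster distances) and the pool size MIN_POOL are assumptions, not the thesis's actual τ_k0 initialization or update schedule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans(X, k, iters=50):
        """Plain K-means (Lloyd's algorithm) on the rows of X."""
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return centers, labels

    # Learning phase: random 8-D vectors stand in for ResNet-101 features
    # of normal samples (the real features would be much higher-dimensional).
    normal = rng.normal(0.0, 1.0, (300, 8))
    centers, labels = kmeans(normal, k=3)

    def thresholds(X, centers, labels):
        """Per-cluster defect threshold: mean + 3*sigma of member distances
        to the cluster center (an assumed stand-in for tau_k0)."""
        taus = []
        for j in range(len(centers)):
            dj = np.linalg.norm(X[labels == j] - centers[j], axis=1)
            taus.append(dj.mean() + 3.0 * dj.std())
        return np.array(taus)

    taus = thresholds(normal, centers, labels)

    def classify(x, centers, taus):
        """Return (nearest normal cluster, defect flag) for one test feature."""
        d = np.linalg.norm(centers - x, axis=1)
        j = d.argmin()
        return j, d[j] > taus[j]

    # Testing phase: features flagged as defects accumulate in a pool;
    # once the pool is large enough it becomes a defect candidate cluster.
    pool, MIN_POOL = [], 10
    test = np.vstack([rng.normal(0.0, 1.0, (50, 8)),   # normal-like features
                      rng.normal(6.0, 1.0, (20, 8))])  # far-off "defect" features
    for x in test:
        j, is_defect = classify(x, centers, taus)
        if is_defect:
            pool.append(x)
    defect_center = np.mean(pool, axis=0) if len(pool) >= MIN_POOL else None
    ```

    In a full on-the-fly version, `defect_center` would join `centers` as a defect candidate cluster, and both it and the per-cluster thresholds would keep being refined as further test samples arrive.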

    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    List of Figures
    Chapter 1  Introduction
      1.1 Background and Motivation
      1.2 Method Overview
      1.3 Contributions
      1.4 Thesis Organization
    Chapter 2  Literature Review
      2.1 Unsupervised Deep Embedded Clustering
      2.2 Improved Deep Embedded Clustering with Local Structure Preservation (IDEC)
      2.3 Clustering and Unsupervised Anomaly Detection with l2 Normalized Deep Auto-Encoder Representations (CUAD)
    Chapter 3  Proposed Method
      3.1 Data Feature Analysis
      3.2 Clustering and On-The-Fly Classification (COTFC)
        3.2.1 ResNet-101 Feature Extractor
        3.2.2 Training Phase
        3.2.3 Testing Phase
      3.3 Discussion on the Number of Clusters
    Chapter 4  Experimental Setup and Analysis
      4.1 Experimental Setup
        4.1.1 MVTec AD Dataset
        4.1.2 Data Partitioning
        4.1.3 K-means Clustering Results
        4.1.4 Initial Defect Threshold τ_k0
        4.1.5 Defect Candidate Clusters
        4.1.6 Accumulated Sample Count of Defect Candidate Clusters
      4.2 Experimental Results and Analysis
        4.2.1 On-the-Fly Update Rate and Ablation of On-the-Fly Classification
        4.2.2 Cluster Visualization and Discussion of Results
          4.2.2.1 Cluster Visualization
          4.2.2.2 Discussion
    Chapter 5  Conclusions and Future Work
    Chapter 6  References
    Chapter 7  Appendix
      7.1 Line Charts of the Clustering Objective Function
      7.2 MVTec AD Categories and Corresponding Defects

    1. P. Bergmann, M. Fauser, D. Sattlegger and C. Steger, "MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9584-9592.

    2. J. Xie, R. Girshick and A. Farhadi, "Unsupervised deep embedding for clustering analysis," Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016, pp. 478-487.

    3. X. Guo, L. Gao, X. Liu and J. Yin, "Improved deep embedded clustering with local structure preservation," Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), 2017, pp. 1753-1759.

    4. Ç. Aytekin, X. Ni, F. Cricri and E. B. Aksu, "Clustering and Unsupervised Anomaly Detection with l2 Normalized Deep Auto-Encoder Representations," 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1-6.

    5. J. Deng, W. Dong, R. Socher, L. Li, Kai Li and Li Fei-Fei, "ImageNet: A large-scale hierarchical image database," 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 248-255.

    6. K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image Recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.

    7. S. P. Lloyd, "Least squares quantization in PCM," Technical Report RR-5497, Bell Laboratories, September 1957; published in IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129-137, 1982.

    8. G. Hsu, J. Chen and Y. Chung, "Application-Oriented License Plate Recognition," in IEEE Transactions on Vehicular Technology, vol. 62, no. 2, pp. 552-561, Feb. 2013.

    9. G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, "Densely Connected Convolutional Networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261-2269.

    10. J. Hu, L. Shen and G. Sun, "Squeeze-and-Excitation Networks," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7132-7141.

    11. M. Tan and Q. V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," Proceedings of the 36th International Conference on Machine Learning (ICML), 2019.

    12. T. W. Ridler and S. Calvard, "Picture thresholding using an iterative selection method," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-8, no. 8, pp. 630-632, 1978.

    13. D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, May 2002, doi: 10.1109/34.1000236.

    14. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861, 2017.

    15. X. Zhang, X. Zhou, M. Lin and J. Sun, "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6848-6856.
