
Graduate Student: 吳賢毅 (Suranggi Gautama)
Thesis Title: 以電腦視覺和深度學習為基礎的觸控面板綜合AOI系統
(Comprehensive AOI System with Computer Vision and Deep Learning for Touch Screen Panel)
Advisor: 林其禹 (Chyi-Yeu Lin)
Committee Members: 林柏廷 (Po-Ting Lin), 李維楨 (Wei-Chen Lee)
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of Publication: 2023
Academic Year of Graduation: 111
Language: English
Number of Pages: 56
Foreign-Language Keywords: touch screen panel
Usage: Views: 122, Downloads: 6

Abstract:

This research focuses on creating a parameter-based comprehensive AOI system that integrates computer vision and deep learning to detect defects on touch screen panels. Deep learning handles classification and localization of the defects, and further analysis is performed with computer vision libraries such as OpenCV.
Two object detection models, one one-stage and one two-stage, are trained and compared in this research to evaluate the trade-off between accuracy and processing time: YOLOv5 is used for one-stage detection and Faster R-CNN for two-stage detection. The results show that YOLOv5 achieves 98.7% accuracy with 0.14 s of processing time per image FOV (field of view), while Faster R-CNN attains a higher accuracy of 99.5% but a slower processing time of 0.21 s per FOV.
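
As a minimal illustration of the one-stage detection step described above, the Python sketch below loads a custom-trained YOLOv5 model through PyTorch Hub and runs inference on a single FOV image. The weight file tsp_defects.pt, the image name fov_001.png, and the confidence threshold are hypothetical placeholders, not artifacts published with this thesis.

    # Sketch of the one-stage detection step (file names and threshold
    # are hypothetical; the thesis does not publish its trained weights).
    import torch

    # Load a custom-trained YOLOv5 model via PyTorch Hub.
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='tsp_defects.pt')
    model.conf = 0.5  # confidence threshold (assumed value)

    # Run inference on one FOV image; results come back as a pandas
    # DataFrame with one row per detected defect.
    results = model('fov_001.png')
    for _, det in results.pandas().xyxy[0].iterrows():
        print(det['name'], round(det['confidence'], 3),
              [det['xmin'], det['ymin'], det['xmax'], det['ymax']])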
Subsequent analysis is conducted to measure each defect and evaluate whether it is still acceptable (OK) or unacceptable (NG). Computer vision algorithms measure every defect detected by the deep learning module using image processing operations from the OpenCV library, including Gaussian blur filtering and adaptive thresholding. Measurements from this algorithm are verified against manual pixel-level measurements of the images and achieve 96.7% accuracy with 0.17 s of processing time per FOV.
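
The following is a minimal sketch of this measurement stage, assuming a grayscale crop around one detected defect. The kernel size, threshold parameters, pixel-to-micron scale, and OK/NG tolerance are illustrative assumptions rather than the tuned values used in the thesis.

    # Sketch of the computer-vision measurement step (all parameter
    # values are illustrative assumptions, not the thesis's settings).
    import cv2

    roi = cv2.imread('defect_roi.png', cv2.IMREAD_GRAYSCALE)

    # Gaussian blur suppresses sensor noise before binarization.
    blurred = cv2.GaussianBlur(roi, (5, 5), 0)

    # Adaptive thresholding separates electrode lines from the
    # background even under uneven illumination.
    binary = cv2.adaptiveThreshold(blurred, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)

    # Approximate the line width as the short side of the minimum-area
    # rectangle around the largest contour.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    (_, _), (w, h), _ = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    width_px = min(w, h)

    PX_TO_UM = 2.5       # hypothetical calibration: microns per pixel
    TOLERANCE_UM = 20.0  # hypothetical acceptance limit
    print('OK' if width_px * PX_TO_UM <= TOLERANCE_UM else 'NG')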
The overall algorithm proposed in this thesis has been tested on several samples, demonstrating that an advanced AOI system combining computer vision and deep learning techniques can effectively detect predefined defects within a short period of time. By using the AOI system, labor cost and processing time are greatly reduced compared with traditional manual inspection.

Table of Contents:

COVER
MASTER'S THESIS RECOMMENDATION FORM
QUALIFICATION FORM
ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1 INTRODUCTION
    1.1 Research Background
    1.2 Objective
    1.3 Scope of Study
CHAPTER 2 LITERATURE REVIEW
    2.1 Touch Screen Panel
        2.1.1 Capacitive Touch Screen
        2.1.2 Resistive Touch Screen
        2.1.3 Electrode Film
        2.1.4 Touch Screen Tail
    2.2 Automated Optical Inspection
        2.2.1 Camera and Lens
        2.2.2 Illumination
        2.2.3 Background Color and Pattern
        2.2.4 Survey of AOI Algorithms
CHAPTER 3 PROPOSED AOI ALGORITHM
    3.1 Object Detection
        3.1.1 Two-stage Object Detection
        3.1.2 One-stage Object Detection
    3.2 Faster R-CNN
        3.2.1 Loss Function
    3.3 YOLO
    3.4 Performance Evaluation
        3.4.1 Precision and Recall
CHAPTER 4 MACHINE DESIGN FOR AOI
    4.1 Object
    4.2 Camera and Lens
    4.3 Illumination
    4.4 Overall System
CHAPTER 5 RESULTS AND DISCUSSION
    5.1 Electrode Pattern Defect Detection Flowchart
    5.2 Classification and Localization of Electrode Film Defects
        5.2.1 Dataset
        5.2.2 Data Augmentation
        5.2.3 Results
    5.3 Image Processing
        5.3.1 Region of Interest
        5.3.2 Binary Filter
        5.3.3 Remove Center Region
        5.3.4 Line Thickness Measurement
        5.3.5 Results
    5.4 Classification and Localization of FPC Tail Defects
        5.4.1 Dent Defect Detection
        5.4.2 Thickness Measurement
    5.5 Optimization
        5.5.1 Multithreading Algorithm
        5.5.2 Results
CHAPTER 6 CONCLUSIONS AND FUTURE WORKS
    6.1 Conclusions
    6.2 Future Works
REFERENCES

