
Student: Ji-Xuan Li (李紀萱)
Thesis title: 基於視覺應用於CNC工具機之鐵屑定位與安全警示系統
Vision-Based Metal Chips Positioning and Safety Warning System for CNC Machine Tools
Advisor: Shun-Feng Su (蘇順豐)
Committee members: Shun-Feng Su (蘇順豐), Yo-Ping Huang (黃有評), Mei-Yung Chen (陳美勇), Nai-Jian Wang (王乃堅), Shang-Chih Lin (林上智)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of publication: 2023
Academic year of graduation: 111 (2022-2023)
Language: English
Pages: 70
Keywords: CNC machine tool, image recognition, metal chip positioning, safety warning system, grid-based classification, image processing



This study applies a vision-based approach to CNC machine tools, with two objectives: metal chip recognition and a safety warning system.

For metal chip recognition, a Residual Neural Network (ResNet) serves as the image-recognition model. A grid-based classification method divides the machine image into smaller input patches; because chips are scattered throughout the machine interior, this reduces the time needed for data labeling. Accuracy on the aluminum-chip test set reached 90.33% but dropped to 84.00% when testing on an unfamiliar machine. We therefore analyze how training data should be allocated when the model faces a new machine model, that is, how much labeling the new machine requires. Across six datasets of different sizes, training on the Base dataset plus 2% of newly labeled data from the new machine raised performance on the new machine's test set to 89.83% accuracy and an 89.98% F1 score. This provides a reliable guideline for data allocation before future chip-recognition work.

For the safety warning system, the purpose is to ensure operator safety during machining. We place one ROI at the work area inside the machine and one at the safety door to detect machine movement and human intrusion. The system has three warning states (SAFETY, WARNING, DANGER) and five warning levels. We use the Background Subtraction Method and the Frame Difference Method to detect changes in the interior of the machine, and propose two methods for human detection, the Variable Calculation Method and the Dynamically Updated Background Image Method; combining the two achieves a detection accuracy of 99.06%. Overall, the system reaches 98.03% detection accuracy and sensitivity as well as 98.23% precision.
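The abstract's grid-based classification step, cutting the full machine image into small patches that are each labeled and classified rather than annotated with bounding boxes, can be sketched as below. This is an illustrative reconstruction, not the thesis implementation: the tile size, the edge-handling policy (edge remainders are simply dropped here), and the frame dimensions are all assumptions.

```python
import numpy as np

def split_into_grid(image: np.ndarray, tile: int):
    """Split an H x W (x C) image into non-overlapping tile x tile patches.

    Edge regions smaller than `tile` are dropped -- a simplifying
    assumption; the abstract does not specify edge handling.
    Returns (row, col, patch) triples; each patch would be fed to the
    ResNet classifier as one input.
    """
    h, w = image.shape[:2]
    patches = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patches.append((r, c, image[r:r + tile, c:c + tile]))
    return patches

# A 480 x 640 frame cut into 160 x 160 patches yields a 3 x 4 grid.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
grid = split_into_grid(frame, 160)
```

Classifying small patches means an annotator only assigns one label per tile (chip / no chip), which is why the abstract reports a reduction in labeling time compared with pixel- or box-level annotation.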
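Both change-detection techniques named in the abstract reduce to thresholding a per-pixel difference over an ROI: against a stored background image (background subtraction) or against the previous frame (frame difference). A minimal NumPy sketch, with a hypothetical threshold of 30 grey levels (the thesis value is not given in the abstract):

```python
import numpy as np

def changed_ratio(frame: np.ndarray, reference: np.ndarray,
                  threshold: int = 30) -> float:
    """Fraction of ROI pixels whose absolute grayscale difference from a
    reference image exceeds `threshold`.

    With a stored background as the reference this is background
    subtraction; with the previous frame as the reference it is the
    frame-difference method.  The 30-level threshold is illustrative.
    """
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return float((diff > threshold).mean())

background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[:2, :] = 255                          # top half of the ROI changes
ratio = changed_ratio(frame, background)    # -> 0.5
```

A system would then compare `ratio` against a per-ROI trigger level to decide whether the machine is moving (ROI at the work area) or someone has crossed the door line (ROI at the safety door).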
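The two ROI detections feed the three warning states named in the abstract. One plausible mapping, shown purely as a hypothetical sketch (the thesis further refines these states into five warning levels, which this omits):

```python
def warning_state(machine_moving: bool, human_intruding: bool) -> str:
    """Map the two ROI detections to the abstract's three warning states.

    The mapping is a hypothetical illustration: danger when a person is
    detected while the machine is in motion, warning when either
    condition holds alone, safety otherwise.
    """
    if machine_moving and human_intruding:
        return "DANGER"
    if machine_moving or human_intruding:
        return "WARNING"
    return "SAFETY"
```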

Table of Contents
Chinese Abstract
Abstract
Acknowledgements
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Background
  1.2 Motivations
  1.3 Contributions
  1.4 Thesis Organization
Chapter 2 Related Work
Chapter 3 Metal Chips Recognition
  3.1 Methodology
    3.1.1 Problem setting
    3.1.2 Grid-based classification and recognition
    3.1.3 Network architecture
    3.1.4 Loss function
    3.1.5 K-Fold Cross Validation
  3.2 Experiments
    3.2.1 Datasets
    3.2.2 Evaluation Metric
    3.2.3 Implementation Details
    3.2.4 Results and analysis
Chapter 4 Safety Warning System
  4.1 System Overview
  4.2 Methodology
    4.2.1 Select ROI
    4.2.2 Define three warning states
    4.2.3 Use Background Subtraction Method on ROI 1
    4.2.4 Use Frame Difference Method on ROI 2
    4.2.5 Use two methods to detect humans on ROI 1
    4.2.6 Define five warning levels
  4.3 Experiments
    4.3.1 Background subtraction result of ROI 1
    4.3.2 Frame difference result of ROI 2
    4.3.3 Results of the two human detection methods
    4.3.4 Experimental Results and Evaluation Metrics
Chapter 5 Conclusions and Future Work
  5.1 Conclusions
  5.2 Future work
References


Full-text release date: 2025/02/13 (campus network)
Full-text release date: 2028/02/13 (off-campus network)
Full-text release date: 2028/02/13 (National Central Library: Taiwan NDLTD system)