Graduate Student: 楊景翔 (Ching-Hsiang Yang)
Thesis Title: 基於聲音訊號之離線刀具磨耗檢測方法開發 (Development of Offline Tool Wear Detection Method Based on Audio Signals)
Advisors: 李維楨 (Wei-chen Lee), 王冬 (Dong Wang)
Oral Defense Committee: 劉孟昆, 郭俊良, Yamamoto Keisuke
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of Publication: 2023
Graduation Academic Year: 111
Language: English
Number of Pages: 72
Keywords: offline, tool wear detection, deep learning, audio classification
Tool wear is a critical issue for the manufacturing industry: the degree of tool wear directly affects the precision of machined parts and therefore product quality. Although many researchers have proposed tool condition monitoring (TCM) methods based on multiple sensors, deploying many sensors in the machining environment is complicated, so few machine shops are willing to implement these methods. To overcome this problem, this study proposes an offline audio-signal tool wear detection method based on a deep learning classification model. First, a tool wear experiment was conducted, and tools at different wear levels were defined using the flank wear condition together with the ISO standard. Considering the instability of collecting audio data manually, a mechanism was designed to collect the audio signals. The Fast Fourier Transform (FFT) was then used to convert the audio from the time domain to the frequency domain to observe the frequency changes of tools at different wear levels, followed by normalization and frequency range extraction to ensure the comparability of the signals and to extract distinct classification features. In addition, to improve the robustness and generalization ability of the model, data augmentation was used to generate a large amount of training data in a short time. Finally, a convolutional neural network (CNN) was used to build the offline tool wear classification model, enabling it to classify tools at three wear levels (new, worn, damaged). The results show that the CNN model achieved an impressive accuracy of 99.32% in classifying the test data; the proposed preprocessing also clearly improved the model's accuracy, from 87.50% to 99.32%; and validation with randomly selected tools confirmed its predictability, with an accuracy of 84.44%. This study was also compared with previous research, demonstrating that the proposed method outperforms earlier work. Taken together, these results show that offline audio signals can be used for tool wear detection, providing machine shops with a new solution for detecting tool wear levels.
Tool wear is a crucial factor in the manufacturing industry that impacts efficiency, precision, and cost. Despite various studies suggesting the implementation of a tool condition monitoring (TCM) system with various sensors, few machine shops have installed such a system due to the high risk involved in placing high-priced sensors within the machining environment. To address this challenge, this study proposes a technique to detect tool wear based on a deep learning classification model for offline audio signals recorded in a controllable environment. First, we conducted tool wear experiments and used tool flank wear and the ISO standard to distinguish different tool wear levels. Considering the instability of manually collecting audio data, an audio collection mechanism was designed to collect the offline audio signals. Next, the collected audio signals were transformed by the Fast Fourier Transform (FFT) to observe variations in the frequency distribution, followed by normalization and frequency range extraction. To increase the generalization ability of the model, data augmentation techniques were used to generate a large amount of training data in a short time. Finally, a convolutional neural network (CNN) was used to build the offline tool wear classification model, which distinguishes three wear levels (new, worn, damaged). The results showed that the CNN model achieves an impressive 99.32% accuracy in classifying test data. The proposed preprocessing significantly enhances the model's accuracy, from 87.50% to 99.32%, and validation using randomly picked tools confirmed its predictability with an accuracy of 84.44%. The proposed method was also compared with previous studies and shown to outperform them. In conclusion, this study demonstrates that offline audio signals can be used for tool wear detection, providing a solution for machine shops to detect tool wear levels without placing expensive sensors in harsh machining environments.
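The preprocessing chain described in the abstract (FFT, then normalization, then frequency range extraction) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the sampling rate, the frequency band, and the min-max normalization scheme are assumptions chosen for the example.

```python
# Sketch of the abstract's preprocessing chain: FFT -> normalization
# -> frequency range extraction. Sampling rate (44.1 kHz), band limits,
# and min-max scaling are illustrative assumptions, not thesis values.
import numpy as np

def preprocess(signal, fs=44100, band=(1000, 10000)):
    """Convert a 1-D audio clip to a normalized magnitude spectrum
    restricted to a frequency band of interest."""
    spectrum = np.abs(np.fft.rfft(signal))           # time domain -> frequency domain
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs) # bin center frequencies in Hz
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)             # frequency range extraction
    spectrum = spectrum[mask]
    # Min-max normalization so clips recorded at different levels are comparable
    rng = spectrum.max() - spectrum.min()
    return (spectrum - spectrum.min()) / rng if rng > 0 else np.zeros_like(spectrum)

# Example: a 1-second synthetic clip containing a 3 kHz tone
t = np.linspace(0, 1, 44100, endpoint=False)
features = preprocess(np.sin(2 * np.pi * 3000 * t))
```

With a 1-second clip at 44.1 kHz the FFT bins are spaced 1 Hz apart, so the extracted band covers bins 1000-10000 Hz and the 3 kHz tone appears as a sharp peak inside it; the resulting vector is what a classifier would consume.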
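The abstract's data augmentation step, generating many training clips from one recording, might look like the sketch below. The specific transforms (circular time shift, additive noise, random gain) are common audio augmentation choices and are assumptions here, not necessarily the ones used in the thesis.

```python
# Sketch of audio data augmentation: produce several perturbed variants
# of one clip. The transforms chosen (shift, noise, gain) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def augment(clip, n_copies=5, noise_level=0.01):
    """Return n_copies perturbed variants of a 1-D audio clip."""
    out = []
    for _ in range(n_copies):
        shifted = np.roll(clip, rng.integers(0, len(clip)))              # random circular shift
        noisy = shifted + noise_level * rng.standard_normal(len(clip))   # additive noise
        scaled = noisy * rng.uniform(0.8, 1.2)                           # random gain
        out.append(scaled.astype(np.float32))
    return out

clips = augment(np.sin(2 * np.pi * np.arange(1000) / 50))
```

Applying such transforms before the FFT-based preprocessing multiplies the size of the training set cheaply, which is what allows a CNN to be trained on recordings collected in a short time.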
[1] ISO 8688-2:1989, Tool life testing in milling - Part 2: End milling. International Organization for Standardization, 1989.
[2] M.-K. Liu, Y.-H. Tseng, and M.-Q. Tran, "Tool wear monitoring and prediction based on sound
signal," The International Journal of Advanced Manufacturing Technology, vol. 103, no. 9, pp.
3361-3373, 2019.
[3] L. Móricz, Z. J. Viharos, A. Németh, A. Szépligeti, and M. Büki, "Off-line geometrical and
microscopic & on-line vibration based cutting tool wear analysis for micro-milling of ceramics,"
Measurement, vol. 163, p. 108025, 2020.
[4] R. Xie and D. Wu, "Optimal transport-based transfer learning for smart manufacturing: Tool wear
prediction using out-of-domain data," Manufacturing Letters, vol. 29, pp. 104-107, 2021.
[5] T. Benkedjouh, N. Zerhouni, and S. Rechak, "Tool wear condition monitoring based on
continuous wavelet transform and blind source separation," The International Journal of
Advanced Manufacturing Technology, vol. 97, no. 9, pp. 3311-3323, 2018.
[6] H. Zheng and J. Lin, "A deep learning approach for high speed machining tool wear monitoring,"
in 2019 3rd International Conference on Robotics and Automation Sciences (ICRAS), 2019:
IEEE, pp. 63-68.
[7] S. Bagri, A. Manwar, A. Varghese, S. Mujumdar, and S. S. Joshi, "Tool wear and remaining useful
life prediction in micro-milling along complex tool paths using neural networks," Journal of
Manufacturing Processes, vol. 71, pp. 679-698, 2021.
[8] C. Madhusudana, H. Kumar, and S. Narendranath, "Face milling tool condition monitoring using
sound signal," International Journal of System Assurance Engineering and Management, vol. 8,
no. 2, pp. 1643-1653, 2017.
[9] Z. Li, X. Liu, A. Incecik, M. K. Gupta, G. M. Królczyk, and P. Gardoni, "A novel ensemble deep
learning model for cutting tool wear monitoring using audio sensors," Journal of Manufacturing
Processes, vol. 79, pp. 233-249, 2022.
[10] A. M. Alzahrani, R. Liu, and J. R. Kolodziej, "Acoustic assessment of an end mill for analysis
of tool wear," in Annual Conference of the PHM Society, September 24-27, 2018.
[11] A. Kothuru, S. P. Nooka, and R. Liu, "Audio-based tool condition monitoring in milling of the
workpiece material with the hardness variation using support vector machines and convolutional
neural networks," Journal of Manufacturing Science and Engineering, vol. 140, no. 11, p. 111006,
2018.
[12] Z. Li, R. Liu, and D. Wu, "Data-driven smart manufacturing: Tool wear monitoring with audio
signals and machine learning," Journal of Manufacturing Processes, vol. 48, pp. 66-76, 2019.
[13] F. J. Alonso and D. R. Salgado, "Application of singular spectrum analysis to tool wear detection
using sound signals," vol. 219, no. 9, pp. 703-710, 2005.
[14] K. Palanisamy, D. Singhania, and A. Yao, "Rethinking CNN models for audio classification,"
arXiv preprint arXiv:2007.11154, 2020.
[15] M. A. Imtiaz and G. Raja, "Isolated word automatic speech recognition (ASR) system using
MFCC, DTW & KNN," in 2016 Asia pacific conference on multimedia and broadcasting
(APMediaCast), 2016: IEEE, pp. 106-110.
[16] S. Hershey et al., "CNN architectures for large-scale audio classification," in 2017 IEEE
international conference on acoustics, speech and signal processing (ICASSP), 2017: IEEE, pp.
131-135.
[17] F. Demir, M. Turkoglu, M. Aslan, and A. Sengur, "A new pyramidal concatenated CNN approach
for environmental sound classification," Applied Acoustics, vol. 170, p. 107520, 2020.
[18] S.-Y. Jung, C.-H. Liao, Y.-S. Wu, S.-M. Yuan, and C.-T. Sun, "Efficiently classifying lung sounds
through depthwise separable CNN models with fused STFT and MFCC features," Diagnostics,
vol. 11, no. 4, p. 732, 2021.
[19] N. Peng, A. Chen, G. Zhou, W. Chen, W. Zhang, J. Liu, and F. Ding, "Environment sound
classification based on visual multi-feature fusion and GRU-AWS," IEEE Access, vol. 8, pp.
191100-191114, 2020.
[20] R. V. Sharan and T. J. Moir, "Acoustic event recognition using cochleagram image and
convolutional neural networks," Applied Acoustics, vol. 148, pp. 62-66, 2019.
[21] J. Lee, J. Park, K. L. Kim, and J. Nam, "SampleCNN: End-to-end deep convolutional neural
networks using very small filters for music classification," Applied Sciences, vol. 8, no. 1, p. 150,
2018.
[22] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, no. 8, pp. 861-874, 2006.