| Graduate Student: | 吳昱賢 Yu-Hsien Wu |
| --- | --- |
| Thesis Title: | 空間時序網路於火災偵測之應用 Spatio-Temporal Network with Application to Fire Detection |
| Advisor: | 花凱龍 Kai-Lung Hua |
| Committee Members: | 鍾國亮 Kuo-Liang Chung, 陳駿丞 Jun-Cheng Chen, 楊傳凱 Chuan-Kai Yang, 陳建中 Jiann-Jone Chen |
| Degree: | 碩士 Master |
| Department: | 電資學院 - 資訊工程系 Department of Computer Science and Information Engineering |
| Publication Year: | 2020 |
| Graduation Academic Year: | 108 |
| Language: | English |
| Number of Pages: | 43 |
| Chinese Keywords: | 空間時序網路、火災偵測 |
| Foreign Keywords: | Spatio-Temporal Network, Fire Detection |
Every year, fires cause substantial loss of life and property. To reduce these losses, early warning in the initial stage of a fire is critically important. In this thesis, we propose a two-stage architecture for fire detection. In the first stage, we use a spatio-temporal network to identify candidate regions that may contain fire; within the temporal network, we propose a skip-connection scheme that bridges different dimensions, and we fuse the features from the spatial and temporal networks through a self-attention module. In the second stage, we use DenseNet to determine whether each candidate region is fire. Experimental results show that the proposed method achieves higher accuracy than existing methods.
Fires cause significant loss of life and property every year. In order to reduce these losses, an early warning system is very important. In this thesis, we propose a two-stage architecture for fire detection. First, we employ a spatio-temporal network to identify region proposals that may contain fire. In the temporal stream, we propose a network with skip connections that bridge different dimensions. We also fuse the spatial and temporal features via a self-attention module. Second, we utilize DenseNet to determine whether each identified region proposal contains fire. The experimental results verify that the proposed method outperforms current state-of-the-art methods.
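Based only on the pipeline described in the abstract, the following is a minimal PyTorch sketch of how such a two-stage detector could be wired together. All class names (`SelfAttentionFusion`, `TwoStageFireDetector`), channel widths, layer counts, the stride choices, the choice of DenseNet-121, and the way stage-1 proposals are consumed are illustrative assumptions, not the thesis' actual implementation; only the overall structure (a spatial stream, a temporal stream, self-attention fusion of the two, and a DenseNet second stage) follows the abstract.

```python
# A minimal sketch of the two-stage fire detector described in the abstract.
# All layer sizes, names, and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision


class SelfAttentionFusion(nn.Module):
    """Fuses spatial and temporal feature maps with a self-attention block
    (assumed here to follow a SAGAN-style formulation)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels * 2, channels // 8, 1)
        self.key = nn.Conv2d(channels * 2, channels // 8, 1)
        self.value = nn.Conv2d(channels * 2, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned attention weight

    def forward(self, spatial_feat, temporal_feat):
        x = torch.cat([spatial_feat, temporal_feat], dim=1)   # (B, 2C, H, W)
        b, _, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)          # (B, HW, C/8)
        k = self.key(x).flatten(2)                             # (B, C/8, HW)
        v = self.value(x).flatten(2)                           # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)                    # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, -1, h, w)     # (B, C, H, W)
        return self.gamma * out + spatial_feat + temporal_feat


class TwoStageFireDetector(nn.Module):
    """Stage 1: spatial + temporal streams fused by self-attention -> proposal mask.
    Stage 2: DenseNet classifier that scores the proposed regions."""
    def __init__(self, channels=64):
        super().__init__()
        # Spatial stream: 2-D convolutions over a single frame (placeholder backbone).
        self.spatial = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # Temporal stream: 3-D convolutions over a short clip, collapsed over time.
        self.temporal = nn.Sequential(
            nn.Conv3d(3, channels, 3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU())
        self.fusion = SelfAttentionFusion(channels)
        self.mask_head = nn.Conv2d(channels, 1, 1)  # per-pixel fire likelihood
        # Stage 2: DenseNet-121 re-headed for binary fire / non-fire classification.
        self.classifier = torchvision.models.densenet121()
        self.classifier.classifier = nn.Linear(
            self.classifier.classifier.in_features, 2)

    def forward(self, clip):
        # clip: (B, 3, T, H, W); the last frame serves as the spatial input.
        spatial_feat = self.spatial(clip[:, :, -1])
        temporal_feat = self.temporal(clip).mean(dim=2)        # collapse time axis
        fused = self.fusion(spatial_feat, temporal_feat)
        proposal_mask = torch.sigmoid(self.mask_head(fused))   # stage-1 proposals
        # For simplicity this sketch classifies the whole frame; see note below.
        frame = nn.functional.interpolate(clip[:, :, -1], size=(224, 224))
        scores = self.classifier(frame)                        # stage-2 fire scores
        return proposal_mask, scores


if __name__ == "__main__":
    model = TwoStageFireDetector()
    mask, scores = model(torch.randn(1, 3, 8, 64, 64))
    print(mask.shape, scores.shape)  # (1, 1, 32, 32) and (1, 2)
```

In the complete system, connected regions of the stage-1 mask would presumably be cropped and resized before being classified individually by the DenseNet in stage 2; the sketch above classifies the whole frame only to keep the example self-contained.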