Graduate Student: 董紹偉 (Shao-Wei Dong)
Thesis Title: 物聯網邊緣攝影機之低光影像動態去眩光及細節恢復 (Low-light Image Enhancement with Dynamically-activated De-glaring and Details Recovery for IoT-enabled Edge Cameras)
Advisor: 陸敬互 (Ching-Hu Lu)
Committee Members: 陸敬互 (Ching-Hu Lu), 蘇順豐 (Shun-Feng Su), 黃正民 (Cheng-Min Huang), 許嘉裕 (Chia-Yu Hsu), 鍾聖倫 (Sheng-Luen Chung)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2022
Graduation Academic Year: 110
Language: Chinese
Pages: 87
Chinese Keywords: 低光影像去眩光, 輕量化深度網路, 動態影像去眩光, 邊緣運算, 物聯網
English Keywords: low-light image de-glaring, lightweight neural network, dynamic de-glaring, edge computing, Internet of Things
Low-light images often severely degrade the stability of computer-vision services. With the development of the Internet of Things (IoT), cameras that combine artificial intelligence with edge computing (hereafter, edge cameras) can improve the robustness of image-enhancement-based IoT services. In recent years, deep neural networks have been applied to low-light image enhancement; among these approaches, unpaired learning requires no paired training data, making it more flexible than paired learning and free of the drawback that synthetically generated data differ physically from real images. However, existing unpaired-learning studies on low-light image enhancement do not consider glare in low-light images, which can severely degrade image quality. To improve image quality, this study is the first to propose additional enhancement modules that can be attached to existing low-light enhancement methods: a "lightweight low-light image de-glaring network" that removes glare from low-light images, and a "low-light image detail recovery network" that sharpens the edge details of de-glared images to further raise the quality of the generated images. Experimental results show that adding the proposed lightweight de-glaring network on top of existing methods lowers the Natural Image Quality Evaluator (NIQE) score by 7.36% on average, the Perception-based Image Quality Evaluator (PIQE) score by 9.76%, and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score by 13.31%. Adding the detail recovery network further improves the de-glared images, lowering NIQE by another 1.2%, PIQE by 3.64%, and BRISQUE by 3.34%. Furthermore, because the proposed networks are add-on modules, this study also introduces a "dynamic image de-glaring detection model" that judges whether glare is present in a low-light image, so that the edge camera's computational resources are used efficiently and unnecessary enhancement is avoided. Experimental results show that, after integrating this detection model with the networks above, when the proportion of glared low-light images is 0.4 the average running time drops by 4% and FPS rises by 4.17%; when the proportion is 0.2 (close to real application scenarios), the average running time drops by 24.62% and FPS rises by 32.66%. These results show that dynamically assessing the image state can improve the runtime efficiency of edge cameras in practice.
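The runtime savings from gating the extra modules can be illustrated with a small cost model: the de-glaring and detail-recovery stages are paid only on the fraction of frames the detector flags as glared. This is a minimal sketch with hypothetical per-stage timings, not measurements from this work:

```python
def avg_frame_time(glare_ratio, t_detect, t_deglare, t_recover, t_enhance):
    """Average per-frame time when de-glaring and detail recovery run
    only on the fraction of frames the detector flags as glared."""
    return t_detect + glare_ratio * (t_deglare + t_recover) + t_enhance

def always_on_time(t_deglare, t_recover, t_enhance):
    """Baseline: every frame goes through de-glaring and recovery."""
    return t_deglare + t_recover + t_enhance

# Hypothetical per-stage costs in milliseconds (illustrative only).
t_detect, t_deglare, t_recover, t_enhance = 2.0, 20.0, 10.0, 30.0

base = always_on_time(t_deglare, t_recover, t_enhance)
for ratio in (0.4, 0.2):
    gated = avg_frame_time(ratio, t_detect, t_deglare, t_recover, t_enhance)
    saving = (base - gated) / base * 100
    print(f"glare ratio {ratio}: {gated:.1f} ms vs {base:.1f} ms "
          f"({saving:.1f}% less time, {base / gated:.2f}x FPS)")
```

The smaller the glare ratio, the larger the saving, which is consistent with the trend reported above (greater gains at a ratio of 0.2 than at 0.4).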
Low-light images often seriously affect the stability of a computer-vision system. With the development of the Internet of Things (IoT), a camera leveraging artificial intelligence and edge computing (hereafter referred to as an edge camera) can enhance the robustness of an IoT service. In recent years, deep neural networks have been applied to low-light image enhancement; among these approaches, unpaired learning does not require paired training data and is therefore more flexible than paired learning. However, existing studies on unpaired-learning low-light image enhancement do not consider glare in low-light images, which can lead to significant degradation of image quality. To improve image quality, our study is the first to propose additional enhancement modules that can be applied on top of existing methods. First, the proposed "lightweight low-light image de-glaring network" removes glare from low-light images. Next, the proposed "low-light image detail recovery network" enhances the boundary details of the de-glared low-light images to further improve image quality. Experimental results show that our lightweight low-light image de-glaring network reduces NIQE by 7.36%, PIQE by 9.76%, and BRISQUE by 13.31%. Our low-light image detail recovery network further improves the quality of the de-glared images, reducing NIQE by another 1.2%, PIQE by 3.64%, and BRISQUE by 3.34%. In addition, since our method is implemented as an additional enhancement module, in order to use the computational resources of an edge camera effectively and avoid unnecessary image enhancement, we also propose "dynamic de-glaring", which first assesses the quality of input images to determine whether de-glaring should be undertaken. Experimental results show that running time is reduced by 24.62% and FPS is improved by 32.66% at a glared low-light image ratio of 0.2 (close to the real-world application scenario).
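The dynamic de-glaring flow described above can be sketched as a simple gated pipeline: a detector decides per frame whether the de-glaring and detail-recovery stages run before the base low-light enhancer. All stage stubs here (`detect_glare`, `deglare`, `recover_details`, `enhance_low_light`) are hypothetical placeholders, not the thesis implementation:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: list             # stand-in for image data
    has_glare: bool = False  # used only by the stub detector below

def detect_glare(frame: Frame) -> bool:
    # Stub: a real detector would classify the low-light image itself.
    return frame.has_glare

def deglare(frame: Frame) -> Frame:
    # Stub for the lightweight de-glaring network.
    return Frame(frame.pixels, has_glare=False)

def recover_details(frame: Frame) -> Frame:
    # Stub for the detail recovery network.
    return frame

def enhance_low_light(frame: Frame) -> Frame:
    # Stub for the base low-light enhancer.
    return frame

def process(frame: Frame) -> Frame:
    # Dynamic de-glaring: run the extra stages only when glare is
    # detected, saving edge-camera compute on glare-free frames.
    if detect_glare(frame):
        frame = deglare(frame)
        frame = recover_details(frame)
    return enhance_low_light(frame)

out = process(Frame(pixels=[0.1, 0.2], has_glare=True))
print(out.has_glare)  # → False
```

The design choice is that the detector is far cheaper than the de-glaring and recovery networks, so skipping them on glare-free frames yields the runtime and FPS gains reported in the experiments.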