Graduate Student: Chia-Hsuan Chang (張佳軒)
Thesis Title: Multi-Exposure Image Fusion Based on Convolutional Neural Network to Enhance Single Image Contrast (基於卷積神經網路之多曝光影像融合以增強單一影像對比度)
Advisor: Sendren Shen-Dong Xu (徐勝均)
Committee Members: Cheng-Hao Ko (柯正浩), Jin-Shyan Lee (李俊賢)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Year of Publication: 2020
Academic Year of Graduation: 108 (ROC calendar; 2019-2020)
Language: Chinese
Number of Pages: 72
Chinese Keywords: 影像對比度增強、Retinex演算法、多重曝光影像融合、卷積神經網路
Foreign Keywords: image contrast enhancement, Retinex algorithm, multi-exposure image fusion, convolutional neural network (CNN)

Table of Contents
  Acknowledgments
  Abstract (in Chinese)
  Abstract (in English)
  Contents
  List of Figures
  List of Tables
  Chapter 1: Introduction
    1.1 Research Background and Motivation
    1.2 Research Objectives
    1.3 Methods and Contributions
    1.4 Thesis Organization
  Chapter 2: Literature Review on Image Contrast
    2.1 Single-Image Contrast Enhancement
      2.1.1 Histogram Equalization
      2.1.2 Retinex Algorithm
    2.2 Image Fusion for Single-Image Contrast Enhancement
      2.2.1 Stack-Based High Dynamic Range Imaging
      2.2.2 Multi-Exposure Image Fusion
  Chapter 3: Literature Review on Deep-Learning-Based Image Contrast Enhancement
    3.1 Introduction to Artificial Neural Networks
      3.1.1 Operation of a Single Neuron
      3.1.2 Activation Functions
        3.1.2.1 Sigmoid Function
        3.1.2.2 tanh Function
        3.1.2.3 ReLU Function
        3.1.2.4 Leaky ReLU Function
    3.2 Introduction to Convolutional Neural Networks
      3.2.1 Convolutional Layer
      3.2.2 Pooling Layer
      3.2.3 Fully Connected Layer
    3.3 Literature Review on Convolutional Neural Networks
      3.3.1 CNN-Based Image Enhancement
      3.3.2 CNN-Based Image Fusion
  Chapter 4: System Method and Architecture
    4.1 System Flow and Architecture
    4.2 Training Data Sources
    4.3 Decomposition Network
    4.4 Adjustment Networks
      4.4.1 Illumination Component Adjustment Network
      4.4.2 Reflectance Component Adjustment Network
  Chapter 5: Experimental Design and Results
    5.1 Experimental Environment
    5.2 Image Quality Metrics
      5.2.1 Peak Signal-to-Noise Ratio (PSNR)
      5.2.2 Structural Similarity Index (SSIM)
    5.3 Comparison of Experimental Results
      5.3.1 Exposure Sequences of Training-Set Images
      5.3.2 Images Outside the Training Set
  Chapter 6: Conclusions and Future Work
    6.1 Conclusions
    6.2 Future Work

    Full text available from 2025/08/13 (campus network)
    Full text available from 2025/08/13 (off-campus network)
    Full text available from 2025/08/13 (National Central Library: Taiwan NDLTD system)