
Student: 劉晏辰 (Yen-Chen Liu)
Thesis Title: 使用深度學習重構弱光成像之視覺內容的研究 (A Study of Deep Learning for Visual Content Reconstruction with Low-light Imaging)
Advisor: 吳怡樂 (Yi-Le Wu)
Committee Members: 陳建中 (Jian-Zhong Chen), 唐政元 (Zheng-Yuan Tang), 閻立剛 (Li-Gang Yan)
Degree: Master's
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Graduation Academic Year: 109
Language: Chinese
Number of Pages: 36
Chinese Keywords: 弱光成像、重構弱光成像
English Keywords: Low-light Imaging, Reconstruction with Low-light Imaging
Imaging in low light is challenging because of the low photon count and low SNR. Short-exposure images suffer from noise, while long exposures introduce blur and are usually impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited under extreme conditions such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, prior work proposed a pipeline for low-light images built on an end-to-end trained fully convolutional network, and introduced a dataset of raw short-exposure low-light images with corresponding long-exposure reference images. That network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which performs poorly on such data. In this work, we propose a framework that incorporates residual modules into the model, improving the efficiency of the existing model. In our experiments, we also compare several different network architectures and analyze their respective advantages and disadvantages.
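
The modification described above can be illustrated with a short sketch. The PyTorch block below is illustrative only and is not the thesis code; the channel sizes, the LeakyReLU activation, and the 4-channel packed-Bayer input are assumptions. It shows a residual convolution block of the kind that could replace the plain convolution blocks in a U-Net-style encoder-decoder that operates directly on raw sensor data.

import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut (hypothetical example)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        # 1x1 projection keeps the shortcut valid when channel counts differ.
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

# Example: a first encoder stage fed with a Bayer raw frame packed into
# four planes (R, G1, G2, B), as in raw-domain low-light pipelines.
block = ResidualConvBlock(in_ch=4, out_ch=32)
features = block(torch.randn(1, 4, 256, 256))  # -> shape (1, 32, 256, 256)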

Table of Contents:
論文摘要 (Abstract in Chinese)
Abstract
LIST OF FIGURES
LIST OF TABLES
Chapter 1. Introduction
Chapter 2. Related Work
  2.1 Low-light Image Enhancement
  2.2 Image Denoising
  2.3 Noisy Datasets
  2.4 U-Net Convolutional Network
Chapter 3. Method
  3.1 Baseline Architecture
  3.2 Image Feature Extraction
  3.3 Proposed Architecture
Chapter 4. Experiments
  4.1 Dataset
  4.2 Training
  4.3 Peak Signal-to-Noise Ratio (PSNR)
  4.4 Structural Similarity (SSIM)
  4.5 Controlled Experiments
Chapter 5. Conclusions and Future Work
References
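
Chapters 4.3 and 4.4 of the outline above evaluate reconstructions with PSNR and SSIM. The Python sketch below is a minimal illustration of how these two metrics are commonly computed; the helper names and the use of scikit-image for SSIM are assumptions, not taken from the thesis.

import numpy as np
from skimage.metrics import structural_similarity  # channel_axis requires scikit-image >= 0.19

def psnr(reference: np.ndarray, result: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference - result) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def ssim(reference: np.ndarray, result: np.ndarray) -> float:
    """Mean structural similarity over the RGB channels of float images in [0, 1]."""
    return structural_similarity(reference, result, data_range=1.0, channel_axis=-1)

# Usage: average psnr(gt, out) and ssim(gt, out) over all test images,
# where gt and out are (H, W, 3) float arrays scaled to [0, 1].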

