
Graduate Student: 陳威仰 (Wei-Yang Chen)
Thesis Title: 基於多模型深度全卷積網路之適應性去馬賽克方法 (DeepDemosaicking: Adaptive Image Demosaicking via Multiple Deep Fully Convolutional Networks)
Advisor: 花凱龍 (Kai-Lung Hua)
Committee Members: 花凱龍 (Kai-Lung Hua), 陳永耀 (Yung-Yao Chen), 鍾國亮 (Kuo-Liang Chung), 郭景明 (Jing-Ming Guo), 鄭文皇 (Wen-Huang Cheng)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2017
Graduating Academic Year: 105
Language: English
Number of Pages: 39
Keywords (Chinese): 去馬賽克、深度卷積網路、多模型融合
Keywords (English): Image demosaicking, deep convolutional networks, multi-model fusion
Abstract (Chinese): In recent years, convolutional neural networks have been applied to many computer vision and image processing problems; their deep architectures extract features ranging from low level to high level from the input image, improving performance. In this thesis, we propose an adaptive demosaicking method based on deep convolutional neural networks. Rather than producing the final demosaicked result directly from the mosaic image, the proposed method divides the demosaicking process into two stages: initial demosaicking and image refinement. The initial demosaicking stage quickly produces a rough demosaicked result suitable for the subsequent convolution operations; this result generally contains incorrect color information. The refinement stage then uses a deep convolutional network to predict the difference between the initial result and the correct color image, compensating for the color errors of the previous step. In addition, we train multiple network models to further improve the overall performance of the method. In the experiments, we compare the proposed method with several recent and classical methods; the results show that the proposed method outperforms the others in both subjective visual quality and objective numerical evaluation.
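The record itself contains no code; the following minimal sketch only illustrates the two-stage formulation described above, assuming an RGGB Bayer pattern and bilinear interpolation as the initial step. The function names and the choice of interpolation are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer color filter array (assumed pattern): keep one
    color sample per pixel. `rgb` is a float image in [0, 1], shape H x W x 3."""
    mask = np.zeros_like(rgb)
    mask[0::2, 0::2, 0] = 1.0  # red samples
    mask[0::2, 1::2, 1] = 1.0  # green samples on even rows
    mask[1::2, 0::2, 1] = 1.0  # green samples on odd rows
    mask[1::2, 1::2, 2] = 1.0  # blue samples
    return rgb * mask, mask

def initial_demosaic(mosaic, mask):
    """Rough initial demosaicking via normalized bilinear interpolation:
    every missing value becomes a weighted average of nearby sampled values."""
    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rough = np.empty_like(mosaic)
    for c in range(3):
        num = convolve(mosaic[..., c], kernel, mode="mirror")
        den = convolve(mask[..., c], kernel, mode="mirror")
        rough[..., c] = num / np.maximum(den, 1e-8)
    return rough

# Residual training target for the refinement stage: the network is trained to
# predict the difference between the ground truth and the rough result, not the
# full-color image itself.
#   mosaic, mask = bayer_mosaic(rgb_gt)
#   residual_target = rgb_gt - initial_demosaic(mosaic, mask)
```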


Abstract (English): Convolutional neural networks are currently the state-of-the-art solution for a wide range of image processing tasks. Their deep architecture extracts low- and high-level features from images, thus improving the model's performance. In this thesis, we propose a method for image demosaicking based on deep convolutional neural networks. Demosaicking is the task of reproducing full-color images from the incomplete images formed by the color filter arrays overlaid on the image sensors of digital cameras. Instead of producing the output image directly, the proposed method divides the demosaicking task into an initial demosaicking step and a refinement step. The initial step produces a rough demosaicked image containing unwanted color artifacts. The refinement step then reduces these color artifacts using deep residual estimation and multi-model fusion, producing a higher-quality image. Experimental results show that the proposed method outperforms several existing and state-of-the-art methods in terms of both subjective and objective evaluations.
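To accompany the abstract, here is a hedged PyTorch sketch of what a residual refinement network with weighted multi-model fusion could look like. The `RefineNet` class, its depth and width, and the simple weighted averaging in `fuse` are assumptions made for illustration; the thesis' actual architecture and its Weighted Double Interpolation Fusion (Section 3.4 of the thesis) may differ.

```python
import torch
import torch.nn as nn

class RefineNet(nn.Module):
    """Illustrative fully convolutional refinement network: predicts the
    residual (color-artifact correction) and adds it back to the rough input."""
    def __init__(self, depth: int = 8, width: int = 64):
        super().__init__()
        layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, 3, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, rough: torch.Tensor) -> torch.Tensor:
        # residual learning: output = rough input + predicted correction
        return rough + self.body(rough)

def fuse(models, rough: torch.Tensor, weights) -> torch.Tensor:
    """Weighted multi-model fusion: combine the refined outputs of several
    independently trained networks into a single higher-quality estimate."""
    with torch.no_grad():
        outs = torch.stack([m(rough) for m in models])   # (K, N, 3, H, W)
        w = torch.tensor(weights, dtype=outs.dtype, device=outs.device)
        w = w.view(-1, 1, 1, 1, 1)
        return (w * outs).sum(dim=0) / w.sum()
```

The design choice sketched here is that each model adds its own predicted residual to the shared rough input, so fusion operates on full-color estimates rather than on residuals.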

Table of Contents:
Abstract in Chinese
Abstract in English
Acknowledgements
Contents
List of Figures
List of Tables
1 Introduction
2 Related Works
3 Proposed DeepDemosaicking Method
3.1 Initial Demosaicking
3.2 Deep Demosaicking Network Architecture
3.3 Multi-model Training
3.4 Weighted Double Interpolation Fusion
4 Experimental Results
4.1 Implementation Details
4.1.1 Ablation Experiments
4.2 Comparison with State-of-the-art Algorithms
5 Conclusions
References


Full-text release date: 2022/07/21 (campus network)
Full text not authorized for public release (off-campus network)
Full text not authorized for public release (National Central Library: Taiwan thesis and dissertation system)