
Graduate Student: 何芳琳 (Fang-Lin He)
Thesis Title: Context-Aware Joint Dictionary Learning for Color Image Demosaicking (基於共同字典學習之去馬賽克演算法)
Advisor: 花凱龍 (Kai-Lung Hua)
Committee Members: 賴祐吉 (Yu-Chi Lai), 王鈺強 (Yu-Chiang Frank Wang)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2015
Graduation Academic Year: 103 (ROC calendar)
Language: Chinese
Number of Pages: 39
Keywords: image processing, machine learning, dictionary learning, demosaicking
  • Most consumer digital cameras use a single image sensor overlaid with a color filter array (CFA), so each pixel location records only one of the three primary color values; such an incomplete color image is called a mosaicked image. The process of reconstructing a full-color image from the mosaicked image during imaging is known as a demosaicking algorithm. In this thesis we propose a demosaicking algorithm based on joint dictionary learning. Given a mosaicked image, we first classify its content by color and texture characteristics and learn a joint dictionary for each category, then apply joint sparse representation to predict the color information missing from the true color image. During dictionary learning and sparse coding, our algorithm imposes a locality constraint to locate the information in the dictionary most relevant to the image. Experimental results show that the proposed method outperforms existing and state-of-the-art methods in both subjective and objective comparisons.
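The CFA sampling described above can be simulated in a few lines. The sketch below uses the common RGGB Bayer layout [1]; it is an illustrative helper, not code from the thesis:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate single-sensor capture through a Bayer RGGB color filter array.

    rgb : (H, W, 3) array. Returns an (H, W) mosaicked image in which each
    pixel keeps only the one color channel its filter passes.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

A demosaicking algorithm receives only `mosaic` and must estimate the two discarded channels at every pixel.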


    Most digital cameras use electronic sensors overlaid with a color filter array (CFA), so only one color value is captured at each pixel location. When producing the output image, one needs to recover the full-color image from such incomplete color samples, a process known as demosaicking. In this thesis, we propose a novel context-constrained demosaicking algorithm via sparse-representation-based joint dictionary learning. Given a single mosaicked image with incomplete color samples, we perform color- and texture-constrained image segmentation and learn a dictionary for each context category. A joint sparse representation is employed on the different image components to predict the missing color information in the resulting full-color image. During the dictionary learning and sparse coding processes, we advocate a locality constraint in our algorithm, which allows us to locate the most relevant image data and thus achieve improved demosaicking performance. Experimental results show that the proposed method outperforms several existing and state-of-the-art techniques in terms of both subjective and objective evaluations.
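The locality constraint advocated in the abstract can be illustrated with a minimal numpy sketch in the spirit of locality-constrained linear coding [33]: instead of coding a patch over the whole dictionary, only its nearest atoms are used. Function names, the `k` and `beta` parameters, and the closed-form solve are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def llc_code(y, D, k=5, beta=1e-4):
    """Locality-constrained coding of a patch over a dictionary.

    y : (d,) patch vector; D : (d, n) dictionary, one atom per column.
    Returns an (n,) coefficient vector that is nonzero only on the
    k atoms nearest to y.
    """
    # 1. Locality: keep only the k dictionary atoms closest to the patch.
    dists = np.linalg.norm(D - y[:, None], axis=0)
    idx = np.argsort(dists)[:k]
    B = D[:, idx]                                # (d, k) local basis

    # 2. Closed-form constrained least squares: min ||y - B a||^2, 1^T a = 1.
    C = (B - y[:, None]).T @ (B - y[:, None])    # local covariance
    C += beta * np.trace(C) * np.eye(k)          # regularize for stability
    a = np.linalg.solve(C, np.ones(k))
    a /= a.sum()                                 # enforce the sum-to-one constraint

    # 3. Embed back into a full-length, mostly zero code.
    code = np.zeros(D.shape[1])
    code[idx] = a
    return code
```

Restricting the code to nearby atoms is what lets the method favor dictionary entries from the same context category as the patch being reconstructed.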

    1 Introduction
    2 A Brief Review of Sparse Representation
    3 Image Demosaicking via Locality-Sensitive Joint Dictionary Learning
    3.1 Problem Setting
    3.2 Context-Constrained Image Segmentation and Categorization
    3.3 Our Self-Learning Strategy for Image Demosaicking
    3.4 Self-Learning of Locality-Sensitive Joint Dictionary
    3.4.1 Sparse Coding for Updating A
    3.4.2 Dictionary Update of D
    3.5 Predicting the Demosaicked Output
    4 Experimental Results
    5 Conclusions
    References

    [1] B. Bayer, “Color imaging array,” tech. rep., U.S. Patent 3971065, 1976.
    [2] J. S. Ho, O. C. Au, J. Zhou, and Y. Guo, “Inter-channel demosaicking traces for digital image forensics,” in IEEE Int. Conf. on Multimedia and Expo, pp. 1475–1480, 2010.
    [3] K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process., vol. 14, no. 3, pp. 360–369, 2005.
    [4] B. K. Gunturk, Y. Altunbasak, and R. M. Mersereau, “Color plane interpolation using alternating projections,” IEEE Trans. Image Process., vol. 11, no. 9, pp. 997–1013, 2002.
    [5] I. Pekkucuksen and Y. Altunbasak, “Edge strength filter based color filter array interpolation,” IEEE Trans. Image Process., vol. 21, no. 1, pp. 393–397, 2012.
    [6] A. Horé and D. Ziou, “An edge-sensing generic demosaicing algorithm with application to image resampling,” IEEE Trans. Image Process., vol. 20, no. 11, pp. 3136–3150, 2011.
    [7] Y. Itoh, “Similarity-based demosaicing algorithm using unified high-frequency map,” IEEE Trans. Consum. Electron., vol. 57, no. 2, pp. 597–605, 2011.
    [8] I. Pekkucuksen and Y. Altunbasak, “Edge strength filter based color filter array interpolation,” IEEE Trans. Image Process., vol. 21, no. 1, pp. 393–397, 2012.
    [9] I. Pekkucuksen and Y. Altunbasak, “Edge oriented directional color filter array interpolation,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pp. 993–996, 2011.
    [10] F. Zhang, X. Wu, X. Yang, W. Zhang, and L. Zhang, “Robust color demosaicking with adaptation to varying spectral correlations,” IEEE Trans. Image Process., vol. 18, no. 12, pp. 2706–2717, 2009.
    [11] X. Wu, D. Gao, G. Shi, and D. Liu, “Color demosaicking with sparse representations,” in IEEE Int. Conf. on Image Processing, 2010.
    [12] D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in IEEE Int. Conf. on Computer Vision, 2009.
    [13] C.-Y. Yang, J.-B. Huang, and M.-H. Yang, “Exploiting self-similarities for single frame super-resolution,” in Asian Conf. of Computer Vision, 2010.
    [14] M.-C. Yang and Y.-C. F. Wang, “A self-learning approach to single image super-resolution,” IEEE Trans. on Multimedia, vol. 15, no. 3, pp. 498–508, 2013.
    [15] H. S. Malvar, L.-W. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2004.
    [16] R. Lukac and K. N. Plataniotis, “Universal demosaicking for imaging pipelines with an RGB color filter array,” Pattern Recognition, vol. 38, no. 11, pp. 2208–2212, 2005.
    [17] A. Moghadam, M. Aghagolzadeh, M. Kumar, and H. Radha, “Compressive framework for demosaicing of natural images,” IEEE Trans. Image Process., vol. 22, no. 6, pp. 2356–2371, 2013.
    [18] M. Elad, M. Figueiredo, and Y. Ma, “On the role of sparse and redundant representations in image processing,” Proceedings of the IEEE, vol. 98, no. 6, pp. 972–982, 2010.
    [19] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan, “Sparse representation for computer vision and pattern recognition,” Proceedings of the IEEE, vol. 98, no. 6, pp. 1031–1044, 2010.
    [20] A. Rehman, Z. Wang, D. Brunet, and E. R. Vrscay, “SSIM-inspired image denoising using sparse representations,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pp. 1121–1124, 2011.
    [21] J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process., vol. 19, no. 11, pp. 2861–2873, 2010.
    [22] J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570–1582, 2005.
    [23] B. K. Natarajan, “Sparse approximate solutions to linear systems,” SIAM J. Comput., vol. 24, no. 2, pp. 227–234, 1995.
    [24] D. L. Donoho and Y. Tsaig, “Fast solution of ℓ1-norm minimization problems when the solution may be sparse,” IEEE Trans. Inform. Theory, vol. 54, no. 11, pp. 4789–4812, 2008.
    [25] J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Trans. Inform. Theory, vol. 50, no. 10, pp. 2231–2242, 2004.
    [26] R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Roy. Statist. Soc. Ser. B, vol. 58, no. 1, pp. 267–288, 1996.
    [27] J. A. Tropp and S. J. Wright, “Computational methods for sparse solution of linear inverse problems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.
    [28] M.-C. Yang, C.-H. Wang, T.-Y. Hu, and Y.-C. Wang, “Learning context-aware sparse representation for single image super-resolution,” in IEEE Int. Conf. on Image Processing, 2011.
    [29] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
    [30] Y. Huang, “Adaptive demosaicking using multiple neural networks,” in IEEE Workshop on Machine Learning for Signal Processing, pp. 353–357, 2006.
    [31] H. Siddiqui and H. Hwang, “Training-based demosaicing,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pp. 1034–1037, 2010.
    [32] C. E. Duchon, “Lanczos filtering in one and two dimensions,” J. Appl. Meteor., vol. 18, pp. 1016–1022, Aug. 1979.
    [33] J. Wang, J. Yang, K. Yu, F. Lv, T. S. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” in IEEE Conf. on Computer Vision and Pattern Recognition, pp. 3360–3367, 2010.
    [34] C.-P. Wei, Y.-W. Chao, Y.-R. Yeh, and Y.-C. F. Wang, “Locality-sensitive dictionary learning for sparse representation based classification,” Pattern Recognition, vol. 46, no. 5, pp. 1277–1287, 2013.
    [35] K. Yu, T. Zhang, and Y. Gong, “Nonlinear learning using local coordinate coding,” in Advances in Neural Information Processing Systems 22, pp. 2223–2231, 2009.
    [36] E. Dubois, “Frequency domain methods for demosaicking of Bayer-sampled color images,” 2005. [Online; accessed 18-Feb-2014].
    [37] A. Moghadam, M. Aghagolzadeh, M. Kumar, and H. Radha, “Compressive demosaicing,” 2013. [Online; accessed 18-Feb-2014].

    Full text available from 2020/02/04 (campus network)
    Full text available from 2025/02/04 (off-campus network)
    Full text available from 2025/02/04 (National Central Library: Taiwan NDLTD system)