
Graduate Student: Ying-Jie Lin (林穎傑)
Thesis Title: Fast Image Completion Algorithm Based on Local Similarity
(利用局部相似關係之快速影像修補演算法)
Advisor: Chang Hong Lin (林昌鴻)
Committee Members: Wei-Mei Chen (陳維美), Jenq-Shiou Leu (呂政修), Chung-An Shen (沈中安)
Degree: Master
Department: Department of Electronic and Computer Engineering (電資學院 - 電子工程系)
Year of Publication: 2016
Academic Year of Graduation: 104
Language: English
Number of Pages: 44
Chinese Keywords: image completion (影像修補), image restoration (影像恢復), local similarity (局部相似)
English Keywords: Image Completion, Image Inpainting, Texture Synthesis
  • Image completion is widely used to repair damage in digitized versions of old artworks and to remove unwanted objects from photos or videos and fill in the resulting holes; the goal is for the repaired image to show no obvious distortion or artifacts. This thesis proposes a fast image completion algorithm based on local similarity. We first use local similarity to determine whether the image to be repaired exhibits a horizontal or a vertical relationship, then use that relationship to fill in the colors of the damaged region, and finally compute and adjust the luminance of the filled region so that the result matches human visual perception. Compared with previous methods, the experimental results show that our algorithm is fast and free of the discontinuity artifacts that image completion commonly produces.
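    The first step is described only at a high level in both abstracts: local similarity is used to decide whether the damaged image should be filled along the horizontal or the vertical direction. The similarity measure itself is not given here, so the Python sketch below is only an illustration, assuming that local similarity is measured as the correlation between adjacent rows and between adjacent columns of the intact pixels; the function name choose_fill_direction and its mask convention are hypothetical, not taken from the thesis.

        import numpy as np

        def choose_fill_direction(image, mask):
            """Return 'horizontal' or 'vertical' as the preferred filling direction.

            image : (H, W) grayscale array
            mask  : boolean array of the same shape, True where pixels are damaged
            """
            valid = ~mask

            def mean_neighbour_corr(axis):
                # Correlate each line with the next one along the given axis,
                # using only pixels that are intact in both lines.
                corrs = []
                for i in range(image.shape[axis] - 1):
                    x = np.take(image, i, axis=axis).astype(float)
                    y = np.take(image, i + 1, axis=axis).astype(float)
                    keep = np.take(valid, i, axis=axis) & np.take(valid, i + 1, axis=axis)
                    if keep.sum() > 2 and x[keep].std() > 0 and y[keep].std() > 0:
                        corrs.append(np.corrcoef(x[keep], y[keep])[0, 1])
                return float(np.mean(corrs)) if corrs else 0.0

            row_sim = mean_neighbour_corr(axis=0)  # similarity between adjacent rows
            col_sim = mean_neighbour_corr(axis=1)  # similarity between adjacent columns
            # If adjacent columns resemble each other more, intensities change slowly
            # from left to right, so propagating known pixels horizontally is preferred.
            return 'horizontal' if col_sim >= row_sim else 'vertical'

    The chosen direction would then drive filling the hole from its two opposite sides, in the spirit of the "dynamic two-side filling" step listed in the table of contents.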


    Image completion is widely used in artwork restoration, unwanted-object removal
    and hole-filling. It is the process of filling in the regions left by large removed
    objects in digital images in a visually satisfactory way. In this thesis, we present
    a fast texture synthesis and image completion method based on local similarity.
    First, we use local similarity to determine the horizontal or vertical symmetry of
    the image, and restore damaged regions accordingly. After all damaged areas have
    been filled, we adjust the luminance in a visually reasonable way. Compared with
    previous works, the proposed method either achieves better visual quality at similar
    speed, or similar visual quality with much shorter processing time. Experimental
    results demonstrate that the proposed approach propagates appropriate color and
    luminance distributions into the missing regions, producing a visually plausible
    image without noticeable artifacts.
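
    The table of contents lists a "Luminance gradation" step (Section 3.3) that adjusts the brightness of the filled region after the colours have been propagated, but neither abstract gives the rule. The sketch below is a minimal stand-in, assuming a single mean-matching gain over the filled pixels; blend_luminance, the three-pixel border band, and the use of SciPy's binary dilation are assumptions, not the thesis's actual method.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def blend_luminance(filled, mask, border_width=3):
            """Rescale the luminance inside the filled region so that its mean
            matches a thin band of intact pixels surrounding the hole.

            filled : (H, W) grayscale image after the colour-filling step
            mask   : boolean array, True where pixels were synthesised
            """
            # Band of known pixels immediately around the filled region.
            ring = binary_dilation(mask, iterations=border_width) & ~mask
            if not ring.any() or not mask.any():
                return filled

            target = filled[ring].mean()    # brightness the hole should blend into
            current = filled[mask].mean()   # brightness the fill currently has
            if current == 0:
                return filled

            out = filled.astype(float)
            out[mask] *= target / current   # one global gain applied inside the hole
            return np.clip(out, 0, 255).astype(filled.dtype)

    A single gain preserves the relative texture of the synthesised pixels while pulling their average brightness toward the surrounding band, which is enough to avoid an obvious luminance seam at the hole boundary.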

    Table of Contents
    摘要 (Abstract in Chinese)
    Abstract
    致謝 (Acknowledgements)
    List of Contents
    List of Figures
    List of Tables
    Chapter 1 INTRODUCTION
        1.1 Motivation
        1.2 Contributions
        1.3 Thesis organization
    Chapter 2 RELATED WORKS
        2.1 Diffusion-based methods
        2.2 Exemplar-based methods
        2.3 Local similarity-based methods
    Chapter 3 PROPOSED METHODS
        3.1 Local similarity
        3.2 Dynamic two-side filling
        3.3 Luminance gradation
        3.4 Process of remaining unknown region
    Chapter 4 DISCUSSIONS OF RESULTS
        4.1 Comparisons to other works
        4.2 Analysis of luminance distribution
        4.3 Analysis of computation time
    Chapter 5 CONCLUSIONS
    REFERENCES

