
Graduate Student: Yen-Chang Su (蘇彥璋)
Thesis Title: A Novel Background Extraction Method Based on Contrast Measure Guided Image Filter (一種基於對比計算加權引導影像濾波器的背景擷取法)
Advisor: Kai-Lung Hua (花凱龍)
Committee Members: Yun-Gyao Chen (陳永耀), Jiann-Jone Chen (陳建中), Shih-Che Chien (簡士哲)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2016
Graduation Academic Year: 105 (ROC calendar, 2016-2017)
Language: English
Number of Pages: 52
Chinese Keywords: 加權引導影像濾波器 (weighted guided image filter); 背景擷取 (background extraction)
English Keywords: Weighted guided image filter
Usage: 490 views, 6 downloads
Abstract (translated from the Chinese): This thesis proposes a background extraction method based on temporal coherence and color distinctness. The method treats background extraction as an image fusion process and builds on two techniques: motion-less mask extraction and weighted guided image filtering. Because the background is assumed to be stationary, the motion-less mask is extracted from the temporal coherence between pixels and from color saliency. An improved weighted guided image filter, called the contrast measure weighted guided image filter, computes a weight map for each frame from the motion-less mask and contrast sensitivity. The guided filtering step smooths the edges of the weight map and produces finer per-pixel weights from the motion information in the motion-less mask and the texture information in the frame. An undistorted background image is then generated from these weight maps. Experimental results show that the proposed method outperforms most state-of-the-art algorithms; even when a video contains foreground objects with complex motion, it removes all of them and produces a clean background image.
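As a rough illustration of the motion-less mask idea described above, the sketch below marks a pixel as motion-less when its intensity barely changes across neighboring frames. This is a minimal sketch, not the thesis's actual criterion: the function name `motionless_masks` and the threshold `tau` are hypothetical, grayscale frames are assumed, and the color-distinctness term used in the thesis is omitted for brevity.

```python
import numpy as np

def motionless_masks(frames, tau=10.0):
    """Mark pixels whose intensity is stable across neighboring frames.

    frames: list of equally sized grayscale frames (float32 arrays).
    tau:    hypothetical stability threshold in intensity units.
    Returns one binary mask per frame (1 = assumed static background).
    """
    masks = []
    for t, frame in enumerate(frames):
        prev_f = frames[max(t - 1, 0)]
        next_f = frames[min(t + 1, len(frames) - 1)]
        # A pixel is "motion-less" when it barely changes with respect
        # to both of its temporal neighbors.
        diff = np.maximum(np.abs(frame - prev_f), np.abs(frame - next_f))
        masks.append((diff < tau).astype(np.float32))
    return masks
```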


Abstract: In this paper, we present a novel method for robust background extraction that exploits temporal coherence and color distinctness. The proposed method formulates background extraction as an image fusion process and estimates per-frame weights with two key techniques: motion-less mask extraction and a contrast-based weighted guided image filter. The improved weighted guided image filter, called the contrast measure guided image filter (CMGIF), computes the weight map of each frame from the contrast sensitivity and the motion-less mask. The guided filtering process smooths edges and generates finer per-pixel weights by exploiting the motion information in the motion-less mask and the texture information in the video frame, respectively. An artifact-free background is then obtained from these weight maps. In our experiments, the proposed method outperforms state-of-the-art background extraction methods: even when a test video sequence contains complex foreground movement, our method removes all foreground objects and artifacts and recovers a clean background.
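To make the weight-map refinement and fusion steps concrete, here is a minimal sketch of the pipeline under stated assumptions: it substitutes OpenCV's off-the-shelf guided filter (cv2.ximgproc.guidedFilter, available in the opencv-contrib-python package) for the thesis's CMGIF, operates on grayscale frames, and uses illustrative values for radius and eps; the thesis's actual weight computation additionally folds in contrast sensitivity.

```python
import numpy as np
import cv2  # cv2.ximgproc requires the opencv-contrib-python package

def fuse_background(frames, masks, radius=8, eps=1e-2):
    """Refine each motion-less mask into a soft weight map with a guided
    filter (guided by the frame itself), then fuse all frames by a
    per-pixel weighted average to form the background estimate.
    """
    weight_maps = []
    for frame, mask in zip(frames, masks):
        guide = frame.astype(np.float32)
        # Plain guided filter as a stand-in for the thesis's CMGIF;
        # guiding with the frame lets its texture shape the weights.
        w = cv2.ximgproc.guidedFilter(guide, mask.astype(np.float32),
                                      radius, eps)
        weight_maps.append(np.clip(w, 0.0, None))
    weights = np.stack(weight_maps)                  # shape (T, H, W)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    stack = np.stack([f.astype(np.float32) for f in frames])
    return (weights * stack).sum(axis=0)             # fused background
```

The normalization across the time axis makes the per-pixel weights sum to one, so each background pixel is a convex combination of the corresponding pixels over all frames, which is what keeps the fused result free of blending artifacts.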

Contents:
Chinese Abstract
Abstract
Acknowledgements
Contents
List of Tables
List of Figures
1 Introduction
  1.1 General Background Information
  1.2 Related Work
    1.2.1 Background Extraction Method
    1.2.2 Edge-preserving Smoothing Techniques
  1.3 Guided Image Filtering
  1.4 Paper Framework
2 Method
  2.1 Problem Formulation
  2.2 Motion-less Mask Extraction
  2.3 Contrast Measure Guided Image Filter
3 Experimental Results
  3.0.1 Video Sequences and Ground Truth
  3.0.2 Results and Comparisons
4 Conclusion
References

