
Author: Chin-Yun Cheng (鄭欽允)
Thesis Title: Background Extraction Based on Joint Gaussian Conditional Random Fields (一種基於混合高斯條件隨機場的背景擷取法)
Advisor: Kai-Lung Hua (花凱龍)
Committee Members: 陳永耀, 簡士哲, 鍾聖倫
Degree: Master's
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2017
Graduation Academic Year: 105
Language: English
Number of Pages: 47
Keywords: background extraction, conditional random fields, image fusion
Background extraction is an important task for computer vision and augmented reality, but most existing background-extraction methods are unsuitable for videos with complex foreground motion. This thesis proposes a background extraction method based on joint Gaussian conditional random fields, which estimates optimal frame weights to extract the background from a fixed-view video. Using intra-frame and inter-frame relationships, the method takes the spatial and temporal coherence and the contrast among pixels as the basis of the frame weights. Since the background is assumed to be static, the thesis also proposes a motion-less patch extraction based on temporal coherence. In addition, an objective performance-evaluation methodology is suggested. Experimental results show that the proposed method is more efficient and robust than most state-of-the-art algorithms.


Background extraction is important for applications in computer vision and augmented reality. Most existing methods are not suitable for video sequences containing complex foreground movement. Therefore, this work introduces a novel extraction method based on joint Gaussian conditional random fields (JGCRF) to estimate optimal frame weights for compositing a clear background from a fixed-view video sequence. The proposed algorithm analyzes intra-frame and inter-frame relationships to consider spatial and temporal coherence and contrast distinctness among pixels as the basis of the frame weights. Since the background is assumed to be static, a motion-less patch extractor is developed based on temporal coherence. Furthermore, an objective methodology for performance evaluation is suggested. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over several state-of-the-art algorithms.
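The weighting idea in the abstract can be sketched in a few lines of NumPy. This is a simplified stand-in, not the thesis's actual JGCRF formulation: here each patch's frame weight is just the inverse of its temporal variance, so motion-less ("static") patches dominate the composite, mimicking the role of the motion-less patch extractor. The function name and parameters are illustrative.

```python
import numpy as np

def composite_background(frames, patch=8, eps=1e-6):
    """Sketch: weight patches by temporal stability and composite a background.

    frames: array of shape (T, H, W), grayscale, with H and W divisible by
    `patch`. Illustrative stand-in for JGCRF frame-weight estimation: a
    patch instance that stays constant over time (low temporal variance)
    receives a high weight, so the weighted sum favors static content.
    """
    T, H, W = frames.shape
    out = np.zeros((H, W))
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            block = frames[:, y:y + patch, x:x + patch]  # (T, p, p)
            # Temporal variance of each frame's patch vs. the per-pixel mean.
            var = ((block - block.mean(axis=0)) ** 2).mean(axis=(1, 2))  # (T,)
            w = 1.0 / (var + eps)      # static patches get large weights
            w /= w.sum()               # normalize weights over frames
            # Weighted sum over the time axis composites the background patch.
            out[y:y + patch, x:x + patch] = np.tensordot(w, block, axes=(0, 0))
    return out
```

On a synthetic clip where one frame contains a bright foreground blob, the blob's patch is down-weighted and the composite stays close to the static background value; the real method replaces this inverse-variance heuristic with weights inferred jointly under the JGCRF model.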

Contents:
Advisor's Recommendation-I
Committee Approval-II
Chinese Abstract-III
Abstract-IV
Acknowledgements-V
Contents-VI
List of Tables-VIII
List of Figures-IX
1 Introduction-1
1.1 General Background Information-1
1.2 Related Work-1
1.3 Paper Framework-4
2 Method-7
2.1 Problem Formulation-7
2.2 Joint Gaussian Conditional Random Field for Background Extraction-8
2.3 Motion-less Patch Extraction-12
3 Experimental Results-17
3.1 Video Sequences and Ground Truth-17
3.2 Results and Comparisons-19
4 Conclusion-43
References-44


Full text available from 2022/01/10 (campus network)
Full text available from 2027/01/10 (off-campus network)
Full text available from 2027/01/10 (National Central Library: Taiwan NDLTD system)