
Graduate Student: 張博勝 (Bo-Sheng Zhang)
Thesis Title: 以彩色影像結合深度影像為基礎之背景濾除 (A Depth Image Assisted Method for Color Image Background Subtraction)
Advisor: 鍾國亮 (Kuo-Liang Chung)
Committee Members: 吳怡樂 (Yi-Leh Wu), 花凱龍 (Kai-Lung Hua), 徐繼聖 (Gee-Sern Hsu), 陳建中 (Jiann-Jone Chen)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2012
Graduation Academic Year: 100 (ROC calendar; 2011-2012)
Language: Chinese
Number of Pages: 42
Chinese Keywords: 背景濾除 (background subtraction), 前景偵測 (foreground detection), 背景模型 (background model), 深度影像 (depth image)
Foreign Keywords: Background subtraction, background modeling, foreground detection, depth map
  • Background subtraction is an important technique for efficiently segmenting foreground objects from a scene so that they can be passed on to later analysis and applications. The system must first build a reliable background model in order to raise the accuracy of foreground detection and lower the rate at which noise is misclassified as foreground. This thesis proposes a method that combines color images with depth images to construct a dual background model, which can still yield an accurate background model in regions with heavy object motion. Background subtraction is then performed separately on the color image and the depth image, and the two resulting foregrounds are merged to strengthen the completeness of the detected objects and to remove the noise caused by dynamic backgrounds. Finally, to keep the background model adaptive to changes in scene illumination, the model is continuously updated as time goes on.
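The dual-model construction sketched in the abstract above can be pictured with a short, hypothetical code fragment. The following Python/NumPy snippet is only a minimal sketch, not the algorithm proposed in the thesis: it assumes a per-pixel temporal median over a stack of training frames as a stand-in for the thesis's more careful construction of the color and depth background models in high-traffic regions.

```python
import numpy as np

def build_background_models(color_frames, depth_frames):
    """Build color and depth background models from training frames.

    color_frames: ndarray of shape (T, H, W, 3), e.g. uint8 RGB frames.
    depth_frames: ndarray of shape (T, H, W), e.g. uint16 depth values.

    NOTE: the per-pixel temporal median used here is only a stand-in for
    the thesis's dual-model construction, which additionally filters out
    unreliable pixels in regions with heavy object motion.
    """
    color_bg = np.median(color_frames, axis=0).astype(color_frames.dtype)
    depth_bg = np.median(depth_frames, axis=0).astype(depth_frames.dtype)
    return color_bg, depth_bg
```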


    Background subtraction is an important process for extracting foreground objects for real-time analysis such as tracking and recognition. Its central task is to generate a background model which is then subtracted from each frame to extract the foreground objects. In this thesis, we propose a novel background modeling algorithm that uses color and depth information to build two background models, one from the color information and one from the depth information only. Each background model is subtracted from the corresponding input to obtain a color foreground and a depth foreground, which are then combined to yield the final foreground objects. The proposed method substantially removes noise while preserving object details, especially in high-traffic areas. Empirical results show that the proposed method outperforms existing background subtraction algorithms in both quantitative and qualitative measures.
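To make the later stages of the pipeline concrete, here is a similarly hypothetical sketch of per-frame subtraction against the two models, merging of the two foreground masks, and a running-average background update. The thresholds, the update rate alpha, the treatment of zero-depth pixels, and the logical-OR merge are all assumptions for illustration; the thesis's actual subtraction, merging, and update rules are described in Sections 3.3-3.5.

```python
import numpy as np

def subtract_and_merge(color_frame, depth_frame, color_bg, depth_bg,
                       color_thresh=30.0, depth_thresh=50.0, alpha=0.05):
    """One illustrative iteration of dual background subtraction.

    The thresholds and the update rate `alpha` are made-up parameters,
    not values from the thesis.
    """
    # Per-pixel color difference (maximum over the three channels).
    color_diff = np.abs(color_frame.astype(np.float32) -
                        color_bg.astype(np.float32)).max(axis=2)
    color_fg = color_diff > color_thresh

    # Per-pixel depth difference; pixels with no depth reading (assumed to
    # be encoded as 0, Kinect-style) are ignored.
    depth_diff = np.abs(depth_frame.astype(np.float32) -
                        depth_bg.astype(np.float32))
    depth_fg = (depth_diff > depth_thresh) & (depth_frame > 0)

    # Merge the two foreground masks (logical OR as one simple choice; the
    # thesis combines them more carefully to both complete objects and
    # suppress dynamic-background noise).
    foreground = color_fg | depth_fg

    # Running-average update of the color background on background pixels
    # only, to track gradual illumination changes over time.
    bg_pixels = ~foreground
    updated_color_bg = color_bg.astype(np.float32)
    updated_color_bg[bg_pixels] = ((1 - alpha) * updated_color_bg[bg_pixels] +
                                   alpha * color_frame[bg_pixels].astype(np.float32))
    return foreground, updated_color_bg.astype(color_bg.dtype)
```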

    Abstract (Chinese)
    Abstract
    List of Figures and Tables
    Chapter 1  Introduction
        1.1  Research Motivation and Objectives
        1.2  System Overview
            1.2.1  Background Subtraction Algorithms
            1.2.2  Common Problems of Background Subtraction
            1.2.3  Depth Images
        1.3  Thesis Organization
    Chapter 2  Related Work
        2.1  Mean Model
        2.2  Median Model
        2.3  Probabilistic Model
        2.4  Gaussian Mixture Model Combined with Depth
    Chapter 3  Background Subtraction Using Color and Depth Information
        3.1  System Flow
        3.2  Construction of the Dual Background Models
            3.2.1  Background Model Built from Depth Information
            3.2.2  Background Model Built from Color Information
        3.3  Background Subtraction
        3.4  Foreground Merging
        3.5  Background Updating
    Chapter 4  Experimental Results
        4.1  Evaluation Method
        4.2  Comparison with Other Methods
    Chapter 5  Conclusion
    References

