
Author: Jian-You Huang (黃健祐)
Thesis title: Intelligent Object Detection for Sudden Illumination Environments Based on Entropy Theory (植基於熵理論之瞬間光影變化場景之智慧型物件偵測)
Advisor: Shanq-Jang Ruan (阮聖彰)
Committee members: Kuo-Liang Chung (鐘國亮), Mon-Chau Shie (許孟超), Chang-Hong Lin (林昌鴻)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2010
Graduation academic year: 98
Language: English
Number of pages: 45
Keywords (Chinese): 影像監控 (video surveillance), 物件偵測 (object detection), 瞬間光影變化 (sudden illumination change)
Keywords (English): surveillance, motion detection, sudden illumination change, entropy
    Detecting moving objects is a crucial step for video surveillance systems. Among moving-object detection approaches, background subtraction is one of the two main categories. It requires a correct background image to be generated for every frame of the video, yet many background-subtraction-based methods produce obvious errors when processing videos that contain sudden illumination changes, especially when a light is suddenly switched on or off. To analyze how such illumination changes arise, and to properly select the background regions of each frame so that a good background model can be built, this thesis proposes a novel moving-object detection method. The proposed method uses frame-based, block-based, and pixel-based schemes so that the background model can be updated effectively and stably by selecting suitable background regions, and in the final stage a binary object detection mask is generated to represent the detection result.
    The proposed method is compared with other methods both visually and through quantitative analysis. The experimental results show that its effectiveness is clearly visible in scenes with sudden illumination changes as well as in ordinary scenes, and that it significantly outperforms the other methods in the quantitative evaluation.
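
    For context, the sketch below illustrates only the generic subtract-and-threshold step of background subtraction described above. The file name, threshold, learning rate, and the naive blind running-average update are assumptions for illustration; they are not the background model proposed in the thesis.

        # Minimal background-subtraction sketch (illustrative only):
        # subtract a background estimate from each frame and threshold
        # the absolute difference to obtain a binary detection mask.
        import cv2
        import numpy as np

        THRESHOLD = 30    # assumed intensity threshold for the binary mask
        ALPHA = 0.05      # assumed learning rate for the naive update

        cap = cv2.VideoCapture("input.avi")   # hypothetical input clip
        ok, frame = cap.read()
        background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            diff = np.abs(gray - background)
            mask = (diff > THRESHOLD).astype(np.uint8) * 255   # binary detection mask
            # naive blind running-average update; the thesis instead updates selectively
            background = (1.0 - ALPHA) * background + ALPHA * gray

        cap.release()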


    The detection of moving objects in video streams is of critical importance in the process of information extraction for video surveillance systems. Background subtraction is a popular motion detection method in which it is necessary to generate a correct background image for each frame of the video stream. However, many previous state-of-the-art background subtraction approaches experience significant errors during sudden illumination changes, especially when a light is suddenly switched on or off. In order to analyze the illumination change and determine the background candidates for accurate motion detection, we propose a novel motion detection approach in this thesis. The proposed method makes appropriate use of frame-based, block-based, and pixel-based schemes, allowing the proposed background model to be selectively updated via an effective stable-signal training procedure. As the final step of the process, a binary object detection mask is generated by the mask computation procedure.
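
    As one possible illustration of a frame-level entropy check, the sketch below flags a sudden illumination change when the Shannon entropy of the intensity histogram jumps between consecutive frames. The decision rule and the ENTROPY_JUMP threshold are assumptions; the thesis's actual illumination evaluation module is not specified in this abstract.

        # Sketch of a frame-level illumination check (assumed criterion).
        import numpy as np

        ENTROPY_JUMP = 0.5   # assumed threshold on the entropy difference (bits)

        def frame_entropy(gray):
            """Shannon entropy of an 8-bit grayscale frame's intensity histogram."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def sudden_illumination_change(prev_gray, curr_gray):
            """True when the frame entropy changes abruptly between consecutive frames."""
            return abs(frame_entropy(curr_gray) - frame_entropy(prev_gray)) > ENTROPY_JUMP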

    The results of the proposed method are compared with those produced by other state-of-the-art methods, both qualitatively through visual comparison and quantitatively through metrics and an error detection ratio. The overall results demonstrate that the proposed method attains a substantially higher degree of efficacy and outperforms the other state-of-the-art methods, not only under sudden illumination changes but also in environments with uniform illumination.
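
    A hypothetical sketch of the kind of pixel-wise evaluation implied here: precision, recall, and similarity computed against a ground-truth mask. The exact metrics and error detection ratio used in the thesis are not defined in this abstract, so these are illustrative stand-ins.

        # Pixel-wise evaluation of a binary detection mask against ground truth
        # (assumed metrics, not necessarily those used in the thesis).
        import numpy as np

        def mask_metrics(detected, ground_truth):
            """Both inputs are boolean arrays of the same shape."""
            tp = np.logical_and(detected, ground_truth).sum()
            fp = np.logical_and(detected, ~ground_truth).sum()
            fn = np.logical_and(~detected, ground_truth).sum()
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            similarity = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
            return precision, recall, similarity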

    Table of Contents
    List of Figures
    Abstract
    1 Introduction
      1.1 Observation and Motivation
      1.2 Organization of This Thesis
    2 Related Techniques of Object Detection
      2.1 Simple Background Subtraction
      2.2 Running Average
      2.3 Σ-Δ Estimation
      2.4 Multiple Σ-Δ Estimation
      2.5 Simple Statistical Difference
      2.6 Temporal Median Filter
    3 Proposed Method
      3.1 Illumination Evaluation Module
      3.2 Background Recognition Module
      3.3 Background Modeling Module
      3.4 Motion Detection Module
    4 Experimental Results
      4.1 Background Model
      4.2 Motion Detection Result
      4.3 Computation Complexity
    5 Implementation
      5.1 Software Model Implementation
      5.2 Hardware Acceleration Architecture
    6 Conclusion
    Bibliography

