
Graduate Student: Ting-Wei Chu (朱庭葳)
Thesis Title: Edge Enhanced SIFT for Moving Object Detection (以邊緣強化之SIFT偵測移動物件)
Advisor: Shun-Feng Su (蘇順豐)
Committee Members: Wei-Yen Wang (王偉彥), Mei-Yung Chen (陳美勇), Sheng-Dong Xu (徐勝均)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2016
Graduation Academic Year: 104 (2015/2016)
Language: English
Number of Pages: 108
Chinese Keywords: motion detection, smart surveillance, pedestrian detection, scale-invariant feature transform, outlier removal
English Keywords: smart surveillance, LoG, outlier removal
Abstract (Chinese):
    This study addresses moving object detection in video surveillance images. Many methods exist for detecting motion in surveillance video; the most common approach is to find specific feature points on an object, compute the displacement of those points between two images, and use that displacement to define the object's speed. However, such manually defined feature points are difficult to specify and to obtain, especially when the target objects cannot be known in advance. In this study, the scale-invariant feature transform (SIFT) algorithm is used to define feature points of moving objects. Traditionally, SIFT is used for feature matching in static images; because it is invariant to scale, location, and rotation, it yields good measurement and matching results even when the target is partially occluded or photographed from a different angle. However, when the detected object is moving, SIFT matching does not perform as well as expected, because incorrect feature points may appear while the object moves. In this study, we propose to enhance the edge characteristics of the image to improve feature matching; the resulting moving object detection method is called LoG-SIFT. In addition, a histogram of moving angles is analyzed to obtain the motion direction of the feature points, and only the correct feature points are used for further computation. Outliers among the matched points are removed in this way to improve detection accuracy. Experimental results show that the proposed method achieves better recognition performance than conventional SIFT.
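The Python sketch below illustrates the edge-enhancement idea behind LoG-SIFT as described in the abstract: a Laplacian of Gaussian (LoG) response is used to sharpen edges before running standard SIFT. It assumes OpenCV (cv2) and NumPy; the function name log_enhanced_sift and the parameters sigma and alpha are illustrative assumptions, not values taken from the thesis.

```python
import cv2
import numpy as np

def log_enhanced_sift(gray, sigma=1.0, alpha=0.5):
    """Run SIFT on an edge-enhanced image (illustrative sketch, not the thesis code).

    gray: single-channel uint8 image; sigma and alpha are illustrative parameters.
    """
    # Laplacian of Gaussian response: Gaussian smoothing followed by the Laplacian.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    log_response = cv2.Laplacian(blurred, cv2.CV_32F)

    # Subtract a scaled LoG response to emphasize edges (unsharp-masking style).
    enhanced = np.clip(gray.astype(np.float32) - alpha * log_response, 0, 255).astype(np.uint8)

    # Standard SIFT keypoint detection and description on the enhanced image.
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(enhanced, None)
```

Matching the resulting descriptors between consecutive frames (for example with cv2.BFMatcher) would then provide the keypoint displacements used for motion estimation.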


Abstract (English):
    This thesis reports our study on moving object detection from surveillance images. Many methods are employed nowadays for object detection in video-based surveillance. For motion detection, the usual approach is to find specific features and then use the motion of those features between images to define the speeds of objects. However, such human-defined features may be difficult to specify and to acquire, especially when the objects are not known in advance. In this study, the scale-invariant feature transform (SIFT) method is adopted to define features for motion detection. Traditionally, SIFT is used for static images. Because it is invariant to scale and rotation, SIFT can still achieve good matching performance even when the foreground target is partially occluded or the image is taken from a different angle or distance. However, when applied to detecting moving objects, SIFT does not work well, because incorrectly matched features may arise while objects move. In this study, we propose to enhance the edge properties of the image to improve feature matching, and a Laplacian of Gaussian (LoG) based SIFT is proposed for motion detection. In addition, a histogram of moving angles is built and analyzed to obtain the majority of features with similar moving angles; these features are taken as the correct ones for further motion identification. A way of defining outliers among matched features is also proposed to improve detection accuracy. Experimental results show that the proposed approach achieves much better recognition performance than the original SIFT approach.
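As a rough illustration of the moving-angle histogram described above, the sketch below bins the displacement angles of matched keypoints between two frames and keeps only the matches falling in the dominant bin. The function name, the 10-degree bin width, and the single-bin majority rule are assumptions made for illustration; the thesis may define the majority set and the outlier criterion differently.

```python
import numpy as np

def filter_matches_by_angle(pts_prev, pts_curr, num_bins=36):
    """Keep matches whose motion direction falls in the dominant angle bin (sketch).

    pts_prev, pts_curr: (N, 2) arrays of matched keypoint coordinates in two frames.
    num_bins=36 gives 10-degree bins; this value is an illustrative choice.
    """
    # Displacement of each matched keypoint and its moving angle in [0, 360) degrees.
    motion = pts_curr - pts_prev
    angles = np.degrees(np.arctan2(motion[:, 1], motion[:, 0])) % 360.0

    # Histogram of moving angles; the most populated bin is taken as the true direction.
    hist, edges = np.histogram(angles, bins=num_bins, range=(0.0, 360.0))
    dominant = int(np.argmax(hist))

    # Matches outside the dominant bin are treated as outliers and removed.
    keep = (angles >= edges[dominant]) & (angles < edges[dominant + 1])
    return pts_prev[keep], pts_curr[keep]
```

The surviving matches can then be used to estimate the object's displacement and, given the frame rate, its speed.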

Table of Contents:
    Chinese Abstract
    Abstract
    Acknowledgements
    Contents
    Figure List
    Table List
    Chapter 1 Introduction
        1.1 Background
        1.2 Motivation
        1.3 Organization
    Chapter 2 Related Work
        2.1 Pre-processing
            2.1.1 Image Smoothing
            2.1.2 Image Enhancement
        2.2 Foreground Detection
        2.3 Human Detection
    Chapter 3 Foreground Detection
        3.1 Background Reconstruction and Update
            3.1.1 Single Gaussian Background Model
            3.1.2 Gaussian Mixture Model
            3.1.3 Expectation Maximization (EM)
            3.1.4 GMM Reconstruction and Update
        3.2 Histogram of Oriented Gradients (HOG)
            3.2.1 Object Feature Extraction Based on HOG
            3.2.2 Object Detection Based on SVM
            3.2.3 Database of Training Data
        3.3 Object Labeling
    Chapter 4 Feature Extraction
        4.1 Scale-Invariant Feature Transform
            4.1.1 Scale-Space Extrema Detection
            4.1.2 Keypoint Localization
            4.1.3 Orientation Assignment
            4.1.4 Keypoint Descriptor
            4.1.5 Keypoint Matching
        4.2 SIFT Optimization
            4.2.1 Image Sharpening
            4.2.2 LoG-SIFT
        4.3 Histogram-Based Outlier Removal
        4.4 Outlier Removal in a Fixed Time Frame
    Chapter 5 Experimental Results
        5.1 Background Reconstruction
        5.2 Denoising the Foreground Image
        5.3 HOG for Human Detection
        5.4 Improving SIFT
            5.4.1 Using Image Sharpening
            5.4.2 Using LoG
        5.5 Comparison of SIFT and LoG-SIFT
            5.5.1 Moving Image
            5.5.2 Static Image
        5.6 Speed Calculation
    Chapter 6 Conclusions and Future Work
        6.1 Conclusions
        6.2 Future Work
    References

