
Author: Shih-Shan Lin (林士山)
Thesis Title: Integrating Object Data and SAD Method to Design a Motion Detection and Tracking System (結合物件資訊與SAD技術設計之移動物體偵測與追蹤系統)
Advisor: Ying-Kuei Yang (楊英魁)
Committee Members: Bi-Huang Li (黎碧煌), Zong-Ying Sun (孫宗瀛), Jian-Nan Li (李建南)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2009
Graduation Academic Year: 97 (ROC calendar)
Language: Chinese
Pages: 82
Keywords (Chinese): 背景相減法、影像處理、SAD、鏈碼、目標物追蹤
Keywords (English): Moving Object Tracking, Background Subtraction Method, Image Processing, SAD, Chain Code
Hits: 246 / Downloads: 2
For visual tracking systems, detecting and tracking moving objects in a scene both accurately and quickly has long been an important research topic. This study designs a moving object detection and tracking system based on object information and the SAD (sum of absolute differences) technique, implemented on a TI DM6437 digital signal processor. For moving object detection, the background subtraction method is widely used because of its low computational cost and good detection results. However, when the captured scene is disturbed by lighting changes or noise, or when the object's color is similar to the background, the method cannot detect the object effectively; morphological filtering can remove noise and fill holes in the detected object, but it deforms the object's shape and is computationally expensive. This thesis therefore proposes an improved contour extraction and hole-filling algorithm that obtains the object's contour and labeling information and fills its holes without destroying the contour, yielding more information at low computational complexity. Moreover, in real environments there is usually more than one target to track. For multi-object tracking, when objects intersect, the proposed method first computes each object's parameters and uses them to determine which object is in front and which is behind; the objects' boundary coordinates are then used to narrow the search region required for tracking. This effectively reduces computational complexity and allows the positions and contours of intersecting objects to be tracked, and the experimental results show that the method performs well.
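The background subtraction step described above can be sketched as follows. This is a minimal illustration under assumed conditions: grayscale frames stored as NumPy arrays, a hypothetical `detect_foreground` helper, and an arbitrary threshold of 30. It is not the thesis's DM6437 implementation and omits the proposed contour extraction and hole filling.

```python
import numpy as np

def detect_foreground(frame, background, threshold=30):
    """Mark pixels whose absolute difference from the background model
    exceeds a fixed threshold as foreground (1); all others are 0."""
    # Widen to a signed type first so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy 5x5 grayscale scene: a bright 2x2 "object" enters a flat background.
background = np.full((5, 5), 50, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # the moving object region

mask = detect_foreground(frame, background)
print(mask.sum())              # 4 foreground pixels
```

As the abstract notes, a fixed threshold like this fails when the object's gray level is close to the background's, which is the gap the thesis's contour-based refinement is meant to close.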


How to detect and track moving objects in video streams is an important and challenging research problem in visual tracking applications. Among moving object detection algorithms, the background subtraction method has been widely used because of its low computational load and high detection quality. Nevertheless, interference from lighting and noise during capture, or an object whose color is similar to the background, prevents this method from detecting the object effectively, and compensating with morphological filtering carries high computational complexity. In view of this, we develop a method that obtains the object's contour and connected-component labeling information and fills the holes of the object without destroying its contour, so that more information is obtained at low computational complexity. In addition, in realistic environments the target is usually not a single object. For multi-object tracking, when objects intersect we propose a judgment method: the parameters of each object are computed first and used to differentiate the front and back positions of the objects; the boundary coordinates of the objects are then used to narrow the search area during tracking. This effectively reduces computational complexity and recovers each object's position and contour during intersection, and the experimental results confirm that the method performs well.
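The SAD matching and search-region narrowing described above can be sketched as follows. This is an illustrative sketch, not the thesis's algorithm: the names `sad` and `track_sad` are hypothetical, and the `search_box` stands in for the region derived from the object's boundary coordinates.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-size patches."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def track_sad(frame, template, search_box):
    """Slide the template over a restricted search box (x0, y0, x1, y1)
    and return the top-left position with the minimum SAD score."""
    x0, y0, x1, y1 = search_box
    th, tw = template.shape
    best_score, best_pos = None, None
    for y in range(y0, y1 - th + 1):
        for x in range(x0, x1 - tw + 1):
            score = sad(frame[y:y + th, x:x + tw], template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

# Toy example: a 3x3 template relocated inside a 12x12 frame.
frame = np.zeros((12, 12), dtype=np.uint8)
template = np.arange(9, dtype=np.uint8).reshape(3, 3) + 100
frame[5:8, 6:9] = template     # object now at (x=6, y=5)

pos, score = track_sad(frame, template, search_box=(4, 3, 11, 10))
print(pos, score)              # (6, 5) 0
```

Restricting the loop to `search_box` instead of the whole frame is what cuts the per-frame cost; an exact match inside the box yields a SAD score of zero.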

Chapter 1: Introduction
  1.1 Preface
  1.2 Research Motivation and Objectives
  1.3 Thesis Organization
Chapter 2: Literature Review
  2.1 Image Preprocessing
    2.1.1 Smoothing Method
    2.1.2 Median Filtering
  2.2 Moving Object Detection
    2.2.1 Optical Flow Method
    2.2.2 Motion Energy Method
    2.2.3 Background Subtraction Method
  2.3 Object Region Repair
  2.4 Object Tracking
    2.4.1 Similarity Measurement
    2.4.2 Search Strategy
Chapter 3: Real-Time Multi-Object Moving Object Detection and Tracking System
  3.1 Hardware and Specifications
  3.2 System Flow and Method Description
    3.2.1 Image Capture
    3.2.2 Moving Object Detection Procedure
    3.2.3 Feature Extraction and Feature Data Construction Procedure
    3.2.4 Object Tracking Procedure and Image Output
Chapter 4: System Implementation Results
Chapter 5: Conclusions
  5.1 Research Results
  5.2 Future Directions
References

