
Author: Yu-Chao Lin (林裕超)
Thesis Title: Image Monitoring for Object Tracking and Measurement System by Stereo Vision (遠端影像監控之立體視覺目標物追蹤與量測系統)
Advisor: Chau-Ren Tsai (蔡超人)
Committee Members: Shun-Feng Su (蘇順豐), Jing-ming Guo (郭景明), Nai-Jian Wang (王乃堅)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2011
Graduation Academic Year: 99
Language: Chinese
Number of Pages: 157
Keywords (Chinese): Binocular Vision, Stereo Coordinates, Robot Vision, Object Tracking (雙眼視覺、立體座標、機器人視覺、目標物追蹤)
Keywords (English): Binocular Vision, Object Coordinate, Machine Vision, Object Tracking
    In recent years, with the progress of image processing techniques and digital signal processors, image processing is no longer limited to handling single images. Stereo vision uses two cameras to imitate human vision and computes the three-dimensional coordinates of a target (vertical, horizontal, and depth). This thesis therefore combines a Texas Instruments TMS320DM642 digital signal processor (DSP) module with two PTZ cameras to realize a stereo vision architecture for tracking and measuring a target. After the system computes the target's three-dimensional coordinates, it uses that information to control a third PTZ camera so that a laser pointer aims at the target, and then corrects the laser pointing. The third PTZ camera serves as the rotation device for the laser pointer, which is mounted on top of it. Finally, the system is combined with a network transmission architecture in which the DSP platform acts as the server and a remote PC acts as the client, so that a user can operate the system remotely over the network, realizing a stereo vision object tracking and measurement system with remote monitoring capability.


    With the advances of image processing and digital signal processors, image processing technology is no longer limited to dealing with single images. Binocular stereo vision uses two cameras to simulate human vision and can calculate the three-dimensional coordinates (vertical, horizontal, and depth) of a target. Accordingly, we combine a TMS320DM642 evaluation module with two PTZ cameras to build a stereo vision system that tracks and measures the target. When the system has calculated the target's three-dimensional coordinates, it controls a third PTZ camera to turn toward the target and correct the laser pointing. This third PTZ camera serves as the rotating device for a laser pointer mounted on top of it. Finally, the system is connected to the Internet with the DSP platform as the server side and a remote PC as the client side, so that a user can control the system remotely, achieving an image monitoring system for object tracking and measurement by stereo vision.
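    The abstracts above describe recovering a target's vertical, horizontal, and depth coordinates from the two cameras; the thesis's own derivation (Sections 5.3.2 to 5.3.4 of the outline below) is not reproduced on this record page. As a rough illustration only, the C sketch that follows triangulates a point under the standard rectified, parallel-axis binocular model, in which depth follows from disparity as Z = f*B/d. The function name triangulate, the symbol names, and the sample numbers are assumptions made for this sketch, not the thesis's formulation.

```c
/* Minimal sketch of depth-from-disparity triangulation for a rectified
 * binocular rig; the camera model and all symbol names are assumptions,
 * not the thesis's derivation. */
#include <stdio.h>

typedef struct { double X, Y, Z; } Point3D;

/* Triangulate a target seen at column uL in the left image and uR in the
 * right image (same row v after rectification), given focal length f_px
 * in pixels, baseline B in meters, and principal point (cx, cy). */
static Point3D triangulate(double uL, double uR, double v,
                           double f_px, double B, double cx, double cy)
{
    Point3D p = { 0.0, 0.0, 0.0 };
    double d = uL - uR;               /* disparity in pixels               */
    if (d <= 0.0)
        return p;                     /* invalid or infinitely far target  */
    p.Z = f_px * B / d;               /* depth                             */
    p.X = (uL - cx) * p.Z / f_px;     /* horizontal offset                 */
    p.Y = (v - cy) * p.Z / f_px;      /* vertical offset                   */
    return p;
}

int main(void)
{
    /* Example numbers for illustration only: 800 px focal length,
     * 0.20 m baseline, 40 px disparity. */
    Point3D p = triangulate(400.0, 360.0, 250.0, 800.0, 0.20, 320.0, 240.0);
    printf("X = %.3f m, Y = %.3f m, Z = %.3f m\n", p.X, p.Y, p.Z);
    return 0;
}
```

    With the sample values (800 px focal length, 0.20 m baseline, 40 px disparity) the sketch reports a depth of 4.0 m; an (X, Y, Z) estimate of this kind is what the abstract says is passed on to steer the third PTZ camera toward the target.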

    Chapter 1  Introduction
      1.1  Research Motivation and Objectives
      1.2  Research Methods
      1.3  Thesis Organization
    Chapter 2  System Architecture
      2.2  Object Detection Procedure
      2.3  Object Tracking Procedure
      2.4  Stereo-Coordinate Tracking Mode and Laser-Pointing Tracking Mode
      2.5  Image Compression and Network Transmission Procedure
      2.6  Hardware Specifications and Configuration
    Chapter 3  Object Detection Procedure
      3.1  Head Target Detection Procedure
        3.1.1  Consecutive Frame Differencing
        3.1.2  Image Preprocessing
        3.1.3  Moving-Head Prediction
        3.1.4  Moving-Edge Detection
        3.1.5  Ellipse Detection
        3.1.6  SAD Template Matching
        3.1.7  Detecting the Head Target
      3.2  Ball Target Detection Procedure
        3.2.1  Color Extraction
        3.2.2  Color Filtering
        3.2.3  Connected-Component Labeling
        3.2.4  Detecting the Ball Target
    Chapter 4  Object Tracking Procedure
      4.1  Head Target Tracking Procedure
        4.1.1  Predicted Position and Dynamic Search Range
        4.1.2  Hybrid Tracking Algorithm
        4.1.3  Tracking the Head Target
      4.2  Ball Target Tracking Procedure
        4.2.1  Dynamic Search Range
        4.2.2  Color Tracking Algorithm
        4.2.3  Tracking the Ball Target
    Chapter 5  Stereo-Coordinate Tracking Mode and Laser-Pointing Tracking Mode
      5.1  Dual-Camera Setup
      5.2  Camera Image Calibration
      5.3  Stereo-Coordinate Tracking Mode
        5.3.1  Characteristics of Binocular Vision
        5.3.2  Depth Measurement with Binocular Vision
        5.3.3  Imaging Principle and Focal-Length Calculation
        5.3.4  Three-Dimensional Coordinate Measurement
        5.3.5  Tracking and Measuring the Target
      5.4  Laser-Pointing Tracking Mode
        5.4.1  Camera Switching Control
        5.4.2  Principle of Laser-Pointing Correction
        5.4.3  Head Laser-Pointing Tracking Flow
        5.4.4  Image Downsampling Using EDMA
        5.4.5  Ball Laser-Pointing Tracking Flow
    Chapter 6  Image Compression and Network Transmission
      6.1  Image Compression
      6.2  Network Transmission
      6.3  Remote User Interface
      6.4  Network Transmission Procedure
    Chapter 7  System Implementation
      7.1  Software Architecture
      7.2  System Procedure
      7.3  System Performance Tests
      7.4  Measurement Error
    Chapter 8  Conclusions
      8.1  Research Results
      8.2  Future Directions
    References
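    Among the detection steps, the outline lists SAD template matching (Section 3.1.6). The thesis text itself is not included in this record, so the routine below is only a generic sketch of the technique: it slides a template over an 8-bit grayscale frame and keeps the placement with the smallest sum of absolute differences. The function name sad_match, the row-major image layout, and the early-exit check are assumptions for this sketch, not the thesis's implementation.

```c
/* Generic SAD (sum of absolute differences) template search over an
 * 8-bit grayscale image stored row-major; an illustrative sketch only,
 * not the thesis's implementation. */
#include <stdio.h>
#include <stdlib.h>

/* Scan every placement of a tw-by-th template inside a w-by-h image and
 * return the top-left corner of the lowest-SAD placement via best_x/best_y. */
static void sad_match(const unsigned char *img, int w, int h,
                      const unsigned char *tpl, int tw, int th,
                      int *best_x, int *best_y)
{
    unsigned long best = (unsigned long)-1;   /* start at ULONG_MAX */
    for (int y = 0; y + th <= h; ++y) {
        for (int x = 0; x + tw <= w; ++x) {
            unsigned long sad = 0;
            /* Abandon this placement early once it is already worse
             * than the best score seen so far. */
            for (int j = 0; j < th && sad < best; ++j)
                for (int i = 0; i < tw; ++i)
                    sad += (unsigned long)abs((int)img[(y + j) * w + (x + i)]
                                              - (int)tpl[j * tw + i]);
            if (sad < best) {
                best = sad;
                *best_x = x;
                *best_y = y;
            }
        }
    }
}

int main(void)
{
    /* Tiny 4x4 test frame with a bright 2x2 patch at (2, 1). */
    const unsigned char img[16] = {
        10, 10,  10,  10,
        10, 10, 200, 200,
        10, 10, 200, 200,
        10, 10,  10,  10
    };
    const unsigned char tpl[4] = { 200, 200, 200, 200 };
    int x = 0, y = 0;

    sad_match(img, 4, 4, tpl, 2, 2, &x, &y);
    printf("best match at (%d, %d)\n", x, y);   /* expected: (2, 1) */
    return 0;
}
```

    A caller would pass the current frame together with a stored head template and read back (best_x, best_y) as the candidate head position; the early exit inside the row loop abandons a placement as soon as its partial SAD already exceeds the best score found so far.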

    [1] T. L. Hwang and J. J. Clark, “On Local Detection of Moving Edge,” Proceeding of IEEE International Conference on Pattern Recognition, Vol. 1, pp. 180-184, 1990.
    [2] S. Yalamanchili, W. N. Martin and J. K. Aggarwal, “Extraction of Moving Object Description Via Differencing,” Computer Graphics and Image Processing, Vol. 18, pp. 188-201, 1982.
    [3] S. Birchfield, “An Elliptical Head Tracker,” Proceeding of the 31st Asilomar Conference on Signals, Systems and Computers, pp. 1710-1714, 1997.
    [4] T. K. Kuo and L. C. Fu, “Zoom-Based Head Tracker in Complex Environment,” Proceeding of IEEE International Conference on Control Applications, Vol. 2, pp. 725-730, 2002.
    [5] R. Brunelli and T. Poggio, “Template Matching: Matched Spatial Filters and Beyond,” Pattern Recognition, Vol. 30, No. 5, pp. 751-768, 1997.
    [6] M. S. Lew, N. Sebe and T. S. Huang, “Improving Visual Matching,” Proceeding of IEEE International Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 58-65, 2000.
    [7] C. R. Wren, A. Azarbayejani, T. Darrell and A. Pentland, “Pfinder: Real-Time Tracking of the Human Body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 780-785, 1997.
    [8] I. Haritaoglu, D. Harwood and L. S. Davis, “Hydra: Multiple People Detection and Tracking Using Silhouettes,” Proceeding of the Second IEEE Workshop on Visual Surveillance, pp. 6-13, 1999.
    [9] B. K. P. Horn and B. G. Schunck, “Determining Optical Flow,” Artificial Intelligence, Vol. 17, No. 1-3, pp. 185-203, 1981.
    [10] R. A. Boie and I. J. Cox, “An Analysis of Camera Noise,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 6, pp. 671-674, 1992.
    [11] D. Hearn and M. P. Baker, Computer Graphics, 2nd Edition, Prentice Hall, New York, pp. 49-81, 1994.
    [12] J. F. Canny, “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, pp. 679-698, 1986.
    [13] R. J. Qian and T. S. Huang, “Optimal Edge Detection in Two-Dimensional Images,” IEEE Transactions on Image Processing, Vol. 5, No. 7, pp. 1215-1220, 1996.
    [14] P. J. Burt and E. H. Adelson, “The Laplacian Pyramid as a Compact Image Code,” IEEE Transactions on Communications, Vol. 31, No. 4, pp. 532-540, 1983.
    [15] N. Kanopoulos, N. Vasanthavada and R. L. Baker, “Design of an Image Edge Detection Filter Using the Sobel Operator,” IEEE Journal of Solid-State Circuits, Vol. 23, No. 2, pp. 358-367, 1988.
    [16] D. M. Tsai and C. T. Lin, “Fast Normalized Cross Correlation for Defect Detection,” Pattern Recognition Letters, Vol. 24, pp. 2625-2631, 2003.
    [17] D. Chai and K. N. Ngan, “Face Segmentation Using Skin-Color Map in Videophone Applications,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 9, No. 4, pp. 551-564, 1999.
    [18] P. Y. Chen, C. M. Huang and L. C. Fu, “A Robust Visual Servo System for Tracking an Arbitrary-Shaped Object by a New Active Contour Method,” Proceeding of the 2004 American Control Conference, Vol. 2, pp. 1516-1521, 2004.
    [19] H. Chu, S. Ye and Q. Guo, “Object Tracking Algorithm Based on CAMShift Algorithm Combining with Difference in Frame,” Proceeding of IEEE International Conference on Automation and Logistics, pp. 51-55, 2007.
    [20] L. Sun and B. Wang, “Real-Time Non-Rigid Object Tracking Using CAMShift with Weighted Back Projection,” Proceeding of IEEE International Conference on Computational Science and Its Applications, pp. 86-91, 2010.
    [21] X. Liu, H. Chu and P. Li, “Research of the Improved CAMShift Tracking Algorithm,” Proceeding of IEEE International Conference on Mechatronics and Automation, pp. 968-972, 2007.
    [22] S. Jin and J. Cho, “FPGA Design and Implementation of a Real-Time Stereo Vision System,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20, pp. 15-26, 2010.
    [23] B. Jahne and P. Geissler, “Depth from Focus with One Image,” Proceeding of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 713-717, 1994.
    [24] M. Z. Brown, D. Burschka, and G. D. Hager, “Advances in Computational Stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, pp. 993-1008, 2003.
    [25] S. Birchfield and C. Tomasi, “Depth Discontinuities by Pixel-to-Pixel Stereo,” Proceeding of Sixth International Conference on Computer Vision, pp. 1073-1080, 1998.
    [26] Texas Instruments Inc., TVP5150PBS Ultralow-Power NTSC/PAL Video Decoder, 2006.
    [27] Texas Instruments Inc., JPEG Network on The DM642 EVM, 2006.
    [28] 葉佳翔, “立體視覺之目標物追蹤與量測系統 (Object Tracking and Measurement System by Stereo Vision),” Master’s thesis, Department of Electrical Engineering, National Taiwan University of Science and Technology, 2010.
