
Graduate Student: Chin-sung Lo (羅金松)
Thesis Title: A High-Definition Panoramic Visual Tracking System for Human Faces Using the Fusion of Omnidirectional and PTZ Cameras (一個結合全方位與PTZ雙攝影機的高畫質全景人臉視覺追蹤系統)
Advisor: Chin-Shyurng Fahn (范欽雄)
Committee Members: Kuo-Chin Fan (范國清), Chiou-Shann Fuh (傅楸善), Jung-Hua Wang (王榮華), Chyi-Yeu Lin (林其禹)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2007
Graduation Academic Year: 95 (ROC calendar; 2006/2007)
Language: English
Pages: 68
Chinese Keywords: 人臉追蹤、時間差異法、膚色篩選器、粒子濾除器、全方位攝影機、PTZ攝影機
English Keywords: face tracking, temporal differencing, skin color filtering, particle filter, omnidirectional camera, PTZ camera
  • Omnidirectional cameras are widely used in video surveillance and robot vision because they provide 360-degree scene information, but they have an obvious drawback: they produce only low-resolution images, so objects far from the camera cannot be correctly identified. To overcome this problem, we propose a cooperative visual tracking system that combines an omnidirectional camera with a PTZ camera. The system detects and tracks human faces in the panoramic images received from the omnidirectional camera and directs the PTZ camera to fixate on a selected face, thereby obtaining a high-resolution face image. First, the face detection procedure extracts moving faces using temporal differencing and skin color filtering. The detected faces are then passed to the face tracking procedure, which employs a particle filter to meet real-time tracking requirements. Once a target face is selected, the PTZ camera is rapidly directed to fixate on and zoom in on the target. The face tracking procedure then switches to the images received from the PTZ camera and continues tracking until the target face leaves the surveillance area, at which point tracking switches back to the panoramic images and restarts, forming a closed-loop system in which the two cameras track faces cooperatively. Experimental results show that the face tracking accuracy of our method exceeds 95% under normal conditions and 82% when PTZ tracking is active. The overall system runs at 20 frames per second without PTZ tracking and still achieves 5 frames per second with PTZ tracking enabled. The system developed in this thesis is highly useful for human-machine interfaces and video surveillance.


    Omnidirectional cameras, which provide a 360-degree field of view, are widely used in video surveillance and robot vision applications. However, they have an obvious drawback: they produce only low-resolution images, so objects far from the camera cannot be correctly identified. To overcome this problem, we propose a closed-loop face tracking system that fuses omnidirectional and PTZ cameras. Our system first detects and tracks human faces in the panoramic images received from an omnidirectional camera, and then controls the PTZ camera to fixate on a selected face to acquire a high-resolution image. Initially, the face detection procedure obtains moving human faces by means of temporal differencing and skin color filtering. The detected faces are subsequently passed to the face tracking procedure, which employs a particle filter to track faces in real time. Once a target face is selected, the PTZ camera is rapidly directed to fixate on and zoom in on the target. The tracking procedure then uses the images received from the PTZ camera to track the target face continuously until it is lost, after which the procedure switches back to the panoramic images, closing the tracking loop. The experimental results show that the face tracking rate is more than 95% in normal environments and 82% when PTZ camera tracking is enabled. The overall system reaches 25 frames per second without PTZ tracking and 5 frames per second with PTZ tracking. This system is quite useful for human-computer interaction and video surveillance tasks.
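The detection stage described above combines temporal differencing with skin color filtering. A minimal NumPy sketch of that idea follows; it assumes grayscale frames for differencing and YCbCr frames for the skin test, and the difference threshold and Cb/Cr skin bounds are commonly cited illustrative values, not the thesis's actual parameters:

```python
import numpy as np

def temporal_difference_mask(prev_gray, curr_gray, threshold=25):
    """Motion mask: pixels whose gray level changed by more than `threshold`
    between consecutive frames (threshold is an assumed value)."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > threshold

def skin_color_mask(ycbcr, cb_range=(77, 127), cr_range=(133, 173)):
    """Skin mask in YCbCr using commonly cited Cb/Cr bounds (assumed values)."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def detect_face_candidates(prev_gray, curr_gray, curr_ycbcr):
    """Candidate face pixels are those that are both moving and skin-colored."""
    return temporal_difference_mask(prev_gray, curr_gray) & skin_color_mask(curr_ycbcr)
```

Connected regions of the resulting mask would then be grouped into face candidates for the tracking stage.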

    CHINESE ABSTRACT
    ABSTRACT
    CONTENTS
    LIST OF FIGURES
    LIST OF TABLES
    Chapter 1 INTRODUCTION
      1.1 Overview
      1.2 Background
      1.3 Motivation
      1.4 System Description
      1.5 Thesis Organization
    Chapter 2 RELATED WORKS
      2.1 Reviews of Face Detection and Tracking
      2.2 Reviews of Omnidirectional Camera Approaches
      2.3 Reviews of Omnidirectional and PTZ Camera Approaches
    Chapter 3 OMNIDIRECTIONAL AND PTZ CAMERAS
      3.1 Omnidirectional Cameras
      3.2 PTZ Cameras
    Chapter 4 CAMERA CALIBRATION
      4.1 Omnidirectional Camera Calibration
        4.1.1 Effective image region
        4.1.2 Image transformation
      4.2 PTZ Camera Calibration
    Chapter 5 FACE DETECTION AND TRACKING
      5.1 Face Detection
        5.1.1 Temporal differencing
        5.1.2 Skin and hair color filtering
      5.2 Face Tracking
        5.2.1 The particle filter
        5.2.2 Our proposed model
      5.3 Closed-loop Face Tracking
    Chapter 6 EXPERIMENTAL RESULTS
      6.1 The Results of Distance Estimation
      6.2 The Results of Face Detection
      6.3 The Results of Face Tracking
    Chapter 7 CONCLUSIONS AND FUTURE WORK
    REFERENCES
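The tracking stage (Chapter 5) relies on a particle filter. The thesis's state model and observation likelihood are not given here, so the following is a generic SIR (sample-importance-resample) sketch in Python, assuming a 2-D face position state, a random-walk motion model, and a Gaussian position likelihood; all of these modeling choices and noise parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=5.0, meas_std=10.0):
    """One predict-weight-resample cycle for tracking a 2-D face position.
    `motion_std` and `meas_std` are assumed noise parameters."""
    # Predict: propagate each particle with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight particles by a Gaussian likelihood of the measurement.
    sq_dist = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-sq_dist / (2.0 * meas_std ** 2))
    weights /= weights.sum()
    # Resample: draw a new particle set proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def estimate(particles, weights):
    """Weighted-mean state estimate (the tracked face position)."""
    return np.average(particles, axis=0, weights=weights)
```

In a real tracker the Gaussian position likelihood would be replaced by an image-based one (e.g. a skin-color or appearance score at each particle's location), which is what makes the filter robust to clutter.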


    Full text release date: 2012/07/23 (campus network)
    Full text release date: not authorized for public release (off-campus network)
    Full text release date: not authorized for public release (National Central Library: Taiwan NDLTD system)