
Graduate Student: Chun-Tsai Yeh (葉俊材)
Thesis Title: A Study of Gazing Estimation Using Active Appearance Model (利用主動外觀模型估計眼睛注視方向)
Advisor: Yi-Leh Wu (吳怡樂)
Committee Members: Maw-Kae Hor (何瑁鎧), Cheng-Yuan Tang (唐政元), Wei-Chung Teng (鄧惟中)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2011
Graduation Academic Year: 99 (2010-2011)
Language: English
Pages: 42
Chinese Keywords (translated): eye gaze detection, human-computer interaction, Active Appearance Model, Support Vector Machine
English Keywords: eye gaze detection, human-computer interaction, Active Appearance Models, Support Vector Machine

Chinese Abstract (translated):
In recent years, human-computer interaction has become a highly active research topic, with body movements, gestures, and eye gaze direction serving as the main interaction cues; among these, gaze-direction estimation is still at a developing stage. In this thesis, we propose an efficient approach to the eye-gaze estimation problem: we first locate the eye region, then fit eye features with an Active Appearance Model (AAM), and finally classify the gaze into five directions with a Support Vector Machine (SVM).
AAM has been widely used in many recognition domains. It builds an initial model from the shapes and textures of a collection of images and uses that model for image recognition and correspondence. Because the feature points of the target images are defined manually in advance, recognition accuracy for that class of images is relatively high.
With this method, we determine the gaze direction from the information the eyes provide. We reduce the AAM's 68 facial feature points to 36 eye feature points, and use their two-dimensional coordinates as the SVM input for classifying each direction; the feature points cover the eye contour, iris size, iris position, and pupil position. Moreover, the method determines the gaze direction accurately even with today's lower-resolution cameras.
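The pipeline above feeds the two-dimensional coordinates of 36 eye feature points to the classifier. A minimal sketch of assembling that input vector is below; the centroid-and-scale normalization shown is an illustrative assumption, not necessarily the normalization scheme used in the thesis.

```python
import numpy as np

def landmarks_to_feature(points):
    """Flatten 36 (x, y) eye landmarks into a 72-dim classifier input.

    The normalization (translate to the centroid, divide by the RMS
    distance) is an assumed choice for illustration only.
    """
    pts = np.array(points, dtype=float)          # shape (36, 2), copied
    pts -= pts.mean(axis=0)                      # remove translation
    scale = np.sqrt((pts ** 2).sum(axis=1).mean())
    if scale > 0:
        pts /= scale                             # remove overall scale
    return pts.ravel()                           # 72-dim feature vector

# Example with 36 placeholder landmark points
feat = landmarks_to_feature(np.random.rand(36, 2))
print(feat.shape)  # (72,)
```

Normalizing away translation and scale makes the feature vector less sensitive to where the face sits in the frame and to camera distance, which matters at the low resolutions the abstract mentions.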


Abstract:
In recent years, human-computer interaction has become an increasingly popular research area. Most methods use body movements, gestures, and eye gaze direction as the basis for interaction; gaze estimation in particular remains an active research domain. In this paper, we present an efficient way to solve the eye gaze point problem. We locate the eye region by modifying the characteristics of the Active Appearance Model (AAM). Then, by employing a Support Vector Machine (SVM), we estimate five gazing directions through classification.
The original 68 facial feature points in the AAM are reduced to 36 eye feature points. According to the two-dimensional coordinates of these feature points, we classify the different directions of eye gazing. The 36 feature points describe the eye contour, iris size, iris location, and pupil position. In addition, camera resolution does not prevent our method from determining the direction of the line of sight accurately. The final results show the independence of the classifications, the classification error, and accurate estimation of the gazing directions.
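The five-way classification step could be sketched as follows with scikit-learn's `SVC` (which is built on LIBSVM). The direction labels and the random training data are placeholders; real samples would be the 72-dimensional vectors obtained from the fitted eye feature points.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: each sample is a 72-dim vector
# (36 eye landmarks, x and y flattened); labels index the five
# gaze directions. Real data would come from the AAM fit.
rng = np.random.default_rng(0)
DIRECTIONS = ["center", "up", "down", "left", "right"]  # assumed label set
X = rng.normal(size=(200, 72))
y = rng.integers(0, 5, size=200)

clf = SVC(kernel="rbf")   # RBF kernel; SVC wraps LIBSVM internally
clf.fit(X, y)

# Predict the gaze direction for a new (here, synthetic) sample
sample = rng.normal(size=(1, 72))
pred = clf.predict(sample)[0]
print(DIRECTIONS[pred])
```

For a multi-class problem like this, LIBSVM-based classifiers train one binary SVM per pair of classes and vote, so no extra work is needed to go from two classes to five.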

Contents:
論文摘要 (Chinese Abstract)
Abstract
Contents
List of Figures
List of Tables
1. Introduction
2. Related Work
3. Comparison and Analysis
  3.1 System Framework
    3.1.1 System Architecture
    3.1.2 The Concept of AAM
    3.1.3 Facial Features
    3.1.4 AAM Fitting Process
    3.1.5 The Support Vector Machine
  3.2 Using Facial Feature Points
  3.3 Using Revised 16 Feature Points from the Eye Region
    3.3.1 Eye Region Extraction
    3.3.2 Eye Feature
  3.4 Revised Methods for SVM Prediction
    3.4.1 Revised Classification
    3.4.2 Increase Feature Points from the Eye Region
    3.4.3 Comparison of SVM Prediction Using Revised Methods
  3.5 AAM Library to AAM Tool
  3.6 Using 36 Feature Points with More Precise Iris and Nose Contours
  3.7 Normalized Feature Points
4. Experiments
  4.1 Experimental Environment
  4.2 Collected Datasets
  4.3 Result
5. Conclusion and Future Work
References

