
Student: Shang-Che Huang (黃上哲)
Thesis Title: Point-of-regard Estimation via Iris Contour in Single Eye (利用單眼中的虹膜輪廓估算注視點)
Advisor: Yi-Leh Wu (吳怡樂)
Committee Members: Maw-kae Hor (何瑁鎧), Cheng-Yuan Tang (唐政元), Jiann-Jone Chen (陳建中)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2010
Graduation Academic Year: 98 (2009-2010)
Language: English
Number of Pages: 34
Keywords (Chinese): gaze detection, point-of-regard, human-computer interaction
Keywords (English): point-of-regard, iris contour fitting, 3D locations of circle/ellipse
  • Eye gaze tracking is a technique widely applied in human-computer interaction.
    Once the direction of the user's gaze is detected, the intersection of the gaze
    line with the target plane gives the point of regard (the point on the plane
    that the user is looking at). A distinctive assumption of this method is that
    the iris contour is treated as an ellipse rather than as a circle. In addition,
    our method requires only a single eye during processing. Through image
    processing techniques and the geometric properties of circles and ellipses in
    3D space, we can determine a unique gaze line. In this thesis we show that with
    low-resolution images the accuracy of the results is lower than with
    high-resolution ones. Moreover, overly prominent eyelid edges also introduce
    errors into the results. Nevertheless, under the influence of both resolution
    and eyelids, we can still robustly separate all of the computed points of
    regard into four regions.
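    The point-of-regard step described above reduces to a line-plane intersection.
    A minimal sketch, not the thesis's implementation: the function name, the
    coordinate frame, and the 60 cm eye-to-screen distance below are illustrative
    assumptions.

    ```python
    import numpy as np

    def point_of_regard(eye_center, gaze_dir, plane_point, plane_normal):
        """Intersect the gaze line with the target plane.

        The gaze line starts at eye_center and runs along gaze_dir; the plane
        is given by any point on it and its normal vector.
        """
        eye_center = np.asarray(eye_center, dtype=float)
        gaze_dir = np.asarray(gaze_dir, dtype=float)
        plane_point = np.asarray(plane_point, dtype=float)
        plane_normal = np.asarray(plane_normal, dtype=float)

        denom = gaze_dir @ plane_normal
        if abs(denom) < 1e-12:
            raise ValueError("gaze line is parallel to the target plane")
        # Solve (eye_center + t * gaze_dir - plane_point) . normal = 0 for t.
        t = ((plane_point - eye_center) @ plane_normal) / denom
        return eye_center + t * gaze_dir

    # Eye 60 cm in front of a screen lying in the z = 0 plane,
    # looking straight at the screen center:
    por = point_of_regard([0.0, 0.0, 60.0], [0.0, 0.0, -1.0],
                          [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
    print(por)  # [0. 0. 0.]
    ```

    The same routine handles oblique gazes; only the direction vector changes.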


    Eye gaze tracking is a technique commonly used in human-computer
    interaction. Once the eye gaze is determined, the point of regard can be
    estimated by intersecting the gaze line with the target plane.
    A distinctive assumption of our approach is that iris contours are modeled
    as ellipses rather than circles. In addition, the approach requires only one
    eye. The unique eye gaze can be computed via image processing techniques and
    the geometric properties of circles and ellipses in 3D space.
    In this thesis we show that the accuracy of the results with low-resolution
    images is lower than with high-resolution ones. Furthermore, conspicuous
    eyelid edges may degrade the results. Nevertheless, despite the influence of
    both resolution and eyelids, the computed points of regard can still be
    robustly separated into four distinct clusters.
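    The iris-contour-fitting step treats the detected iris edge points as samples
    of an ellipse. As a sketch of the idea only (not the thesis's actual fitting
    procedure), a general conic can be fit by least squares and checked for the
    ellipse condition; the synthetic contour parameters and function names below
    are assumptions for illustration.

    ```python
    import numpy as np

    def fit_iris_conic(x, y):
        """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
        to 2D edge points by least squares: the smallest right singular
        vector of the design matrix minimizes ||D @ coeffs|| with
        ||coeffs|| = 1."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]  # coefficients (a, b, c, d, e, f)

    def conic_center(coeffs):
        """Center of the conic: solve the gradient equations
        2a*xc + b*yc = -d and b*xc + 2c*yc = -e."""
        a, b, c, d, e, _ = coeffs
        return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

    # Synthetic "iris contour": an ellipse centered at (2, 1) with
    # semi-axes 3 and 1.5, rotated by 0.3 rad.
    t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    phi = 0.3
    x = 2 + 3 * np.cos(t) * np.cos(phi) - 1.5 * np.sin(t) * np.sin(phi)
    y = 1 + 3 * np.cos(t) * np.sin(phi) + 1.5 * np.sin(t) * np.cos(phi)

    coeffs = fit_iris_conic(x, y)
    a, b, c = coeffs[:3]
    print(b * b - 4 * a * c < 0)  # True: the fitted conic is an ellipse
    print(conic_center(coeffs))   # approximately [2. 1.]
    ```

    The ellipse test b^2 - 4ac < 0 is the natural gate before interpreting the
    fitted contour as the projection of a circular iris.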

    Chinese Abstract
    Abstract
    Content
    List of Figures
    List of Tables
    1. Introduction
    2. Related Work
    3. Eyeball Model
    4. Eye Gaze and Point-of-regard Estimation
        4.1 Eye Region Extraction
        4.2 Iris Edges Detection
        4.3 Iris Contour Fitting
        4.4 Unique Eye Gaze Determination
        4.5 Point-of-regard on a Plane
    5. Experiment
        5.1 Experimental Environment
        5.2 Collected Dataset
        5.3 Result
    6. Conclusion and Future Work
    References

