
Graduate student: Tsu-ying Chu (朱姿穎)
Thesis title: Correlation Filter for Face Recognition Across Illumination (以相關性濾波器進行變化光源下之人臉辨識)
Advisor: Gee-Sern Hsu (徐繼聖)
Committee members: Yi-Ping Hung (洪一平), Ming-Sui Lee (李明穗), Jing-Ming Guo (郭景明)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of publication: 2012
Academic year of graduation: 100
Language: Chinese
Number of pages: 91
Keywords (Chinese): 人臉辨識 (face recognition), 相關性濾波器 (correlation filter), 類相依性特徵分析 (class-dependence feature analysis)
Keywords (English): Face recognition, Correlation filter, Class-Dependence feature analysis
The focus of this research is on improving face recognition rates under varying illumination. Face recognition under varying illumination generally involves three steps: (1) illumination normalization; (2) feature extraction; (3) classifier design. From 22 illumination normalization methods, this thesis selects 3 promising ones for evaluation and comparison. Because the Minimum Average Correlation Energy (MACE) filter proposed by Vijayakumar et al., combined with the nonlinear Kernel Class-Dependence Feature Analysis (KCFA), achieves better recognition rates on the Face Recognition Grand Challenge (FRGC) benchmark, we adopt the MACE filter with KCFA for feature extraction, together with a Support Vector Machine (SVM) classifier for recognition. This thesis focuses on how the extent of the facial region used for feature extraction affects the recognition rate. Although it is commonly believed that the facial region should be confined to the area around the eyes, nose, and mouth, excluding the hairline and facial contour, so as to avoid extracting hair and background irrelevant to recognition, this research finds that the hairline and facial contour actually help improve the recognition rate. We also compare the recognition performance of holistic and local regions of different extents, and find that the sides of the forehead and the eyebrows also help improve the recognition rate. The experiments follow the test protocol and samples specified by FRGC: the method of Vijayakumar et al. achieves 72.91% on 64×64 images; with the illumination normalization recommended in this study, the rate rises to 84.83%; further including the hairline and contour region raises it to 88.17%.


    Face recognition across illumination variation involves illumination normalization, feature extraction, and classification. This research compares several state-of-the-art illumination normalization methods and selects the most promising one. We also investigate the impact of different facial regions on recognition performance. Many believe that the facial region considered for face recognition is best bounded within the facial contour, to minimize degradation due to background and hair. However, we have found that including the boundary of the forehead, the contours of the cheeks, and the contour of the chin can effectively improve performance. The minimum average correlation energy (MACE) filter combined with kernel class-dependence feature analysis (KCFA) has proven to be an effective solution, and is therefore adopted in this study with minor modifications. Following the FRGC 2.0 protocol, the recognition rate improves from 72.91% to 84.83% using the recommended illumination normalization, and further to 88.17% with the recommended facial region.
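    The MACE filter and the peak-to-sidelobe ratio (PSR) at the core of the pipeline above can be sketched compactly. The following is a minimal illustration and not the thesis code: it assumes grayscale images and implements the closed-form MACE solution h = D⁻¹X(XᴴD⁻¹X)⁻¹u in the frequency domain; all variable names are illustrative.

```python
import numpy as np

def mace_filter(images):
    """Build a MACE filter from training images of one class.

    images: array of shape (N, H, W).
    Returns the filter in the frequency domain, shape (H, W).
    """
    N, H, W = images.shape
    # Column-stack the 2-D DFTs of the training images: X is d x N, d = H*W.
    X = np.stack([np.fft.fft2(img).ravel() for img in images], axis=1)
    # D: average power spectrum of the training set (diagonal, kept as a vector).
    Dinv = 1.0 / np.mean(np.abs(X) ** 2, axis=1)
    # Constrain the correlation value at the origin to 1 for every training image.
    u = np.ones(N)
    # Closed-form MACE solution: h = D^-1 X (X^H D^-1 X)^-1 u.
    A = X.conj().T @ (Dinv[:, None] * X)          # N x N system matrix
    h = (Dinv[:, None] * X) @ np.linalg.solve(A, u)
    return h.reshape(H, W)

def correlate(image, h):
    """Correlation plane of a test image with the frequency-domain filter h."""
    F = np.fft.fft2(image)
    return np.fft.ifft2(F * np.conj(h))

def psr(plane, exclude=5):
    """Peak-to-Sidelobe Ratio: (peak - sidelobe mean) / sidelobe std."""
    c = np.abs(plane)
    r0, c0 = np.unravel_index(np.argmax(c), c.shape)
    mask = np.ones_like(c, dtype=bool)
    # Exclude a small window around the peak before computing sidelobe statistics.
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    side = c[mask]
    return (c[r0, c0] - side.mean()) / side.std()
```

    In the thesis pipeline, a bank of such class-specific filters feeds KCFA, and the sharpness of each correlation peak (the PSR) indicates how well a probe matches a class. Note that `np.fft.ifft2` divides by H·W, so the constrained origin value of a training image's correlation plane comes out as 1/(H·W) rather than 1.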

    Abstract (Chinese); Abstract (English); Acknowledgments; Table of Contents; List of Figures; List of Tables
    1 Introduction
      1.1 Background and Motivation
        1.1.1 Illumination Normalization
        1.1.2 Feature Extraction
        1.1.3 Motivation
      1.2 Overview of the Approach
      1.3 Contributions
      1.4 Thesis Organization
    2 Literature Review
      2.1 Illumination Normalization
        2.1.1 The Modified Anisotropic Diffusion Normalization Technique
        2.1.2 The DCT-Based Normalization Technique
        2.1.3 The Tan and Triggs Normalization Technique
      2.2 Correlation Filters
        2.2.1 Introduction to Correlation Filters
        2.2.2 The Minimum Average Correlation Energy (MACE) Filter
        2.2.3 Peak-to-Sidelobe Ratio (PSR)
    3 Feature Extraction and Local Features
      3.1 Class-Dependence Feature Analysis
        3.1.1 Kernel Class-Dependence Feature Analysis (KCFA)
      3.2 Analysis of Local Regions
    4 Experimental Design and Results
      4.1 The FRGC Database
        4.1.1 Sample Collection
        4.1.2 FRGC Experimental Design
      4.2 Sample Specifications
      4.3 Experimental Design
      4.4 Experimental Results
        4.4.1 KCFA Parameter Tuning and Illumination Normalization Results
        4.4.2 Holistic Facial Region Extent
        4.4.3 Local Region Analysis
        4.4.4 Performance Comparison with the Literature
    5 Real-Time System Application
      5.1 System Architecture
    6 Conclusions and Future Work
    References
    Appendix A: INFace Tool
    Appendix B: PhD Toolbox
    Appendix C: Discrete Fourier Transform
    Appendix D: Support Vector Machine

    [1] P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, “Overview of the face recognition grand challenge,” in Computer Vision and Pattern Recognition, 2005. IEEE Computer Society Conference on, vol. 1, pp. 947–954, June 2005.
    [2] B. Scholkopf, A. Smola, and K.-R. Muller, “Kernel principal component analysis,” in Artificial Neural Networks - ICANN’97 (W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, eds.), vol. 1327, pp. 583–588, 1997.
    [3] S. Mika, G. Ratsch, J. Weston, B. Scholkopf, and K. Mullers, “Fisher discriminant analysis with kernels,” in Neural Networks for Signal Processing IX, 1999. Proceedings of the 1999 IEEE Signal Processing Society Workshop, pp. 41–48, August 1999.
    [4] “Face Recognition Homepage.” http://www.face-rec.org/.
    [5] “INFace Toolbox.” http://luks.fe.uni-lj.si/sl/osebje/vitomir/face_tools/INFace/refs.html.
    [6] R. Gross and V. Brajovic, “An image preprocessing algorithm for illumination invariant face recognition,” in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication, pp. 10–18, 2003.
    [7] W. Chen, M. J. Er, and S. Wu, “Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain,” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 36, pp. 458–466, April 2006.
    [8] X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” Image Processing, IEEE Transactions on, vol. 19, June 2010.
    [9] M. Savvides, R. Abiantun, J. Heo, S. Park, C. Xie, and B. Vijayakumar, “Partial holistic face recognition on FRGC-II data using support vector machine,” in Computer Vision and Pattern Recognition Workshop, 2006. CVPRW ’06. Conference on, p. 48, June 2006.
    [10] B. Kumar, M. Savvides, and C. Xie, “Correlation pattern recognition for face recognition,” Proceedings of the IEEE, vol. 94, pp. 1963–1976, November 2006.
    [11] X. Xie, W.-S. Zheng, J. Lai, P. Yuen, and C. Suen, “Normalization of face illumination based on large- and small-scale features,” Image Processing, IEEE Transactions on, vol. 20, pp. 1807–1821, July 2011.
    [12] B. Wang, W. Li, W. Yang, and Q. Liao, “Illumination normalization based on Weber’s law with application to face recognition,” Signal Processing Letters, IEEE, vol. 18, pp. 462–465, August 2011.
    [13] A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: illumination cone models for face recognition under variable lighting and pose,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 23, pp. 643–660, June 2001.
    [14] M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cognitive Neuroscience, pp. 71–86, January 1991.
    [15] D. Blackburn, M. Bone, and P. Phillips, “Facial recognition vendor test 2000: evaluation report,” 2000.
    [16] Wikipedia, “Discrete cosine transform.” http://en.wikipedia.org/wiki/Discrete_cosine_transform.
    [17] T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” in Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, May 2002.
    [18] P. Shih and C. Liu, “Evolving effective color features for improving FRGC baseline performance,” in Computer Vision and Pattern Recognition - Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on, p. 156, June 2005.
    [19] P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, pp. 711–720, July 1997.
    [20] J. Zhao, H. Wang, H. Ren, and S.-C. Kee, “LBP discriminant analysis for face verification,” in Computer Vision and Pattern Recognition - Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on, p. 167, June 2005.
    [21] H. Yang and Y. Wang, “An LBP-based face recognition method with Hamming distance constraint,” in Image and Graphics, 2007. ICIG 2007. Fourth International Conference on, pp. 645–649, August 2007.
    [22] Wikipedia, “Discrete Fourier transform.” http://en.wikipedia.org/wiki/Discrete_Fourier_transform.
    [23] M. Savvides, B. Kumar, and P. Khosla, “‘CoreFaces’ - robust shift invariant PCA based correlation filter for illumination tolerant face recognition,” in Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, vol. 2, pp. II-834–II-841, June–July 2004.
    [24] V. N. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., 1995.
    [25] P. J. Phillips, “Support vector machines applied to face recognition,” in Advances in Neural Information Processing Systems 11, pp. 803–809, 1999.
    [26] Z. Liu and C. Liu, “A hybrid color and frequency features method for face recognition,” Image Processing, IEEE Transactions on, vol. 17, pp. 1975–1980, October 2008.
    [27] Z. Liu and C. Liu, “Fusion of the complementary discrete cosine features in the YIQ color space for face recognition,” Computer Vision and Image Understanding, vol. 111, pp. 249–262, 2008.
    [28] C. Liu and H. Wechsler, “Robust coding schemes for indexing and retrieval from large face databases,” Image Processing, IEEE Transactions on, vol. 9, January 2000.
    [29] A. Mahalanobis, B. V. K. V. Kumar, and D. Casasent, “Minimum average correlation energy filters,” Appl. Opt., vol. 26, pp. 3633–3640, September 1987.
    [30] M. Savvides, B. V. K. Vijaya Kumar, and P. Khosla, “Face verification using correlation filters,” Proc. of the Third IEEE Automatic Identification Advanced Technologies, pp. 56–61, March 2002.
    [31] A. Vander Lugt, “Signal detection by complex spatial filtering,” Information Theory, IEEE Transactions on, vol. 10, April 1964.
    [32] C. F. Hester and D. Casasent, “Multivariant technique for multiclass pattern recognition,” Appl. Opt., vol. 19, pp. 1758–1761, June 1980.
    [33] R. Abiantun, M. Savvides, and B. Vijaya Kumar, “How low can you go? Low resolution face recognition study using kernel correlation feature analysis on the FRGCv2 dataset,” in Biometric Consortium Conference, 2006 Biometrics Symposium: Special Session on Research at the, pp. 1–6, August 2006.
    [34] C. Xie, M. Savvides, and B. Kumar, “Redundant class-dependence feature analysis based on correlation filters using FRGC2.0 data,” in Computer Vision and Pattern Recognition - Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on, p. 153, June 2005.
    [35] C. Xie, M. Savvides, and B. Vijaya Kumar, “Kernel correlation filter based redundant class-dependence feature analysis (KCFA) on FRGC2.0 data,” in Analysis and Modelling of Faces and Gestures, vol. 3723, pp. 32–43, 2005.
    [36] B. Wandell, Foundations of Vision. Sunderland, MA: Sinauer, 1995.
    [37] R. C. Gonzalez and R. E. Woods, Digital Image Processing (3rd Edition). Prentice-Hall, Inc., 2006.
    [38] Y. Adini, Y. Moses, and S. Ullman, “Face recognition: the problem of compensating for changes in illumination direction,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, pp. 721–732, July 1997.
    [39] B. K. P. Horn, Robot Vision. Cambridge, MA: MIT Press, 1986.
    [40] E. H. Land and J. J. McCann, “Lightness and retinex theory,” J. Opt. Soc. Am., vol. 61, pp. 1–11, January 1971.
    [41] S. K. Nayar and R. M. Bolle, “Reflectance based object recognition,” International Journal of Computer Vision, vol. 17, pp. 219–240, 1996.
    [42] B. V. K. V. Kumar, “Minimum-variance synthetic discriminant functions,” J. Opt. Soc. Am. A, vol. 3, pp. 1579–1584, October 1986.
    [43] J. Zou, Q. Ji, and G. Nagy, “A comparative study of local matching approach for face recognition,” Image Processing, IEEE Transactions on, vol. 16, pp. 2617–2628, October 2007.
    [44] M. Savvides, J. Heo, R. Abiantun, C. Xie, and B. Kumar, “Class dependent kernel discrete cosine transform features for enhanced holistic face recognition in FRGC-II,” in Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, vol. 2, p. II, May 2006.
    [45] “PhD Toolbox.” http://luks.fe.uni-lj.si/sl/osebje/vitomir/face_tools/PhDface.html.
