Graduate Student: 李玲瑩 (Ling-ying Lee)
Thesis Title: Face Recognition Across Poses Using a Single Reference Model (利用單一立體參考模型之多角度人臉辨識)
Advisor: 徐繼聖 (Gee-sern Hsu)
Oral Defense Committee: 林昌鴻 (Chang-hong Lin), 鍾聖倫 (Sheng-luen Chung), 楊士萱 (Shih-hsuan Yang)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2012
Academic Year: 100
Language: Chinese
Pages: 62
Chinese Keywords: 3D face recognition (三維人臉辨識), 3D face reconstruction (三維人臉重建)
English Keywords: Face Recognition Across Poses, 3D Face Reconstruction
Unlike most studies of face recognition across poses that are based on reconstructing 3D face models, this study introduces a single 3D reference face model, from which the corresponding 3D face model can be reconstructed given only a single frontal enrollment image. Compared with 3D Morphable Models, which require hundreds of laser-scanned faces, the Illumination Cone method, which requires multiple images under different lighting conditions, and Symmetric Shape From Shading, which requires precise facial symmetry information, the proposed approach offers advantages in implementation cost, data requirements, and practical applicability.
This study first implements the 3D face reconstruction method published by Kemelmacher-Shlizerman and Basri in 2011. Assuming the face is a Lambertian surface, the method is built around a facial irradiance model: the reference model is deformed according to a single 2D image to obtain the corresponding 3D face model. The reconstructed model is then rotated in 3D space and projected onto the image plane to build a database of face samples at various poses, improving the system's tolerance to pose variation. Finally, LBP features are extracted and paired with an SVM classifier for face recognition across poses.
The reference model is built from the FRGC database, and experimental results on the PIE database show that the proposed method, which requires only a single 3D reference model and a single 2D image, achieves baseline recognition performance. The results also suggest that adopting multiple 3D reference models could improve recognition rates at large pose angles.
Given a frontal facial image as a gallery sample, a scheme is developed to generate novel views of the face for recognition across poses. The core of the scheme is a recently published 3D face reconstruction method that exploits a single reference 3D face model to build a 3D shape model for each face in the gallery set. The 3D shape model, combined with the texture of each facial image in the gallery, allows novel poses of the face to be generated. LBP features are then extracted from these generated poses to train an SVM classifier for recognition.
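The recognition stage described above can be sketched as follows. This is a minimal illustration under assumptions, not the thesis implementation: the images are random stand-ins for rendered pose samples, the subject labels are invented, and the LBP parameters (8 neighbors, radius 1, uniform patterns) are one common choice, using scikit-image and scikit-learn.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, P=8, R=1.0):
    """Uniform LBP codes pooled into a normalized histogram (P + 2 bins)."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / max(hist.sum(), 1)

# Toy gallery: two subjects with a few "generated poses" each.
# Random uint8 images stand in for the rendered novel views.
rng = np.random.default_rng(0)
images = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(10)]
X = np.array([lbp_histogram(im) for im in images])
y = np.array([0] * 5 + [1] * 5)  # hypothetical subject identities

# Linear SVM over the LBP histograms, as in the recognition stage.
clf = SVC(kernel="linear").fit(X, y)

# Classify a probe image by its LBP histogram.
probe = (rng.random((64, 64)) * 255).astype(np.uint8)
pred = clf.predict(lbp_histogram(probe)[None, :])[0]
```

In practice the histograms would typically be computed per image region and concatenated, rather than pooled over the whole face as in this sketch.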
Assuming a Lambertian surface with a reflectance function approximated by spherical harmonics, the 3D reference model is deformed so that the 2D projection of the deformed model approximates the facial image in the gallery. The problem is cast as an image irradiance equation with unknown lighting, albedo, and surface normals. Using the reference model to estimate lighting and to provide an initial estimate of albedo, the reflectance function becomes a function of the unknown surface normals alone, and the irradiance equation becomes a partial differential equation that is solved for depth.
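The lighting-estimation step can be illustrated with a first-order spherical harmonics model, where the irradiance at each pixel is approximately the albedo times the inner product of a lighting vector with the basis [1, n_x, n_y, n_z]. The sketch below substitutes synthetic normals and a constant albedo for the reference model, simulates an image from a known lighting vector, and recovers that vector by least squares; all names and data are illustrative, and the thesis follows the Kemelmacher-Shlizerman and Basri formulation rather than this toy setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the reference model: unit surface normals and an
# initial albedo estimate for N pixels.
N = 500
normals = rng.normal(size=(N, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = np.full(N, 0.8)

# First-order spherical harmonics basis evaluated at the normals.
Y = np.hstack([np.ones((N, 1)), normals])  # shape (N, 4)

# Simulate an input image from a "true" lighting vector l_true,
# i.e. I(p) = albedo(p) * (Y(p) . l_true), then recover the lighting.
l_true = np.array([0.9, 0.2, -0.1, 0.4])
I = albedo * (Y @ l_true)

# Least squares: minimize || diag(albedo) Y l - I ||^2 over l.
A = albedo[:, None] * Y
l_est, *_ = np.linalg.lstsq(A, I, rcond=None)
```

With lighting fixed at l_est and albedo held at its initial estimate, the irradiance equation depends only on the unknown normals, which is what allows it to be recast as a PDE in the depth.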
A 3D face from the FRGC database is used as the reference model in the experiments, and the performance is evaluated on the PIE database. It is shown that the developed scheme gives a satisfactory performance, and can be further improved if the alignment between the reference model and the gallery image can be enhanced.
[1] D. Beymer and T. Poggio, “Face recognition from one example view,” in Proc. International Conf. Computer Vision, pp. 500–507, 1995.
[2] T. Cootes, G. Wheeler, K. Walker, and C. Taylor, “View-based active appearance models,” Image and Vision Computing, vol. 20, pp. 657–664, Aug. 2002.
[3] D. González-Jiménez and J. L. Alba-Castro, “Toward pose-invariant 2-D face recognition through point distribution models and facial symmetry,” IEEE Trans. Information Forensics and Security, vol. 2, pp. 413–429, Sep. 2007.
[4] W. Y. Zhao and R. Chellappa, “Illumination-insensitive face recognition using symmetric shape-from-shading,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 286–293, 2000.
[5] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, pp. 643–660, June 2001.
[6] V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, pp. 1063–1074, Sep. 2003.
[7] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, “A 3D face model for pose and illumination invariant face recognition,” in Proc. IEEE Conf. Advanced Video and Signal Based Surveillance, pp. 296–301, 2009.
[8] I. Kemelmacher-Shlizerman and R. Basri, “3D face reconstruction from a single image using a single reference face shape,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, pp. 394–405, Feb. 2011.
[9] X. Zhang and Y. Gao, “Face recognition across pose: A review,” Pattern Recognition, vol. 42, pp. 2876–2896, Nov. 2009.
[10] C. D. Castillo and D. W. Jacobs, “Using stereo matching for 2-D face recognition across pose,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1–8, 2007.
[11] R. Gross, I. Matthews, and S. Baker, “Appearance-based face recognition and light fields,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, pp. 449–465, 2004.
[12] S. J. Prince, J. H. Elder, J. Warrell, and F. M. Felisberti, “Tied factor analysis for face recognition across large pose differences,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, pp. 970–984, June 2008.
[13] D. Jiang, Y. Hu, S. Yan, L. Zhang, H. Zhang, and W. Gao, “Efficient 3D reconstruction for face recognition,” Pattern Recognition, vol. 38, pp. 787–798, June 2005.
[14] U. Prabhu, J. Heo, and M. Savvides, “Unconstrained pose-invariant face recognition using 3D generic elastic models,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, pp. 1952–1961, Oct. 2011.
[15] X. Zhang and Y. Gao, “Heterogeneous specular and diffuse 3-D surface approximation for face recognition across pose,” IEEE Trans. Information Forensics and Security, vol. 7, pp. 1952–1961, Apr. 2012.
[16] P.-H. Lee, G.-S. Hsu, Y.-W. Wang, and Y.-P. Hung, “Subject-specific and pose-oriented facial features for face recognition across poses,” IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. PP, pp. 1–12, 2012.
[17] A. Li, S. Shan, X. Chen, and W. Gao, “Cross-pose face recognition based on partial least squares,” Pattern Recognition Letters, vol. 32, pp. 1948–1955, Nov. 2011.
[18] X. He, S. Yuk, K. Chow, K.-Y. K. Wong, and R. H. Y. Chung, “Automatic 3D face texture mapping framework from single image,” in Proc. International Conf. Internet Multimedia Computing and Service, pp. 123–128, 2009.
[19] S. R. Marschner, S. H. Westin, E. P. F. Lafortune, K. E. Torrance, and D. P. Greenberg, “Image-based BRDF measurement including human skin,” in Proc. 10th Eurographics Workshop Rendering, pp. 139–152, 1999.
[20] R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, pp. 218–233, Feb. 2003.
[21] R. Ramamoorthi, “Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 1322–1333, Oct. 2002.
[22] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, “Overview of the face recognition grand challenge,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 947–954, 2005.
[23] “Point Cloud Library.” http://pointclouds.org.
[24] “freeglut.” http://freeglut.sourceforge.net.
[25] T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” in Proc. IEEE Conf. Automatic Face and Gesture Recognition, pp. 46–51, 2002.
[26] “OpenCV.” http://opencv.org.
[27] “CLAPACK.” http://www.netlib.org/clapack.
[28] B. Heisele, P. Ho, J. Wu, and T. Poggio, “Face recognition: component-based versus global approaches,” Computer Vision and Image Understanding, vol. 91, pp. 6–12, July 2003.
[29] C. D. Castillo and D. W. Jacobs, “Wide-baseline stereo for face recognition with large pose variation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 537–544, 2011.
[30] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss, “The FERET database and evaluation procedure for face recognition algorithms,” Image and Vision Computing, vol. 16, pp. 295–306, Apr. 1998.
[31] “The ORL face database.” http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.