
Graduate Student: Tzu-Yuan Wu (吳紫源)
Thesis Title: A Deep-Learning-Based Face Liveness Detection System Against Spoofing Attack Using 2D Image Distortion Analysis (一個利用深度學習於二維圖像防偽檢測之人臉活體識別系統)
Advisor: Chin-Shyurng Fahn (范欽雄)
Committee Members: Jiann-Der Lee (李建德), Jun-Wei Hsieh (謝君偉), Yi-Ling Chen (陳怡伶)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2019
Academic Year of Graduation: 107
Language: English
Number of Pages: 55
Keywords (Chinese): 翻拍人臉, 二維圖像防偽檢測, 局部二值模式, 圖像失真分析, 深度神經網路
Keywords (English): remaking face images, face liveness detection, local binary pattern, image distortion analysis, deep neural network
    With the advance of technology, face recognition has become an emerging technique for identity verification in access control applications, and it has attracted particular attention on mobile devices. Owing to the popularity of smartphones and the release of face-unlock features in mobile operating systems, face recognition is gradually replacing fingerprint recognition as another biometric authentication technique for mobile phones.
    Because the images captured by most webcams lack depth information about the face, and because recapturing a face image (e.g., as a paper printout or on a screen display) is easier and cheaper than forging other biometric traits such as fingerprints or palm prints, a single photograph of a legitimate identity is enough to break into a face recognition system. Anti-spoofing detection for 2D face images is therefore an important research topic in information security.
    Exploiting the feature differences between genuine and spoofed face images, this thesis uses the local binary pattern and image distortion analysis to extract texture information from an image, and then classifies the image with a deep neural network, thereby building a face liveness detection system based on 2D anti-spoofing detection. Compared with traditional video-based approaches, the system needs only a single captured image to tell whether it shows a real face or a spoofed photograph.
    In the experiments, three face spoofing databases are used for cross testing. The proposed methods and the dataset we collected can effectively classify the authenticity of faces, reaching an accuracy of 99.55% in the intra-database test and 95.13% in the cross-database test. The experimental results show that the face liveness detection system developed in this thesis achieves high accuracy and good generality.


    With the development of science and technology, face recognition has become an important technology for authentication in various access control applications, especially on mobile devices. Face unlocking has gradually replaced fingerprint identification in some scenarios and has become one of the major biometric authentication technologies for mobile phones.
    Because a common camera lacks depth information, it is easy to produce fake face images (e.g., paper printouts and screen displays) to crack a face recognition system, whereas forging other biometric traits such as fingerprints and palm prints is much harder. Therefore, face liveness detection against spoofing attacks using 2D image distortion analysis is a very important issue in the field of information security.
    By virtue of the feature differences between real and fake faces, this thesis adopts the local binary pattern and 2D image distortion analysis to extract texture information from images; a deep neural network then classifies these features, forming our face liveness detection system against spoofing attacks that distinguishes fake faces from real ones. The system employs only a single image captured by a common camera to discriminate real faces from fake faces.
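    As an illustration of how such features might be computed, the following is a minimal sketch assuming the scikit-image and OpenCV libraries; the function name and the particular distortion statistics (Laplacian sharpness, HSV chromatic moments, coarse color diversity) are simplified stand-ins for the thesis's specular reflection, sharpness, chromatic moment, and color diversity features, not the exact implementation.

```python
# Minimal sketch of LBP + image-distortion feature extraction for one face image.
# The distortion cues below are simplified stand-ins for the thesis's feature set.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_features(bgr_face, points=8, radius=1):
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)

    # Uniform LBP histogram describing the micro-texture of the face region.
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2
    lbp_hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)

    # Sharpness cue: variance of the Laplacian (recaptured images tend to be blurrier).
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Chromatic moments: mean and standard deviation of each HSV channel
    # (recaptured images often exhibit color distortion).
    hsv = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV).astype(np.float64)
    chroma = np.concatenate([hsv.mean(axis=(0, 1)), hsv.std(axis=(0, 1))])

    # Color diversity cue: fraction of distinct colors after coarse quantization.
    quantized = (bgr_face // 32).reshape(-1, 3)
    diversity = len(np.unique(quantized, axis=0)) / quantized.shape[0]

    return np.concatenate([lbp_hist, [sharpness], chroma, [diversity]])
```

    The resulting feature vector would then be fed to the classifier; the thesis compares a random forest, a deep neural network, and a convolutional neural network for this step.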
    In the experiments, three face spoofing databases are used for cross-database testing. The methods presented in this thesis, together with the dataset made by ourselves, can effectively classify the authenticity of human faces. The accuracy of the inside (intra-database) test reaches 99.55%, while that of the outside (cross-database) test attains 95.13%. The experimental results show that our face liveness detection system has high accuracy and good generality.
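    The inside (intra-database) versus outside (cross-database) protocol can be summarized by the sketch below, which assumes feature matrices have already been extracted from two databases; scikit-learn's MLPClassifier is used only as a convenient stand-in for the thesis's deep neural network, and the variable names and split ratio are illustrative.

```python
# Sketch of intra-database vs. cross-database evaluation, assuming
# (X, y) feature/label arrays extracted from two face spoofing databases.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def evaluate(X_a, y_a, X_b, y_b):
    # Inside (intra-database) test: train and test on splits of database A.
    X_train, X_test, y_train, y_test = train_test_split(
        X_a, y_a, test_size=0.3, stratify=y_a, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    inside_acc = accuracy_score(y_test, clf.predict(X_test))

    # Outside (cross-database) test: train on all of database A, test on database B.
    clf_cross = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    clf_cross.fit(X_a, y_a)
    outside_acc = accuracy_score(y_b, clf_cross.predict(X_b))

    return inside_acc, outside_acc
```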

    Chinese Abstract
    Abstract
    Acknowledgements
    Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Overview
      1.2 Motivation
      1.3 System Description
      1.4 Thesis Organization
    Chapter 2 Related Work
      2.1 Texture Based Methods
      2.2 Motion Based Methods
      2.3 Methods Based On Image Quality Analysis
      2.4 Methods Based On Depth Camera
    Chapter 3 Face Liveness Detection
      3.1 Local Binary Pattern
      3.2 Image Distortion Analysis
        3.2.1 Specular Reflection Features
        3.2.2 Sharpness Features
        3.2.3 Chromatic Moment Features
        3.2.4 Color Diversity Features
    Chapter 4 Spoofing Attack Classification
      4.1 Random Forest
      4.2 Deep Neural Network
        4.2.1 Fully Connected Layer
        4.2.2 Dropout Layer
        4.2.3 Activation Function
        4.2.4 Our DNN Model
      4.3 Convolutional Neural Network
        4.3.1 Convolutional Layer
        4.3.2 Pooling Layer
        4.3.3 Our CNN Model
    Chapter 5 Experimental Results and Discussions
      5.1 Experimental Setup
      5.2 Face Spoof Databases
      5.3 Results of Intra-database Spoof Detection
      5.4 Results of Cross-database Spoof Detection
    Chapter 6 Conclusions and Future Works
      6.1 Conclusions
      6.2 Future Work
    References


    Full-text release date: 2024/07/20 (campus network)
    Full-text release date: 2024/07/20 (off-campus network)
    Full-text release date: 2029/07/20 (National Central Library: Taiwan Dissertations and Theses System)