
Graduate Student: 梁子祥 (Te-Hsiang Liang)
Thesis Title: Implementation of the Identity Verification Mechanism Based on Face Recognition (以人臉辨識建立身分驗證機制)
Advisor: 徐勝均 (Sheng-Dong Xu)
Committee Members: 蔡明忠 (Ming-Jong Tsai), 陳金聖 (Chin-Sheng Chen), 林紀穎 (Chi-Ying Lin)
Degree: Master
Department: College of Engineering - Graduate Institute of Automation and Control
Year of Publication: 2016
Academic Year of Graduation: 104
Language: Chinese
Number of Pages: 78
Chinese Keywords: Haar-like, AdaBoost, KAZE feature algorithm, face recognition, identity verification
Foreign-Language Keywords: Haar-like, AdaBoost, KAZE feature, face recognition, identity verification
    This thesis uses Haar-like AdaBoost for face detection to narrow down the face region in an image, and then performs face recognition for identity verification with the KAZE feature algorithm. The KAZE feature algorithm was first proposed in 2012; its feature detection applies nonlinear diffusion processing in the image domain. Departing from traditional face recognition approaches (such as SIFT and SURF), this study adopts the newer algorithm and analyzes four common identity verification problems: (a) the similarity between a person's photo and other photos of the same person, (b) the similarity between photos of a person with and without glasses, (c) the similarity with photos of other people of the same gender, and (d) the similarity with photos of other people of a different gender. The simulation results show that the similarities for these four cases reach (a) 90%, (b) 92%, (c) 60%, and (d) 67%, respectively. The proposed method is also applied in simulations of (a) a home access control system and (b) an identity verification mechanism. In the identity verification simulation, 200 sets of photos are compared; the similarity value obtained from each image comparison is used to decide whether the person is who they claim to be. The simulation results show that the accuracy exceeds 90%.
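    According to the reference list, the thesis environment is Visual Studio with OpenCV [61], [62]; the original implementation is not reproduced in this record. The following is a minimal Python/OpenCV sketch of the first stage described above, locating the face with a Haar-like AdaBoost cascade and cropping that region before feature extraction. The cascade file, the input file name, and the detection parameters are illustrative assumptions rather than the thesis's exact settings.

        # Sketch: Haar-like AdaBoost face detection, then cropping the face region
        # so that later feature extraction works only on the face. Parameters and
        # file names are illustrative assumptions.
        import cv2

        def detect_and_crop_face(image_path):
            """Detect the largest face with a pretrained Haar cascade and crop it."""
            img = cv2.imread(image_path)
            if img is None:
                raise FileNotFoundError(image_path)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # OpenCV ships a pretrained frontal-face Haar cascade (Viola-Jones style).
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5, minSize=(60, 60))
            if len(faces) == 0:
                return None
            # Keep only the largest detection, assumed to be the subject's face.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            return gray[y:y + h, x:x + w]

        if __name__ == "__main__":
            face = detect_and_crop_face("subject.jpg")  # hypothetical input file
            if face is not None:
                cv2.imwrite("face_crop.jpg", face)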


    This thesis discusses face recognition for identity verification. To narrow down the image region that contains the face, Haar-like AdaBoost is used for face detection. The KAZE algorithm is then applied here, for the first time, to feature extraction for face recognition. The KAZE feature algorithm was first proposed in 2012; KAZE features are detected and described in a nonlinear scale space built by nonlinear diffusion filtering. In this research, we adopt the newer KAZE algorithm instead of traditional methods such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) for face recognition. Based on this method, we analyze several identity verification problems: (a) the similarity between a person's photo and other photos of the same person, (b) the similarity between a person with and without glasses, (c) the similarity between a person and other people of the same gender, and (d) the similarity between a person and other people of a different gender. Simulation results indicate that the above similarities reach (a) 90%, (b) 92%, (c) 60%, and (d) 67%, respectively. We also apply the proposed method to (a) a home access control system and (b) an identity verification mechanism. In the simulation of the latter application, we compare 200 photos to obtain similarity values and use them to judge whether the claimed identity is genuine. Simulation results show that the accuracy exceeds 90%.
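    Complementing the detection sketch above, the following is a minimal Python/OpenCV sketch of the matching stage: KAZE descriptors are extracted from two cropped face images, and the fraction of ratio-test matches is treated as a similarity score. The similarity definition, the 0.7 ratio, and the decision threshold are illustrative assumptions; the thesis reports similarity percentages, but its exact scoring formula and threshold are not given in this record.

        # Sketch: KAZE feature extraction and matching between two face crops,
        # reduced to a single similarity score for identity verification.
        # The scoring formula and the threshold are illustrative assumptions.
        import cv2

        def kaze_similarity(face_a, face_b, ratio=0.7):
            """Return a rough similarity score in [0, 1] for two grayscale face crops."""
            kaze = cv2.KAZE_create()
            kp_a, des_a = kaze.detectAndCompute(face_a, None)
            kp_b, des_b = kaze.detectAndCompute(face_b, None)
            if des_a is None or des_b is None:
                return 0.0
            # KAZE descriptors are floating point, so brute-force L2 matching is used.
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des_a, des_b, k=2)
            # Lowe-style ratio test keeps only distinctive correspondences.
            good = [pair[0] for pair in matches
                    if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
            return len(good) / max(1, min(len(kp_a), len(kp_b)))

        def verify_identity(face_probe, face_enrolled, threshold=0.5):
            """Accept the claimed identity when the similarity exceeds a chosen threshold."""
            return kaze_similarity(face_probe, face_enrolled) >= threshold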

    Chinese Abstract
    Abstract
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Research Background
      1.2 Literature Review
        1.2.1 Principal Component Analysis
        1.2.2 Delaunay Triangulation
        1.2.3 Scale-Invariant Feature Transform
        1.2.4 Speeded-Up Robust Features
      1.3 Thesis Organization
    Chapter 2 Haar-like AdaBoost
      2.1 Haar Classifier
      2.2 Haar-like Features
      2.3 Integral Image
      2.4 AdaBoost
      2.5 Cascade
    Chapter 3 KAZE Feature Algorithm
      3.1 Introduction
      3.2 Nonlinear Diffusion Filtering
        3.2.1 Perona-Malik Diffusion Equation
        3.2.2 AOS Algorithm
      3.3 KAZE Feature Detection and Description
        3.3.1 Construction of the Nonlinear Scale Space
        3.3.2 Feature Point Detection
        3.3.3 Feature Description Vectors
      3.4 Comparison of KAZE and SIFT
    Chapter 4 Experimental Results and Discussion
      4.1 KAZE Feature Algorithm Simulation
        4.1.1 Purpose of the Experiment
        4.1.2 Experimental Setup
          4.1.2.1 Experimental Materials
          4.1.2.2 Detecting Face Positions with Haar-like AdaBoost
          4.1.2.3 Extracting Face Images
          4.1.2.4 Obtaining Features with the KAZE Feature Algorithm
        4.1.3 Simulation Results and Discussion
          4.1.3.1 Effect of Image Resolution on Haar-like AdaBoost Detection Results
          4.1.3.2 Effect of Image Resolution on KAZE Feature Extraction
          4.1.3.3 Effect of Parameter Settings on Matching Results
      4.2 Home Access Control System Simulation
        4.2.1 Simulation Setup
        4.2.2 Simulation Results and Discussion
      4.3 Identity Verification Mechanism Data Simulation
        4.3.1 Simulation Setup
        4.3.2 Second Simulation
        4.3.3 Simulation Results and Discussion
    Chapter 5 Conclusions and Future Research Directions
      5.1 Conclusions
      5.2 Future Research Directions
    References
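    As a brief mathematical reference for the nonlinear diffusion filtering sections listed above (3.2-3.2.1), the KAZE formulation in [50], building on [56], evolves the image luminance L with the Perona-Malik equation. The form below follows those cited papers rather than the thesis's own notation:

        \frac{\partial L}{\partial t} = \operatorname{div}\bigl(c(x, y, t)\, \nabla L\bigr),
        \qquad
        c(x, y, t) = g\bigl(\lvert \nabla L_{\sigma}(x, y, t) \rvert\bigr),
        \qquad
        g_{2} = \frac{1}{1 + \lvert \nabla L_{\sigma} \rvert^{2} / k^{2}},

    where L_{\sigma} is a Gaussian-smoothed version of L and k is the contrast parameter that controls how strongly diffusion is reduced across edges.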

    [1]P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
    [2]K. Delac, M. Grgic, and P. Liatsis, “Appearance-based statistical methods for face recognition,” International Symposium ELMAR-2005, Zadar, Croatia, 2005.
    [3]X. Yang, J. Cheng, W. Feng, H. Liang, Z. Bai, and D. Tao, “Cauchy estimator discriminant analysis for face recognition,” Neurocomputing, vol. 199, pp. 144-153, 2016.
    [4]J. W. Wang, N. T. Le, J. S. Lee, and C. C. Wang, “Color face image enhancement using adaptive singular value decomposition in Fourier domain for face recognition,” Pattern Recognition, vol. 57, pp. 31-49, 2016.
    [5]F. Cao, H. Hu, J. Lu, J. Zhao, Z. Zhou, and J. Wu, “Pose and illumination variable face recognition via sparse representation and illumination dictionary,” Department of Applied Mathematics, College of Sciences, China Jiliang University, Hangzhou, China, vol. 107, pp. 117-128, 2016.
    [6]K. Delac, M. Grgic, and P. Liatsis, “Appearance-based statistical methods for face recognition,” International Symposium ELMAR-2005 focused on Multimedia Systems and Applications, Zadar, Croatia, Jun. 08-10, 2005, pp. 151-158.
    [7]張家豪, “An improvement of face recognition with an AAM- and PCA-based eyeglasses feature weakening method,” Master's thesis, Department of Computer Science and Information Engineering, National Central University, Taiwan, 2008 (in Chinese).
    [8]李易俊, “Face recognition based on Gabor features and two-dimensional PCA,” Master's thesis, Institute of Computer and Communication Engineering, National Cheng Kung University, Taiwan, 2005 (in Chinese).
    [9]C. Li, J. Liu, A. Wang, and K. Li, “Matrix reduction based on generalized PCA method in face recognition,” IEEE Conference on Digital Home, Guangzhou, China, Nov. 28-30, 2014, pp. 35-38.
    [10]G. N. Girish and P. K. Das, “Face recognition using MB-LBP and PCA: A comparative study,” IEEE Conference on Computer Communication and Informatics, Coimbatore, India, Jan. 3-5, 2014, pp. 1-6.
    [11]D. Wang, D. Li, and Y. Lin, “A new method of face recognition with data field and PCA,” IEEE Conference on Granular Computing, Beijing, China, Dec. 13-15, 2013, pp. 320-325.
    [12]R. Akbari and S. Mozaffari, “Performance enhancement of PCA-based face recognition system via gender classification method,” IEEE Conference on Machine Vision and Image Processing, Isfahan, Iran, Oct. 27-28, 2010, pp. 1-6.
    [13]P. Kamencay, M. Breznan, D. Jelsovka, and M. Zachariasova, “Improved face recognition method based on segmentation algorithm using SIFT-PCA,” IEEE Conference on Telecommunications and Signal Processing, Prague, Czech Republic, Jul. 3-4, 2012, pp. 758-762.
    [14]R. Sharma and M. S. Patterh, “A new pose invariant face recognition system using PCA and ANFIS,” Optik-International Journal for Light and Electron Optics, vol. 126, no. 23, pp. 3483-3487, 2015.
    [15]C. Zhou, L. Wang, Q. Zhang, and X. Wei, “Face recognition based on PCA and logistic regression analysis,” Optik-International Journal for Light and Electron Optics, vol. 125, no. 20, pp. 5916-5919, 2014.
    [16]C. Zhou, L. Wang, Q. Zhang, and X. Wei, “Face recognition based on PCA image reconstruction and LDA,” Optik-International Journal for Light and Electron Optics, vol. 124, no. 22, pp. 5599-5603, 2013.
    [17]B. Delaunay, “Sur la sphère vide,” Izv. Akad. Nauk SSSR, Otdelenie Matematicheskii i Estestvennyka Nauk, vol. 7, pp. 793-800, 1934.
    [18]D. G. Lowe, “Object recognition from local scale-invariant features,” IEEE International Conference on Computer Vision, vol. 2, Kerkyra, Greece, Sep. 20-27, 1999, pp. 1150-1157.
    [19]Y. Gao and H. J. Lee, “Pose unconstrained face recognition based on SIFT and alignment error,” IEEE Conference on Audio, Language and Image Processing, Shanghai, China, Jul. 7-9, 2014, pp. 277-281.
    [20]J. G. Wang, J. Li, C. Y. Lee, and W. Y. Yau, “Dense SIFT and gabor descriptors-based face representation with applications to gender recognition,” IEEE Conference on Control Automation Robotics & Vision, Singapore, Dec. 7-10, 2010, pp. 1860-1864.
    [21]J. G. Wang, J. Li, W. Y. Yau, and E. Sung, “Boosting dense SIFT descriptors and shape contexts of face images for gender recognition,” IEEE Conference on Computer Vision and Pattern Recognition-Workshops, San Francisco, CA, USA, Jun. 13-18, 2010, pp. 96-102.
    [22]C. Geng and X. Jiang, “SIFT features for face recognition,” IEEE Conference on Computer Science and Information Technology, Beijing, China, Aug. 8-11, 2009, pp. 598-602.
    [23]L. Lenc and P. Král, “Automatic face recognition system based on the SIFT features,” Computers & Electrical Engineering, vol. 46, pp. 256-272, 2015.
    [24]A. Vinay, D. Hebbar, V. S. Shekhar, K. B. Murthy, and S. Natarajan, “Two novel detector-descriptor based approaches for face recognition using SIFT and SURF,” Procedia Computer Science, vol. 70, pp. 185-197, 2015.
    [25]P. M. Panchal, S. R. Panchal, and S. K. Shah, “A comparison of SIFT and SURF,” International Journal of Innovative Research in Computer and Communication Engineering, vol. 1, no. 2, pp. 323-327, 2013.
    [26]P. F. Alcantarilla, L. M. Bergasa, and A. J. Davison, “Gauge-SURF descriptors,” Image and Vision Computing, vol. 31, no. 1, pp. 103-116, 2013.
    [27]H. J. Bouchech, S. Foufou, and M. Abidi, “Strengthening SURF descriptor with discriminant image filter learning: application to face recognition,” IEEE Conference on Microelectronics, Doha, Qatar, Dec. 14-17, 2014, pp. 136-139.
    [28]S. An, X. Ma, R. Song, and Y. Li, “Face detection and recognition with SURF for human-robot interaction,” IEEE Conference on Automation and Logistics, Shenyang, China, Aug. 5-7, 2009, pp. 1946-1951.
    [29]E. Li, L. Yang, B. Wang, J. Li, and Y. T. Peng, “SURF cascade face detection acceleration on Sandy Bridge processor,” IEEE Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, Jun. 16-21, 2012, pp. 41-47.
    [30]S. Cao, “A fast SURF way for human face recognition with cell similarity,” IEEE Conference on Industrial Electronics and Applications, Beijing, China, Jun. 21-23, 2011, pp. 166-169.
    [31]K. Pearson, “On lines and planes of closest fit to systems of points in space,” Philosophical Magazine, vol. 2, no. 11, pp. 559-572, 1901.
    [32]T. M. Thanh, P. T. Hiep, T. M. Tam, and K. Tanaka, “Robust semi-blind video watermarking based on frame-patch matching,” AEU-International Journal of Electronics and Communications, vol. 68, no. 10, pp. 1007-1015, 2014.
    [33]Z. Lin, J. Rhee, X. Zhang, D. Xu, and X. Jiang, “SigGraph: brute force scanning of kernel data structure instances using graph-based signatures,” Network and Distributed System Security Symposium (NDSS), 2011.
    [34]Z. Xu, Y. Liu, S. Du, P. Wu, and J. Li, “DFOB: Detecting and describing features by octagon filter bank for fast image matching,” Signal Processing: Image Communication, vol. 41, pp. 61-71, 2016.
    [35]H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
    [36]Y. Lei, X. Jiang, Z. Shi, D. Chen, and Q. Li, “Face recognition method based on SURF feature,” IEEE Symposium on Computer Network and Multimedia Technology, Wuhan, China, Jan. 18-20, 2009, pp. 1-4.
    [37]H. Li, T. Xu, J. Li, and L. Zhang, “Face recognition based on improved SURF,” IEEE Conference on Intelligent System Design and Engineering Applications, Hong Kong, China, Jan. 16-18, 2013, pp. 755-758.
    [38]B. N. Kang, J. Yoon, H. Park, and D. Kim, “Face recognition using affine dense SURF-like descriptors,” IEEE Conference on Consumer Electronics, Las Vegas, NV, USA, Jan. 10-13, 2014, pp. 120-130.
    [39]A. Vinay, V. Vasuki, S. Bhat, K. S. Jayanth, K. B. Murthy, and S. Natarajan, “Two dimensionality reduction techniques for SURF based face recognition,” Procedia Computer Science, vol. 85, pp. 241-248, 2016.
    [40]P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, Kauai, HI, USA, Dec. 8-14, 2001, pp. I-511-I-518.
    [41]R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” IEEE International Conference on Image Processing, vol. 1, Rochester, NY, USA, 2002, pp. I-900-I-903.
    [42]K. Nasrollahi and T. B. Moeslund, “Haar-like features for robust real-time face recognition,” IEEE Conference on Image Processing, Melbourne, VIC, Australia, Sep. 15-18, 2013, pp. 3073-3077.
    [43]J. Zhu and Z. Chen, “Real time face detection system using AdaBoost and Haar-like features,” IEEE Conference on Information Science and Control Engineering, Shanghai, China, Apr. 24-26, 2015, pp. 404-407.
    [44]T. Mita, T. Kaneko, and O. Hori, “Joint Haar-like features for face detection,” IEEE International Conference on Computer Vision (ICCV'05), vol. 2, Beijing, China, Oct. 17-21, 2005, pp. 1619-1626.
    [45]T. T. Do, K. N. Doan, T. H. Le, and B. H. Le, “Boosted of Haar-like features and local binary pattern based face detection,” IEEE Conference on Computing and Communication Technologies, Da Nang, Vietnam, Jul. 13-17, 2009, pp. 1-8.
    [46]張循鋰, “A study of the feature-entropy-based AdaBoost algorithm applied to face detection,” Ph.D. dissertation, Department of Computer Science and Engineering, Tatung University, Taiwan, 2011 (in Chinese).
    [47]Y. Zhao, L. Gong, B. Zhou, Y. Huang, and C. Liu, “Detecting tomatoes in greenhouse scenes by combining AdaBoost classifier and color analysis,” School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China, vol. 148, pp. 127-137, 2016.
    [48]C. H. Du, H. Zhu, L. M. Luo, J. Liu, and X. Y. Huang, “Face detection in video based on AdaBoost algorithm and skin model,” The Journal of China Universities of Posts and Telecommunications, vol. 20, no. 1, pp. 6, pp. 24-9, 2013.
    [49]M. Yang, J. Crenshaw, B. Augustine, R. Mareachen, and Y. Wu, “AdaBoost-based face detection for embedded systems,” Computer Vision and Image Understanding vol. 114, no. 11, pp. 1116-1125, 2010.
    [50]P. F. Alcantarilla, A. Bartoli, and A. J. Davison, “KAZE features,” European Conference on Computer Vision, vol. 7557, Florence, Italy, Oct. 7-13, 2012, pp. 214-227.
    [51]Q. Zhu and Z. Lei, “KAZE Algorithm applied in augmented reality,” School of Software Engineering, Beijing University of Technology, 2015.
    [52]Y. Liu, C. Lan, C. Li, F. Mo, and H. Wang, “S-AKAZE: An effective point-based method for image matching,” Optik-International Journal for Light and Electron Optics, vol. 127, no. 14, pp. 5670-5681, 2016.
    [53]K. L. Prasad, T. C. M. Rao, and V. Kannan, “A novel semi-blind video watermarking using KAZE-PCA-2D Haar DWT scheme,” IEEE Conference on Computational Intelligence and Computing Research, Madurai, India, Dec. 10-12, 2015, pp. 1-8.
    [54]T. M. Thanh, P. T. Hiep, T. M. Tam, and K. Ryuji, “Frame-patch matching based robust video watermarking using KAZE feature,” IEEE Conference on Multimedia and Expo, San Jose, CA, USA, Jul. 15-19, 2013, pp. 1-6.
    [55]Y. Liu, C. Lan, F. Yao, L. Li, and C. Li, “Oblique remote sensing image matching based on improved AKAZE algorithm,” IEEE Conference on Information Science and Technology, Dalian, China, May 6-8, 2016, pp. 448-454.
    [56]J. Weickert, B. T. H. Romeny, and M. A. Viergever, “Efficient and reliable schemes for nonlinear diffusion filtering,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 398-410, 1998.
    [57]T. S. Yoo, Scale and statistics in variable conductance diffusion, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, 1994.
    [58]D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
    [59]M. Brown and D. G. Lowe, “Invariant features from interest point groups,” Department of Computer Science, University of British Columbia, Vancouver, Canada, 2002.
    [60]O. Andersson and S. Reyna Marquez, “A comparison of object detection algorithms using unmanipulated testing images: Comparing SIFT, KAZE, AKAZE and ORB,” Student thesis, Computer Science and Communication, Kungliga Tekniska högskolan (KTH), Stockholm, Sweden, 2016.
    [61]“Visual Studio,” http://msdn.microsoft.com/
    [62]“OpenCV,” http://opencv.org/
    [63]“Psychological image collection at stirling,” http://pics.psych.stir.ac.uk/
