
Graduate Student: Chun-Pu Chin (金鈞莆)
Thesis Title: Using Differential Evolution Algorithms to Generate the Best Adversarial Patch Attack to Face Recognition Systems (使用差分進化演算法產生最佳對抗補丁攻擊人臉識別系統)
Advisor: Shang-Hsiang Shen (沈上翔)
Committee Members: I-Le Wu (吳怡樂), Chao-Tung Yang (楊朝棟), Wei-Hung Lin (林韋宏), Chang-Piao Yang (楊昌彪), Shang-Hsiang Shen (沈上翔)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Graduation Academic Year: 110
Language: Chinese
Number of Pages: 37
Keywords: Deep Learning, Black Box Attack, Adversarial Patch, Face Recognition Systems, Differential Evolution Algorithm
Record Statistics: 237 views, 0 downloads
Abstract:

    With the development and innovation of deep learning, the technology has been adopted in more and more fields, such as image recognition, image classification, and face recognition. At the same time, the problems of deep learning have gradually come into public view. Face recognition systems, for example, are usually deployed for security: the face-unlock feature of a mobile phone is meant to prevent others from using the device. If the face recognition system contains vulnerabilities, however, that protection is effectively useless, and anyone can break into the system. The vulnerability discussed here is the generation of adversarial examples. An adversarial example is produced by adding a small perturbation to the original input image; a human observer can hardly tell the adversarial example apart from the original, yet to a neural network the two are completely different. In face recognition, for instance, a tester who is actually user A may be misidentified as user B once the perturbation is added, which is a critical problem. This thesis searches for the most sensitive point of a face recognition system, a point at which a patch can cause the system to misidentify a face. Two attack modes are studied. The first is false acceptance: the subject is not enrolled in the system's face database, yet by pasting on the adversarial patch found with the proposed method the subject is "accepted" by the recognition system; this attack succeeds 50% of the time. The second is false rejection: the subject is enrolled in the face database, but pasting on the adversarial patch found with the proposed method causes the system to "reject" the subject's identity; this attack succeeds 30% of the time.
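    To make the black-box search concrete, the sketch below is an illustration only, not the thesis's actual implementation: it uses differential evolution to optimize an adversarial patch's position and colour so that the patched probe image drifts away from the enrolled identity's embedding, i.e. a false-rejection-style attack. The embed() placeholder, the patch size, the solid-colour patch parameterization, and the distance threshold are assumptions standing in for the face recognition model and its decision rule.

    # Illustrative sketch (not the thesis code): black-box adversarial-patch
    # search with differential evolution against a hypothetical face-embedding model.
    import numpy as np
    from scipy.optimize import differential_evolution

    PATCH = 20          # patch side length in pixels (assumed value)
    THRESHOLD = 1.1     # recognizer's embedding-distance threshold (assumed value)

    def embed(image):
        """Placeholder for the recognizer's embedding network (e.g. a FaceNet-style model)."""
        raise NotImplementedError

    def apply_patch(image, x, y, r, g, b):
        """Paste a solid-colour square patch onto a copy of an HxWx3 image."""
        patched = image.copy()
        patched[int(y):int(y) + PATCH, int(x):int(x) + PATCH] = (r, g, b)
        return patched

    def fitness(params, probe, enrolled):
        """Negative distance between the patched probe's embedding and the enrolled
        identity's embedding; minimizing it pushes the probe away from the enrolled
        identity (false rejection)."""
        x, y, r, g, b = params
        patched = apply_patch(probe, x, y, r, g, b)
        return -np.linalg.norm(embed(patched) - enrolled)

    def search_patch(probe, enrolled, image_size=160):
        """Run differential evolution over the patch position and RGB colour."""
        bounds = [(0, image_size - PATCH), (0, image_size - PATCH),
                  (0, 255), (0, 255), (0, 255)]
        result = differential_evolution(fitness, bounds, args=(probe, enrolled),
                                        popsize=20, maxiter=50, seed=0)
        success = -result.fun > THRESHOLD   # patched distance exceeds the threshold
        return result.x, success

    A false-acceptance variant of this sketch would instead minimize the distance to a target identity's embedding and declare success when that distance falls below the recognizer's threshold.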

    Table of Contents:
    Abstract (Chinese) 2
    Abstract (English) 3
    Acknowledgments 4
    Table of Contents 5
    List of Figures 6
    List of Tables 6
    Chapter 1: Introduction 7
      1.1 Research Motivation and Objectives 7
      1.2 Related Work 8
    Chapter 2: System Architecture and Hardware Specifications 9
      2.1 System Architecture 9
      2.2 Hardware Specifications 9
    Chapter 3: Introduction to Deep Learning and Face Recognition 10
      3.1 Introduction to Deep Learning Networks 10
      3.2 Introduction to Convolutional Neural Networks 10
      3.3 Introduction to Face Recognition 12
      3.4 Face Recognition Systems 13
    Chapter 4: Introduction to Adversarial Attacks 19
      4.1 Adversarial Examples 19
      4.2 Objectives of Adversarial Attacks 20
      4.3 Information Required for Adversarial Attacks 22
    Chapter 5: Research Description and Results 23
      5.1 Introduction to Differential Evolution 23
      5.2 Application of Differential Evolution 24
      5.3 Research Procedure 26
      5.4 Experimental Results 27
    Chapter 6: Conclusions 33
      6.1 Research Results 33
      6.2 Future Work 33
    References 34

    Full text available from 2025/08/31 (campus network).
    Full text available from 2025/08/31 (off-campus network).
    Full text available from 2025/08/31 (National Central Library: Taiwan NDLTD).