
Graduate Student: 蔡秉宸 (Ping-Chen Tsai)
Thesis Title: 塗改最敏感的像素點使人臉辨識出錯 (Altering the most sensitive pixels to make face recognition fail)
Advisor: 洪西進 (Shi-Jinn Horng)
Committee Members: 林灶生 (Lin Zao Sheng), 沈上翔 (Shen Chen Shang)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2020
Academic Year of Graduation: 108 (2019-2020)
Language: Chinese
Number of Pages: 39
Keywords: Face Recognition, Adversarial Example, Deep Learning, Artificial Intelligence
  • With the development of big data and convolutional neural networks (CNNs), deep learning has grown rapidly in recent years and now underpins many applications such as face recognition, object recognition, and image classification. Despite its many successes, deep learning cannot withstand attacks by adversarial examples. An adversarial example is a maliciously crafted input: by slightly modifying an input image, a human may notice the change yet still recognize the content, while the neural network misclassifies it; some adversarial examples are imperceptible to humans and still deceive the network. Adversarial examples therefore raise security concerns for deep learning systems, since falsified input can cause wrong decisions even when the network architecture is unknown. To date, no defense method completely blocks such attacks, and prior work has shown that even printed adversarial images, captured through real-world input devices such as phone cameras or video cameras, can still cause misrecognition. This thesis attacks the real-world input side of a recognition system directly and studies the effectiveness of the attack. Using a popular face recognition network as the target, a natural adversarial example is created by modifying a single pixel to simulate a damaged camera lens, and the result is used as the input to the neural network. Experimental results show a false rejection rate of 9.93% and a false acceptance rate of 5%. To verify these results, a real camera with a defaced lens is placed at the input of the face recognition system, and the network indeed misrecognizes the captured faces.
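
    As a rough illustration of the attack and the reported error metrics, the sketch below (Python/NumPy) shows how a single "stuck" camera pixel might be injected into a face image before verification, and how a false rejection rate and a false acceptance rate are computed from embedding distances. This is a minimal sketch under stated assumptions: the embedding function embed, the threshold TAU, and all function names are hypothetical stand-ins for a FaceNet-style verifier, not the thesis's actual implementation.

        # Minimal sketch; `embed`, TAU, and the function names are illustrative assumptions.
        import numpy as np

        TAU = 1.1  # hypothetical squared-L2 verification threshold

        def stuck_pixel(image: np.ndarray, x: int, y: int, value=(255, 255, 255)) -> np.ndarray:
            """Simulate a damaged camera sensor element by forcing one pixel to a fixed value."""
            attacked = image.copy()      # image: H x W x 3, uint8
            attacked[y, x] = value
            return attacked

        def verify(embed, probe: np.ndarray, enrolled_embedding: np.ndarray) -> bool:
            """Accept the probe face if its embedding is close enough to the enrolled one."""
            distance = np.sum((embed(probe) - enrolled_embedding) ** 2)
            return distance < TAU

        def error_rates(genuine_distances, impostor_distances, threshold=TAU):
            """False rejection rate (genuine pairs rejected) and
            false acceptance rate (impostor pairs accepted) at a distance threshold."""
            genuine = np.asarray(genuine_distances)
            impostor = np.asarray(impostor_distances)
            return float(np.mean(genuine >= threshold)), float(np.mean(impostor < threshold))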

    Table of Contents
    Chapter 1  Introduction
        1.1  Research Motivation and Objectives
        1.2  Related Work
    Chapter 2  System Architecture and Hardware Specifications
        2.1  System Architecture
        2.2  Hardware Specifications
    Chapter 3  Introduction to Deep Learning and Face Recognition
        3.1  Deep Learning Networks
        3.2  Principles of Face Recognition
    Chapter 4  Overview of Adversarial Examples
        4.1  Adversarial Example Theory
        4.2  Adversarial Attacks
        4.3  Adversarial Defenses
    Chapter 5  Method and Results
        5.1  Research Workflow
        5.2  The Face Recognition Network
        5.3  Research Method
        5.4  Experimental Results
    Chapter 6  Conclusion
        6.1  Research Contributions
        6.2  Future Work
    References
    Appendix

    References
    [1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” International Conference on Learning Representations (ICLR), 2014.
    [2] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” ICLR Workshop Track, 2017.
    [3] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” Proceedings of the British Machine Vision Conference (BMVC), vol. 1, no. 3, pp. 1-10, 2015.
    [4] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” NIPS'12: 25th International Conference on Neural Information Processing Systems, vol. 1, Dec. 2012, pp. 1097-1105.
    [5] F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 815-823.
    [6] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” International Conference on Learning Representations (ICLR), 2015.
    [7] J. Su, D. V. Vargas, and K. Sakurai, “One pixel attack for fooling deep neural networks,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 5, pp. 828-841, Oct. 2019.
    [8] W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” Network and Distributed System Security Symposium (NDSS), 2018, pp. 1-15.
    [9] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
    [10] W. He, J. Wei, X. Chen, N. Carlini, and D. Song, “Adversarial example defenses: Ensembles of weak defenses are not strong,” 11th USENIX Workshop on Offensive Technologies (WOOT 17), 2017.
    [11] G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.
    [12] D. H. Hubel and T. N. Wiesel, “Receptive fields, binocular interaction and functional architecture in the cat's visual cortex,” The Journal of Physiology, vol. 160, no. 1, pp. 106-154, 1962.
    [13] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
    [14] M. Wang and W. Deng, “Deep face recognition: A survey,” arXiv preprint, Apr. 2018.
    [15] A. Hasnat, J. Bohne, J. Milgram, S. Gentric, and L. Chen, “DeepVisage: Making face recognition simple yet with powerful generalization skills,” arXiv preprint arXiv:1703.08388, 2017.
    [16] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” IEEE European Symposium on Security and Privacy, 2016.
    [17] S. Gu and L. Rigazio, “Towards deep neural network architectures robust to adversarial examples,” arXiv preprint arXiv:1412.5068, 2014.

    Full-text availability: 2025/07/31 (campus network)
    Full-text availability: not authorized for public release (off-campus network)
    Full-text availability: not authorized for public release (National Central Library: Taiwan NDLTD system)