
Author: Joshua C. Manzano
Thesis Title: Facestamp: Self-Reference Proactive Deepfake Detection using Facial Attribute Deep Watermarking
Advisor: Yi-Ling Chen (陳怡伶)
Committee Members: Kai-Lung Hua (花凱龍), Yung-Yao Chen (陳永耀)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Graduation Academic Year: 110
Language: English
Number of Pages: 42
Keywords: deepfakes, steganography, proactive
Access Counts: 210 views, 0 downloads

Abstract: Deepfakes are becoming progressively harder to distinguish and pose a growing threat to image authenticity in society. Existing studies on deepfake detection rely on artifacts or flaws introduced by the deepfake generation process, which may not be present in novel deepfake models. This necessitates a proactive approach that is more robust and generalizable. Recent works on proactive defense rely on deep watermarking, embedding a Unique Identification (UID) into an image. To verify authenticity, a trusted authority must decode the hidden UID and cross-reference it against a centralized database containing all existing UIDs. This reliance on a trusted centralized authority that stores individual UIDs makes the approach inflexible and impedes widespread adoption. Moreover, its effectiveness is constrained when the number of users is limited. In this paper, we present Facestamp, a deep watermarking model for self-reference proactive defense against deepfakes. We address this problem by embedding facial attributes, instead of a UID, directly into an image using deep watermarking. Because the embedded attributes are derived from the image itself, the legitimacy of the image can be verified by identifying inconsistencies between the decoded attributes and the attributes currently present in the image. This eliminates the need for a centralized verification process and enables independent verification. In our experiments, we show that Facestamp recovers facial attributes in the wild and verifies the current face to determine the legitimacy of a given image. Facestamp defends against deepfakes across three deepfake models, shows promising performance on two popular datasets, and is more robust to common post-processing image operations than existing methods.
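The following is a minimal sketch of the self-reference verification idea described in the abstract: attributes decoded from the hidden watermark are compared with attributes re-extracted from the face as it currently appears, and a disagreement flags the image as manipulated. The binary attribute encoding, the mismatch threshold, and the helper names are illustrative assumptions, not the thesis's actual interfaces.

```python
# Illustrative sketch of self-reference verification (assumed interfaces,
# not the thesis's implementation).
import numpy as np

def is_authentic(decoded_attrs: np.ndarray,
                 current_attrs: np.ndarray,
                 max_mismatches: int = 2) -> bool:
    """Judge authenticity by comparing embedded and observed facial attributes.

    decoded_attrs:  binary attribute vector recovered by the watermark decoder.
    current_attrs:  binary attribute vector predicted from the image as it is now.
    max_mismatches: tolerance in bits before the image is flagged as fake;
                    the value 2 is a placeholder, not the thesis's setting.
    """
    mismatches = int(np.sum(decoded_attrs != current_attrs))  # Hamming distance
    return mismatches <= max_mismatches

# Toy usage: a manipulation that flips several attributes is flagged as fake.
embedded = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # stamped at publication time
observed = np.array([0, 1, 0, 1, 0, 0, 1, 0])   # extracted after manipulation
print(is_authentic(embedded, observed))          # False -> image judged manipulated
```

Because both vectors come from the image itself, no centralized registry of identifiers is needed; any party with the decoder and an attribute predictor can perform the check independently.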

Table of Contents:
Recommendation Letter
Approval Letter
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
1 Introduction
2 Related Literature
3 Method
  3.1 Overview
    3.1.1 Watermarking Model
    3.1.2 Adaptive Distortion Training
    3.1.3 Channel Coding
    3.1.4 Watermarking Training Loss
4 Results
  4.1 Implementation Details
  4.2 Experiments
    4.2.1 Ablation
    4.2.2 Comparison
5 Conclusion
References

