
Graduate Student: 蔡佳吟 (Jia-Yin Cai)
Thesis Title: 結合錯誤級別分析與注意力機制的即時人臉深度偽造影片檢測
(A Real Time Face Deepfake Detection Based on Error Level Analysis and Attention Mechanism)
Advisor: 洪西進 (XI-JIN HONG)
Committee Members: 謝仁偉 (REN-WEI XIE), 楊竹星 (ZHU-XING YANG), 李正吉 (ZHENG-JI LI), 林祝興 (ZHU-XING LIN)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2022
Graduation Academic Year: 110
Language: Chinese
Number of Pages: 44
Chinese Keywords: 偽造影片辨識、深度學習、圖像分類、注意力機制、錯誤級別分析 (forged video recognition, deep learning, image classification, attention mechanism, error level analysis)
Foreign Keywords: DeepFake Detection, Attention, Error Level Analysis, Deep Learning, Image Classification
    In recent years, DeepFake technology has made important contributions to the film and art industries, but it has also fueled a flood of fake news and disinformation, making it hard for people to calmly judge the veracity of information in an age of information overload. This is why the ability to accurately identify videos forged with DeepFake has become so important. Both academia and industry have proposed DeepFake detection methods that consider different aspects of the problem, with fairly good results. However, with the rapid development of artificial intelligence and deep learning, DeepFake techniques also keep evolving, and new methods for producing forged videos emerge one after another, degrading the performance of earlier detection models. With so many DeepFake techniques, achieving good accuracy on a single dataset is no longer remarkable; whether a detection model achieves good robustness and generalization deserves more attention.

    In this study, an exclusive dataset covering 12 different DeepFake techniques is built by combining several existing DeepFake datasets with a self-generated dataset. The exclusive dataset balances the number of real and fake samples and keeps the contributions of the individual source datasets comparable, preventing the detection model from overfitting to either class, to particular faces, or to a particular DeepFake technique, and thereby achieving high robustness and generalization. In addition, this study proposes a new DeepFake detection model, ElaEfficientNetV2Att, built on the EfficientNetV2 architecture and combining Error Level Analysis (ELA) with an attention mechanism. ELA first indicates to the EfficientNetV2 model which feature regions are crucial for distinguishing real from fake, since the discriminative regions may differ across DeepFake techniques; attention then focuses on those regions to improve accuracy. Experimental results show an average accuracy of 74.52% over four different test datasets, making ElaEfficientNetV2Att the most robust of the compared detection models. Although it cannot surpass the accuracy of models trained on a single specific dataset, ElaEfficientNetV2Att is still the best model for practical use: it takes only 0.08 seconds to process a frame on a Jetson Xavier NX edge computing device, and its generalization is the best among more than a dozen previous detection models, showing that it combines good accuracy, real-time performance, and generalization.
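The ELA preprocessing step described above can be sketched in a few lines. This is a minimal illustration of classic resave-and-difference Error Level Analysis in the spirit of Krawetz [12], using Pillow and NumPy; the resave quality (90) and the rescaling are assumptions of this sketch, not the thesis's exact settings.

```python
# Minimal Error Level Analysis (ELA) sketch: resave the image as JPEG
# at a fixed quality and amplify the per-pixel difference. Regions
# pasted in after the last save (e.g. a swapped face) tend to show a
# different error level than the rest of the frame.
from io import BytesIO

import numpy as np
from PIL import Image, ImageChops


def ela_image(img, quality=90):
    """Return the amplified resave-difference image used as the ELA map."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(img.convert("RGB"), resaved)
    # Rescale so faint error levels become visible / usable as features.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return Image.eval(diff, lambda px: min(255, int(px * scale)))


# Synthetic demo: a JPEG round-tripped frame with a pristine patch
# pasted over the centre, mimicking a spliced face region.
buf = BytesIO()
Image.new("RGB", (64, 64), (120, 90, 200)).save(buf, format="JPEG", quality=70)
buf.seek(0)
tampered = Image.open(buf).convert("RGB")
tampered.paste(Image.new("RGB", (16, 16), (120, 90, 200)), (24, 24))
ela_map = np.asarray(ela_image(tampered))
print(ela_map.shape)  # (64, 64, 3)
```

In the thesis pipeline, a map like `ela_map` would accompany the RGB frame into the EfficientNetV2 backbone; how the two inputs are fused is specific to the thesis and not reproduced here.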


    In recent years, DeepFake technology has made important contributions to the movie and art industries, but it has also produced more and more fake news and disinformation. With the rapid development of artificial intelligence and deep learning, DeepFake technology continues to evolve, making fake videos ever easier to produce. As a result, earlier DeepFake detection models have become ineffective. With more and more DeepFake techniques, achieving good accuracy on a single dataset is no longer remarkable; more attention must be paid to a detection model's robustness and generalization.

    In this research, an exclusive dataset containing 12 different DeepFake techniques is established by combining multiple existing DeepFake datasets with a self-generated dataset. The exclusive dataset is pre-processed to prevent the model from overfitting. In addition, this research proposes a new DeepFake detection model, ElaEfficientNetV2Att, based on the EfficientNetV2 architecture and combining Error Level Analysis (ELA) with an attention mechanism. ELA indicates to the EfficientNetV2 model which feature regions can distinguish real from fake, because the discriminative regions may differ across DeepFake techniques; attention is then used to focus on these regions to improve accuracy. Experimental results show that ElaEfficientNetV2Att is the most robust of the compared detection models, with an average accuracy of 74.52% on four different test datasets. Although it cannot surpass the accuracy of models trained on a specific single dataset, it is still the best model for practical applications: it takes only 0.08 seconds to process a frame on the Jetson Xavier NX edge computing device, and its generalization ability is the best compared with more than a dozen previous detection models. Overall, ElaEfficientNetV2Att achieves good accuracy, real-time performance, and generalization at the same time.
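The abstract does not specify the exact attention module used to reweight the ELA-indicated feature regions. As one hedged illustration, a squeeze-and-excitation-style channel attention (the family to which ECA-Net [79] also belongs) can be sketched in NumPy; the shapes and weights below are invented for the example and are not the thesis's actual parameters.

```python
# Channel-attention sketch: pool each feature map to a scalar, pass the
# pooled vector through a small bottleneck, and use sigmoid gates to
# reweight the channels so informative regions contribute more.
import numpy as np


def channel_attention(feats, w1, w2):
    """feats: (C, H, W) feature maps; w1: (C//r, C) and w2: (C, C//r)
    are the bottleneck weights (reduction ratio r)."""
    squeeze = feats.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # bottleneck + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return feats * gates[:, None, None]            # reweight each channel


rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))   # toy stand-in for backbone features
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feats, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In a real model these gates would be learned end-to-end so that channels responding to the regions highlighted by ELA receive weights near 1 and uninformative channels are suppressed.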

    Acknowledgements i
    Abstract (Chinese) ii
    ABSTRACT iii
    Table of Contents iv
    List of Figures vi
    List of Tables vii
    Chapter 1: Introduction 1
    1.1 Research Background and Motivation 1
    1.2 Research Objectives 2
    1.3 Thesis Organization 2
    Chapter 2: Literature Review 3
    2.1 Video Manipulation Methods 3
    2.2 Manipulated Video Detection Methods 6
    2.3 DeepFake Video Datasets 7
    2.4 Convolutional Neural Networks 10
    2.5 Error Level Analysis 12
    2.6 Attention Mechanism 15
    2.7 Facenet-Pytorch MTCNN 16
    Chapter 3: Research Methods 18
    3.1 Hardware and Software Environment 18
    3.2 Dataset 18
    3.2.1 Generating a New Dataset 18
    3.2.2 Training and Test Sets 19
    3.2.3 Data Pre-processing 21
    3.3 ELAEfficientNetV2Att 22
    Chapter 4: Research Results 25
    4.1 Experimental Results 25
    4.2 Performance 25
    4.3 Generalization Ability 26
    4.4 Real Time 27
    4.5 Ablation Study 27
    Chapter 5: Conclusions and Suggestions 29
    References 30

    [1] “小玉Deepfake換臉色情片風暴 YouTube官方不再沉默:無限期停止頻道營利資格! | 社群網路 | 數位 | 聯合新聞網.” https://udn.com/news/story/7088/5863556 (accessed Jul. 23, 2022).
    [2] “俄烏開戰/第4戰場「網路資訊戰」 各式假訊息漫天飛 | 公視新聞網 PNN.” https://news.pts.org.tw/article/570211 (accessed Jul. 23, 2022).
    [3] “Deepfake假造澤倫斯基投降影片 專家:恐是俄資訊戰冰山一角 | 俄烏戰火延燒 | 全球 | 聯合新聞網.” https://udn.com/news/story/122663/6172771 (accessed Jul. 23, 2022).
    [4] I. J. Goodfellow et al., “Generative Adversarial Networks.” arXiv, Jun. 10, 2014. doi: 10.48550/arXiv.1406.2661.
    [5] “成長超過 330%,美英成重災區!一文了解 Deepfake 2020 發展現狀 | TechNews 科技新報.” https://technews.tw/2020/11/28/deepfake-2020-development-status/ (accessed Jul. 23, 2022).
    [6] “Deepfake換臉技術是什麼、最早用在哪?電影《玩命關頭》靠它重現保羅·沃克 | 網路人氣話題 | DailyView 網路溫度計.” https://dailyview.tw/Popular/Detail/12180 (accessed Jul. 23, 2022).
    [7] “Deepfake Detection Challenge.” https://kaggle.com/competitions/deepfake-detection-challenge (accessed Jul. 23, 2022).
    [8] “遏Deepfake偽造性影片 行政院拍板《刑法》修正 | 金融脈動 | 金融 | 經濟日報.” https://money.udn.com/money/story/5613/6154142?from=edn_referralnews_story_ch12017 (accessed Jul. 23, 2022).
    [9] M. Tan and Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” arXiv, Sep. 11, 2020. doi: 10.48550/arXiv.1905.11946.
    [10] A. Dosovitskiy et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.” arXiv, Jun. 03, 2021. doi: 10.48550/arXiv.2010.11929.
    [11] W. Zaremba, I. Sutskever, and O. Vinyals, “Recurrent Neural Network Regularization.” arXiv, Feb. 19, 2015. doi: 10.48550/arXiv.1409.2329.
    [12] N. Krawetz, “A Picture’s Worth... Digital Image Analysis and Forensics, Version 2,” 2007. https://www.semanticscholar.org/paper/A-Picture-%E2%80%99-s-Worth-.-.-.-Digital-Image-Analysis-2-Krawetz/ecbca666dd3f2590942389c6a3b1bbb74a138173 (accessed Jul. 23, 2022).
    [13] L. Li et al., “Face X-ray for More General Face Forgery Detection.” arXiv, Apr. 18, 2020. doi: 10.48550/arXiv.1912.13458.
    [14] Y. Qian, G. Yin, L. Sheng, Z. Chen, and J. Shao, “Thinking in Frequency: Face Forgery Detection by Mining Frequency-aware Clues.” arXiv, Oct. 27, 2020. doi: 10.48550/arXiv.2007.09355.
    [15] H. Khalid, S. Tariq, M. Kim, and S. S. Woo, “FakeAVCeleb: A Novel Audio-Video Multimodal Deepfake Dataset.” arXiv, Mar. 01, 2022. doi: 10.48550/arXiv.2108.05080.
    [16] P. Kwon, J. You, G. Nam, S. Park, and G. Chae, “KoDF: A Large-scale Korean DeepFake Detection Dataset,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2021, pp. 10724–10733. doi: 10.1109/ICCV48922.2021.01057.
    [17] B. Zi, M. Chang, J. Chen, X. Ma, and Y.-G. Jiang, “WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection.” arXiv, Jan. 05, 2021. doi: 10.48550/arXiv.2101.01456.
    [18] L. Jiang, R. Li, W. Wu, C. Qian, and C. C. Loy, “DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection.” arXiv, Dec. 11, 2020. doi: 10.48550/arXiv.2001.03024.
    [19] B. Dolhansky et al., “The DeepFake Detection Challenge (DFDC) Dataset.” arXiv, Oct. 27, 2020. doi: 10.48550/arXiv.2006.07397.
    [20] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu, “Celeb-DF: A Large-scale Challenging Dataset for DeepFake Forensics.” arXiv, Mar. 16, 2020. doi: 10.48550/arXiv.1909.12962.
    [21] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “FaceForensics++: Learning to Detect Manipulated Facial Images.” arXiv, Aug. 26, 2019. doi: 10.48550/arXiv.1901.08971.
    [22] P. Korshunov and S. Marcel, “DeepFakes: a New Threat to Face Recognition? Assessment and Detection,” Dec. 2018. Accessed: Jul. 23, 2022. [Online]. Available: https://ui.adsabs.harvard.edu/abs/2018arXiv181208685K
    [23] X. Yang, Y. Li, and S. Lyu, “Exposing Deep Fakes Using Inconsistent Head Poses.” arXiv, Nov. 13, 2018. doi: 10.48550/arXiv.1811.00661.
    [24] D. Nigeria, “Fake-Detection-dataset-for-deepfake-from-Google-and-Jigsaw.” Apr. 06, 2020. Accessed: Jul. 23, 2022. [Online]. Available: https://github.com/DataScienceNigeria/Fake-Detection-dataset-for-deepfake-from-Google-and-Jigsaw
    [25] Y. Blau and T. Michaeli, “Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff.” arXiv, Jul. 30, 2019. doi: 10.48550/arXiv.1901.07821.
    [26] N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete Cosine Transform,” IEEE Transactions on Computers, vol. C–23, no. 1, pp. 90–93, Jan. 1974, doi: 10.1109/T-C.1974.223784.
    [27] R. Rafique, M. Nawaz, H. Kibriya, and M. Masood, “DeepFake Detection Using Error Level Analysis and Deep Learning,” in 2021 4th International Conference on Computing & Information Sciences (ICCIS), Nov. 2021, pp. 1–4. doi: 10.1109/ICCIS54243.2021.9676375.
    [28] W. Zhang, C. Zhao, and Y. Li, “A Novel Counterfeit Feature Extraction Technique for Exposing Face-Swap Images Based on Deep Learning and Error Level Analysis,” Entropy, vol. 22, no. 2, Art. no. 2, Feb. 2020, doi: 10.3390/e22020249.
    [29] W. Zhang and C. Zhao, “Exposing Face-Swap Images Based on Deep Learning and ELA Detection,” Proceedings, vol. 46, no. 1, Art. no. 1, 2019, doi: 10.3390/ecea-5-06684.
    [30] T. S. Gunawan, S. A. M. Hanafiah, M. Kartiwi, N. Ismail, N. F. Za’bah, and A. N. Nordin, “Development of Photo Forensics Algorithm by Detecting Photoshop Manipulation using Error Level Analysis,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 7, no. 1, Art. no. 1, Jul. 2017, doi: 10.11591/ijeecs.v7.i1.pp131-137.
    [31] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks,” IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, Oct. 2016, doi: 10.1109/LSP.2016.2603342.
    [32] T. Esler, “Face Recognition Using Pytorch.” Jul. 22, 2022. Accessed: Jul. 23, 2022. [Online]. Available: https://github.com/timesler/facenet-pytorch
    [33] A. Rosebrock, “imutils: A series of convenience functions to make basic image processing functions such as translation, rotation, resizing, skeletonization, displaying Matplotlib images, sorting contours, detecting edges, and much more easier with OpenCV and both Python 2.7 and Python 3.” Accessed: Jul. 23, 2022. [Online]. Available: https://github.com/jrosebr1/imutils
    [34] “dlib C++ Library.” http://dlib.net/ (accessed Jul. 23, 2022).
    [35] H.-E. Sun, “卷積神經網路 (CNN) 的發展,” Taiwan AI Academy, Mar. 25, 2020. https://medium.com/ai-academy-taiwan/%E5%8D%B7%E7%A9%8D%E7%A5%9E%E7%B6%93%E7%B6%B2%E8%B7%AF-cnn-%E7%9A%84%E7%99%BC%E5%B1%95-4c5d29e60c55 (accessed Jul. 23, 2022).
    [36] Y. LeCun et al., “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, Dec. 1989, doi: 10.1162/neco.1989.1.4.541.
    [37] M. Tan and Q. V. Le, “EfficientNetV2: Smaller Models and Faster Training.” arXiv, Jun. 23, 2021. doi: 10.48550/arXiv.2104.00298.
    [38] A. Brock, S. De, S. L. Smith, and K. Simonyan, “High-Performance Large-Scale Image Recognition Without Normalization.” arXiv, Feb. 11, 2021. doi: 10.48550/arXiv.2102.06171.
    [39] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 2261–2269. doi: 10.1109/CVPR.2017.243.
    [40] F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions.” arXiv, Apr. 04, 2017. doi: 10.48550/arXiv.1610.02357.
    [41] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning.” arXiv, Aug. 23, 2016. doi: 10.48550/arXiv.1602.07261.
    [42] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision.” arXiv, Dec. 11, 2015. doi: 10.48550/arXiv.1512.00567.
    [43] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition.” arXiv, Apr. 10, 2015. doi: 10.48550/arXiv.1409.1556.
    [44] C. Szegedy et al., “Going Deeper with Convolutions.” arXiv, Sep. 16, 2014. doi: 10.48550/arXiv.1409.4842.
    [45] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems, 2012, vol. 25. Accessed: Jul. 23, 2022. [Online]. Available: https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html
    [46] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, “Recurrent Models of Visual Attention.” arXiv, Jun. 24, 2014. doi: 10.48550/arXiv.1406.6247.
    [47] D. Bahdanau, K. Cho, and Y. Bengio, “Neural Machine Translation by Jointly Learning to Align and Translate,” in 3rd International Conference on Learning Representations (ICLR 2015), Jan. 2015. Accessed: Jul. 23, 2022. [Online]. Available: http://www.scopus.com/inward/record.url?scp=85062889504&partnerID=8YFLogxK
    [48] K. Xu et al., “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.” arXiv, Apr. 19, 2016. doi: 10.48550/arXiv.1502.03044.
    [49] M.-T. Luong, H. Pham, and C. D. Manning, “Effective Approaches to Attention-based Neural Machine Translation.” arXiv, Sep. 20, 2015. doi: 10.48550/arXiv.1508.04025.
    [50] K. Ahmed, N. S. Keskar, and R. Socher, “Weighted Transformer Network for Machine Translation.” arXiv, Nov. 06, 2017. doi: 10.48550/arXiv.1711.02132.
    [51] A. Vaswani et al., “Attention Is All You Need.” arXiv, Dec. 05, 2017. doi: 10.48550/arXiv.1706.03762.
    [52] J. Liu, Y. Chen, K. Liu, and J. Zhao, “Event Detection via Gated Multilingual Attention Mechanism,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, Art. no. 1, Apr. 2018, doi: 10.1609/aaai.v32i1.11919.
    [53] R. Roy, I. Joshi, A. Das, and A. Dantcheva, “3D CNN Architectures and Attention Mechanisms for Deepfake Detection,” in Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks, C. Rathgeb, R. Tolosana, R. Vera-Rodriguez, and C. Busch, Eds. Cham: Springer International Publishing, 2022, pp. 213–234. doi: 10.1007/978-3-030-87664-7_10.
    [54] B. Chen, T. Li, and W. Ding, “Detecting deepfake videos based on spatiotemporal attention and convolutional LSTM,” Information Sciences, vol. 601, pp. 58–70, Jul. 2022, doi: 10.1016/j.ins.2022.04.014.
    [55] H. Zhao, W. Zhou, D. Chen, T. Wei, W. Zhang, and N. Yu, “Multi-attentional Deepfake Detection,” Mar. 2021, doi: 10.48550/arXiv.2103.02406.
    [56] A. Khormali and J.-S. Yuan, “ADD: Attention-Based DeepFake Detection Approach,” Big Data and Cognitive Computing, vol. 5, no. 4, Art. no. 4, Dec. 2021, doi: 10.3390/bdcc5040049.
    [57] A. Das, S. Das, and A. Dantcheva, “Demystifying Attention Mechanisms for Deepfake Detection,” in 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), Dec. 2021, pp. 1–7. doi: 10.1109/FG52635.2021.9667026.
    [58] “讓阿湯哥真假難辨的 Deepfake 技術,是什麼?如何捍衛隱私? – 資安趨勢部落格.” https://blog.trendmicro.com.tw/?p=63452 (accessed Jul. 23, 2022).
    [59] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, “Deepfakes and beyond: A Survey of face manipulation and fake detection,” Information Fusion, vol. 64, pp. 131–148, Dec. 2020, doi: 10.1016/j.inffus.2020.06.014.
    [60] “ThisPersonDoesNotExist - Random AI Generated Photos of Fake Persons.” https://this-person-does-not-exist.com/en (accessed Jul. 23, 2022).
    [61] “How Instagram’s AR filters became the new route to internet stardom | London Evening Standard | Evening Standard.” https://www.standard.co.uk/tech/instagram-filters-disney-2020-resolutions-trend-where-next-a4342366.html (accessed Jul. 23, 2022).
    [62] Y. Nirkin, Y. Keller, and T. Hassner, “FSGAN: Subject Agnostic Face Swapping and Reenactment.” arXiv, Aug. 16, 2019. doi: 10.48550/arXiv.1908.05932.
    [63] deepfakes, “deepfakes_faceswap.” Jul. 22, 2022. Accessed: Jul. 23, 2022. [Online]. Available: https://github.com/deepfakes/faceswap
    [64] J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner, “Face2Face: Real-time Face Capture and Reenactment of RGB Videos.” arXiv, Jul. 29, 2020. doi: 10.48550/arXiv.2007.14808.
    [65] L. Li, J. Bao, H. Yang, D. Chen, and F. Wen, “FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping.” arXiv, Sep. 15, 2020. doi: 10.48550/arXiv.1912.13457.
    [66] Y. Li, M.-C. Chang, and S. Lyu, “In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking,” in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Dec. 2018, pp. 1–7. doi: 10.1109/WIFS.2018.8630787.
    [67] S. Hu, Y. Li, and S. Lyu, “Exposing GAN-generated Faces Using Inconsistent Corneal Specular Highlights.” arXiv, Oct. 12, 2020. doi: 10.48550/arXiv.2009.11924.
    [68] F. Matern, C. Riess, and M. Stamminger, “Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations,” in 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), Jan. 2019, pp. 83–92. doi: 10.1109/WACVW.2019.00020.
    [69] D. Coccomini, N. Messina, C. Gennaro, and F. Falchi, “Combining EfficientNet and Vision Transformers for Video Deepfake Detection,” vol. 13233, 2022, pp. 219–229. doi: 10.1007/978-3-031-06433-3_19.
    [70] R. Chen, X. Chen, B. Ni, and Y. Ge, “SimSwap: An Efficient Framework For High Fidelity Face Swapping,” in Proceedings of the 28th ACM International Conference on Multimedia, Oct. 2020, pp. 2003–2011. doi: 10.1145/3394171.3413630.
    [71] N. Bonettini, E. D. Cannas, S. Mandelli, L. Bondi, P. Bestagini, and S. Tubaro, “Video Face Manipulation Detection Through Ensemble of CNNs,” in 2020 25th International Conference on Pattern Recognition (ICPR), Jan. 2021, pp. 5012–5019. doi: 10.1109/ICPR48806.2021.9412711.
    [72] 吳財俊, “Error Level Analysis As A Guide Mask For Robust Deepfake Detection,” Master’s thesis, Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2021. https://hdl.handle.net/11296/2f4ebw.
    [73] P. Zhou, X. Han, V. I. Morariu, and L. S. Davis, “Two-Stream Neural Networks for Tampered Face Detection,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jul. 2017, pp. 1831–1839. doi: 10.1109/CVPRW.2017.229.
    [74] D. Afchar, V. Nozick, J. Yamagishi, and I. Echizen, “MesoNet: a Compact Facial Video Forgery Detection Network,” in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Dec. 2018, pp. 1–7. doi: 10.1109/WIFS.2018.8630761.
    [75] Y. Li and S. Lyu, “Exposing DeepFake Videos By Detecting Face Warping Artifacts.” arXiv, May 22, 2019. doi: 10.48550/arXiv.1811.00656.
    [76] H. H. Nguyen, F. Fang, J. Yamagishi, and I. Echizen, “Multi-task Learning For Detecting and Segmenting Manipulated Facial Images and Videos.” arXiv, Jun. 17, 2019. doi: 10.48550/arXiv.1906.06876.
    [77] H. H. Nguyen, J. Yamagishi, and I. Echizen, “Capsule-Forensics: Using Capsule Networks to Detect Forged Images and Videos.” arXiv, Oct. 26, 2018. doi: 10.48550/arXiv.1810.11215.
    [78] I. Masi, A. Killekar, R. M. Mascarenhas, S. P. Gurudatt, and W. AbdAlmageed, “Two-branch Recurrent Network for Isolating Deepfakes in Videos.” arXiv, Sep. 03, 2020. doi: 10.48550/arXiv.2008.03412.
    [79] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu, “ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2020, pp. 11531–11539. doi: 10.1109/CVPR42600.2020.01155.
