
Author: Wen-Chu Yang (楊雯筑)
Thesis Title: Efficient Web Video Crawler and Video De-identification (社群媒體影片爬蟲與影片去識別化)
Advisor: Chuan-Kai Yang (楊傳凱)
Committee: Yuan-Cheng Lai (賴源正), Bor-Shen Lin (林伯慎)
Degree: Master
Department: School of Management, Department of Information Management
Thesis Publication Year: 2023
Graduation Academic Year: 111
Language: English
Pages: 82
Keywords (in Chinese): 影片爬蟲、影像處理、物件偵測、人臉提取、去識別化
Keywords (in other languages): Video Crawler, Image Processing, Object Detection, Face Extraction, De-Identification
Reference times: Clicks: 452, Downloads: 0
    With today's pervasive internet and rapid spread of information, sharing knowledge, skills, and daily life on social media is no longer difficult. Among the many ways to share, video has become the choice of many people for conveying all kinds of messages. While enjoying this convenience, however, creators may also face information security problems: videos can be illegally downloaded, modified, or redistributed; the likenesses of well-known figures can be turned into pornographic videos by malicious parties, severely harming their reputation, privacy, and mental well-being; and fraud and fake news keep emerging. Therefore, in addition to collecting video-related information from social media, this thesis takes a privacy-protection perspective and de-identifies the faces in videos, so that uploaders can create and share their own videos with confidence, without having their privacy and other rights infringed.

    To achieve these goals, this thesis crawls public video information from Facebook and TikTok. Before crawling, video URLs are preprocessed to improve crawling performance and to completely avoid crawling the same URL twice. While collecting data, the videos stored in the video database are also processed: faces are extracted and aligned to obtain facial feature data (e.g., facial coordinates, gender, and age), which is stored in a face database. For a video to be de-identified, facial feature data is first obtained through object detection and face extraction, then compared against the existing records in the face database to select the most suitable source face, and the video is finally de-identified by feature fusion. Experiments show that the results produced by our system are not only not recognized as the same person but, compared with videos de-identified directly without this prior comparison, also have an effectively lower probability of being judged as fake.
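    The URL preprocessing described above can be pictured with a short sketch. This is a minimal illustration only, assuming Facebook-style video URLs whose identity lives in the query string; normalize_url and UrlStore are hypothetical names, not code from the thesis, and the thesis's actual normalization rules may differ.

```python
# Minimal sketch of URL preprocessing: normalize each URL to a
# canonical form, then keep a set of seen URLs so no address is
# crawled twice. normalize_url/UrlStore are illustrative names.
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def normalize_url(url: str) -> str:
    """Return a canonical form so equivalent URLs compare equal."""
    parts = urlsplit(url.strip())
    # Lower-case the scheme and host, drop the fragment,
    # sort the query parameters, and trim a trailing slash.
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path.rstrip("/"), query, ""))

class UrlStore:
    """Remembers normalized URLs so each is crawled at most once."""
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def try_add(self, url: str) -> bool:
        """True if the URL is new and should be queued for crawling."""
        canonical = normalize_url(url)
        if canonical in self._seen:
            return False
        self._seen.add(canonical)
        return True

store = UrlStore()
assert store.try_add("https://WWW.Facebook.com/watch/?v=123#t=5")
assert not store.try_add("https://www.facebook.com/watch/?v=123")  # duplicate
```

    Normalizing before the membership test is what makes the duplicate check exact: two spellings of the same video address collapse to one key, so the crawler never fetches the same page twice.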


    The widespread availability of the internet has made it simple for people to share their knowledge, skills, and daily experiences on social media platforms via videos. However, with this convenience comes the risk of information security breaches, such as the illegal downloading, modification, and sharing of videos, as well as the use of celebrities' images in pornographic videos, which can harm their reputation, privacy, and psychological well-being. Furthermore, scams and fake news are abundant on social media. To address these concerns, our paper aims to collect video-related information on social media and de-identify faces in videos to safeguard privacy and other rights, while still allowing creators to share their videos.

    We use web crawlers to collect public video data from Facebook and TikTok. We pre-process URLs to avoid repeated crawling and increase efficiency. During video collection, we extract and align faces to generate facial data such as facial coordinates, gender, and age, which are stored in a face database. To de-identify a video, we perform object detection and face extraction to collect its facial data. We then compare this data with the existing face database records to select the most suitable source face for the feature fusion-based de-identification process. In our experiments, the system produces results that are not recognized as the same person and, compared with videos de-identified without this prior face matching, significantly reduces the chance of the output being flagged as fake.
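    As a rough illustration of the source-face selection step, the sketch below assumes each stored record holds a unit-normalized face embedding (as produced by face recognition models such as ArcFace) plus gender and age, and reads "most suitable" as matching those attributes and then ranking by cosine similarity; FaceRecord and select_source_face are hypothetical names, and the thesis's actual matching criteria may differ.

```python
# Sketch of choosing a source face from the face database for the
# feature-fusion de-identification step. FaceRecord/select_source_face
# are illustrative names, not the thesis's code.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceRecord:
    embedding: np.ndarray  # unit-normalized face embedding
    gender: str            # e.g. "M" or "F"
    age: int

def select_source_face(target: FaceRecord,
                       candidates: list[FaceRecord],
                       max_age_gap: int = 10) -> FaceRecord | None:
    """Among candidates with matching gender and a similar age, pick
    the one whose embedding is most similar to the target (cosine
    similarity reduces to a dot product on unit vectors)."""
    best, best_sim = None, -1.0
    for cand in candidates:
        if cand.gender != target.gender:
            continue
        if abs(cand.age - target.age) > max_age_gap:
            continue
        sim = float(target.embedding @ cand.embedding)
        if sim > best_sim:
            best, best_sim = cand, sim
    return best

# Tiny usage example with random unit vectors standing in for embeddings.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
target = FaceRecord(unit(rng.normal(size=512)), "F", 30)
pool = [FaceRecord(unit(rng.normal(size=512)), g, a)
        for g, a in [("F", 28), ("F", 55), ("M", 30)]]
print(select_source_face(target, pool).age)  # 28: the only gender/age match
```

    Filtering on gender and age before ranking by similarity mirrors the intuition that a source face close in attributes yields a more natural fused result, which is plausibly why matched de-identification is flagged as fake less often than unmatched de-identification in the experiments.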

    Table of Contents:
    Recommendation Letter
    Approval Letter
    Abstract in Chinese
    Abstract in English
    Acknowledgments
    Table of Contents
    List of Tables
    List of Figures
    1 Introduction
      1.1 Background
      1.2 Motivation
      1.3 Purpose
      1.4 Research Outline
    2 Related Work
      2.1 Social Media Crawler
      2.2 Object Detection
      2.3 Face Recognition
      2.4 Image De-identification
    3 Proposed Method
      3.1 System Overview
      3.2 URL Preprocessing
        3.2.1 URL Normalization
        3.2.2 URL Uniqueness
      3.3 Facebook Video Crawler
        3.3.1 Keyword Search
        3.3.2 Application Programming Interface
      3.4 TikTok Video Crawler
      3.5 Face Collection
        3.5.1 Frame Comparison
        3.5.2 Face Extraction and Alignment
        3.5.3 Face Filtering
        3.5.4 Face Registration and DB Storage
      3.6 Video De-identification
        3.6.1 Object Detection
        3.6.2 Face Recognition
        3.6.3 Data Comparison
        3.6.4 Face De-identification
    4 Experiments
      4.1 System Environment
      4.2 Database
      4.3 Experimental Results and Evaluation
        4.3.1 Experiment 1: URL preprocessing
        4.3.2 Experiment 2: Video crawler execution time
        4.3.3 Experiment 3: Multi-threading
        4.3.4 Experiment 4: Face similarity
        4.3.5 Experiment 5: Gender-based de-identification
        4.3.6 Experiment 6: Age-based de-identification
        4.3.7 Experiment 7: Method-based de-identification
        4.3.8 Experiment 8: De-identification of multiple faces
    5 Conclusion & Future Work
    References


    Full text public date 2025/07/21 (Intranet public)
    Full text public date 2025/07/21 (Internet public)
    Full text public date 2025/07/21 (National library)