
Student: Yi-Ling Lin (林怡伶)
Thesis Title: A Study on the Privacy Protection and Beautification of Cornea Images (角膜影像隱私保護及美化之研究)
Advisor: Chuan-Kai Yang (楊傳凱)
Committee Members: Bor-Shen Lin (林伯慎), Pei-Li Sun (孫沛立)
Degree: Master
Department: Department of Information Management, School of Management
Year of Publication: 2021
Graduation Academic Year: 109 (ROC calendar)
Language: English
Pages: 55
Keywords: corneal reflection, iris location, image super-resolution, object recognition, image privacy
Hits: 258; Downloads: 0
  • Abstract (translated from Chinese): With the advancement of technology, social media adoption has grown year by year, with young people making up the majority of users. Many people like to record their daily lives outdoors and share the photos on social media, unaware that doing so may expose them to risk. Selfies in particular carry a high probability of revealing the people, places, and objects around the photographer. To understand the privacy risk posed by selfies, this thesis designs an automated framework for conveniently extracting pupil information. We first apply the Haar Cascade algorithm to obtain the eye region, then run YOLO object detection on that region to locate the pupil. To make the information inside the pupil easier to recognize, we preprocess the pupil image with geometric correction and image super-resolution, and then feed it to the Google Vision API for object recognition. Experimental results show that selfies can indeed expose their owners to risk. For such risky selfies, blurring the image can eliminate the risk but may degrade the image's visual quality. We therefore propose an image-beautification method that removes the privacy region from the photo, eliminating the privacy leak without noticeably affecting the image.
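The thesis's beautification method for removing the privacy region is not detailed in this record. As a hedged illustration of region de-identification in general (not the method proposed in the thesis), the sketch below pixelates a chosen region with NumPy; the region box and block size are assumptions:

```python
import numpy as np

def pixelate_region(image, box, block=8):
    """Return a copy of `image` with the (x, y, w, h) region replaced by a
    block mosaic, hiding fine detail (e.g. a scene reflected in the cornea)
    while keeping the rest of the photo intact."""
    x, y, w, h = box
    out = image.copy()
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            patch = out[by:min(by + block, y + h), bx:min(bx + block, x + w)]
            # Replace every pixel in the block with the block's mean color.
            patch[...] = patch.reshape(-1, patch.shape[-1]).mean(axis=0).astype(out.dtype)
    return out

# Toy example: a horizontal gradient image; pixelating flattens the chosen
# region block by block while leaving everything outside it unchanged.
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))[..., None].repeat(3, axis=2)
anon = pixelate_region(img, (16, 16, 32, 32), block=8)
```

A Gaussian blur over the same region would trade blockiness for smoothness; either way, only the privacy region is altered, which is the property the thesis's de-identification step aims for.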


    With the advancement of technology, social media has grown more popular year by year, especially among young people, who like to upload selfies to the Internet, where anyone from anywhere can access them. Such behavior may expose private information without the owner knowing it. When a selfie is exceptionally clear and sharp, there is a high probability that it reveals the person's whereabouts and associated information. In this work, we design a framework that automatically extracts cornea information. First, we use the Haar Cascade algorithm to capture the eye area and a YOLO object detector to locate the cornea. Next, we calibrate the image to make identification more accurate, and we use the Google Vision API for recognition. Experimental results show that private information can indeed be recovered from a photo. The risk can be diminished by lowering the picture quality, but this blurs the photo. We therefore propose a novel method that removes private information without noticeably affecting the photo's appearance.

    Contents
    Recommendation Letter i
    Approval Letter ii
    Abstract in Chinese iii
    Abstract in English iv
    Acknowledgements v
    Contents vii
    List of Figures x
    List of Tables xiii
    1 Introduction 1
      1.1 Motivation and Purpose 1
      1.2 Outline 4
    2 Related Work 5
      2.1 Corneal Reflection 5
      2.2 Iris Localization 6
      2.3 Object Recognition 6
      2.4 Image Super-Resolution 7
    3 Proposed System 8
      3.1 System Overview 8
      3.2 Cornea Extraction 9
        3.2.1 Eye Area Capture 9
        3.2.2 Cornea Localization 11
      3.3 Image Correction & Restoration 12
        3.3.1 Fisheye Correction 12
        3.3.2 Image Super-Resolution 13
        3.3.3 Image Denoising 14
      3.4 Risk Analysis 15
        3.4.1 Distance 15
        3.4.2 Lightness 16
        3.4.3 Blurriness 17
      3.5 Cornea De-identification 18
    4 Experimental Result 22
      4.1 Experiments 22
      4.2 Dataset 23
        4.2.1 Image Collection 23
        4.2.2 Image Processing 24
      4.3 Risk Analysis 25
        4.3.1 Light Value 25
        4.3.2 Distance 28
        4.3.3 Blurriness 29
      4.4 Experimental Evaluation 30
        4.4.1 Comparison of Image Recognition 30
        4.4.2 Comparison of Iris Localization 31
        4.4.3 Ablation Study 34
        4.4.4 User Study 36
    5 Conclusions 38
    References 39


    Full-text release date: 2026/01/26 (campus network)
    Full-text release date: 2026/01/26 (off-campus network)
    Full-text release date: 2026/01/26 (National Central Library: Taiwan NDLTD system)