
Author: Hung-Tse Chan (詹宏澤)
Title: A Feedback Learning Method with Uncertainty Estimation for Clinical Acne Grading (一個不確定性估計的回饋式學習之臨床痤瘡分級方法)
Advisor: Yung-Yao Chen (陳永耀)
Committee members: Chih-Hsien Hsia (夏至賢), Chin-Hsien Wu (吳晋賢), Ching-Shun Lin (林敬舜)
Degree: Master's
Department: College of Electrical Engineering and Computer Science, Department of Electronic and Computer Engineering
Year of publication: 2023
Academic year of graduation: 111 (2022-2023)
Language: Chinese
Number of pages: 133
Keywords: acne, medical clinical image, deep learning, semi-supervised learning, data scarcity, label consistency, uncertainty estimation
Abstract: Accurate severity grading of skin diseases is crucial for precision medicine. Acne is a common skin condition, and manual diagnosis is usually based on primary and secondary lesion manifestations. However, acne lesions are highly similar to one another, while physicians differ in clinical experience and mental state, so an objective acne grading method is needed to assist diagnosis. To address these issues, this study proposes a feedback learning method with uncertainty estimation for clinical acne grading. For medical images with scarce annotations, the method reduces reliance on labeled data; for the variability among expert annotations, uncertainty estimation effectively mitigates label inconsistency; and an adaptive margin loss function handles high inter-class similarity and high intra-class variability. In addition to evaluating the proposed method on publicly available datasets, this study builds an acne image dataset named ACNE-ECK, annotated by multiple expert physicians and processed with a comprehensive de-identification workflow to protect patient privacy. Experimental results show that the proposed method improves accuracy by 29.16% over the state-of-the-art convolutional neural network (CNN) method.
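The abstract names a teacher-student feedback loop, uncertainty estimation for inconsistent expert labels, and an adaptive margin loss, but the full text is embargoed. The sketch below is a minimal, hypothetical PyTorch illustration of how such pieces are commonly combined; every class name, hyper-parameter, and formula (per-class margins subtracted from the target logit, Monte Carlo dropout entropy as the uncertainty score) is an assumption for illustration, not the thesis's actual implementation.

```python
# Hypothetical sketch of an adaptive margin loss and MC-dropout uncertainty
# weighting for pseudo-labels in a teacher-student loop. Not the thesis's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveMarginLoss(nn.Module):
    """Cross-entropy with a learnable per-class margin subtracted from the
    target logit, pushing each class's samples further above the others when
    classes look alike (high inter-class similarity) but vary internally."""

    def __init__(self, num_classes: int, base_margin: float = 0.3):
        super().__init__()
        # One learnable margin per acne severity grade (assumed initial value).
        self.margins = nn.Parameter(torch.full((num_classes,), base_margin))

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Subtract the margin only from the logit of the ground-truth class.
        one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
        adjusted = logits - one_hot * self.margins.clamp(min=0.0)
        return F.cross_entropy(adjusted, targets)


@torch.no_grad()
def mc_dropout_uncertainty(model: nn.Module, images: torch.Tensor, passes: int = 8):
    """Keep dropout active at inference and average several stochastic passes.
    Returns the mean softmax prediction and its per-sample entropy; higher
    entropy means the teacher is less certain about that pseudo-label."""
    model.train()  # enables dropout (note: also puts batch-norm in train mode)
    probs = torch.stack([F.softmax(model(images), dim=1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs, entropy


def pseudo_label_weights(entropy: torch.Tensor, max_entropy: float) -> torch.Tensor:
    """Map uncertainty to a weight in [0, 1]: confident samples count fully,
    highly uncertain (possibly mislabeled) samples are suppressed."""
    return (1.0 - entropy / max_entropy).clamp(0.0, 1.0)
```

In one feedback round under these assumptions, the teacher would score unlabeled images with mc_dropout_uncertainty, convert the entropies to per-sample weights, and the student would be trained with AdaptiveMarginLoss scaled by those weights; whether the thesis uses exactly this formulation cannot be confirmed from the abstract alone.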

Table of contents:
Advisor's Recommendation
Degree Examination Committee Approval
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
  1. Research Motivation
  2. Research Problems
  3. Literature Review and Summary
  4. Research Contributions
Chapter 2: Related Work
  1. Classification Tasks in Clinical Disease Diagnosis
  2. Acne Definition and Severity Grading Standards
  3. Semi-supervised Learning (SSL)
    (1) Self-training in Semi-supervised Learning
    (2) Consistency Regularization in SSL
    (3) Hybrid Methods in Semi-supervised Learning
  4. Class Imbalance in Semi-supervised Learning
    (1) Pseudo-labeling for Class Imbalance
    (2) Balanced Classifier for Class Imbalance
  5. Data Augmentation in Semi-supervised Learning
  6. Uncertainty Estimation in Machine Learning and the Medical Domain
Chapter 3: Methodology
  1. Overview
  2. Important Information Enhancement (IIE)
  3. Teacher-Student Training Framework
  4. Handling High Inter-class Similarity and High Intra-class Variability
  5. Uncertainty Estimation for Label Noise
  6. A Lightweight Training Framework
  7. Data Augmentation for Training Improvement in the Acne Grading Task
    (1) Boosting the Teacher-Student Framework
    (2) Boosting Knowledge Distillation
Chapter 4: Experimental Results
  1. Datasets
    (1) Data Collection
    (2) De-identification
    (3) Data Annotation and Acne Grading Standards
  2. Evaluation Metrics
  3. Implementation Details
    (1) Basic Information
    (2) Hyper-parameters
  4. Main Results
  5. Ablation Studies
Chapter 5: Conclusion
Chapter 6: Future Work
  (1) Cross-domain Knowledge
  (2) Medical Image Datasets
  (3) Clinical Applicability
  (4) Disease Prevalence and Severity
Chapter 7: References


Full-text release date: 2025/08/17 (campus network, off-campus network, and the National Central Library's Taiwan NDLTD system)