
Author: Kuan-Yu Lin (林冠宇)
Title: Soft Label Fully Convolutional Network in Application to Microscopic Image Analysis (彈性FCN深度學習方法應用於顯微影像分析)
Advisor: Ching-Wei Wang (王靖維)
Committee members: Tai-Kuang Chao (趙載光), Chih-Chia Cheng (鄭智嘉)
Degree: Master
Department: Graduate Institute of Biomedical Engineering, College of Applied Sciences
Year of publication: 2022
Graduation academic year: 110 (2021–2022)
Language: Chinese
Pages: 65
Keywords (Chinese): human epidermal growth factor receptor 2 (HER2); fluorescence in situ hybridization (FISH); dual in situ hybridization (DISH); metastatic breast cancer; soft label; deep learning
Keywords (English): HER2 overexpression; fluorescence in situ hybridization; brightfield dual in situ hybridization; metastatic breast cancer; soft label; deep learning

Table of Contents
  Abstract (Chinese)
  Abstract
  Acknowledgements
  Table of Contents
  List of Figures
  List of Tables
  Chapter 1  Introduction
    1.1 Motivation
    1.2 Research objectives
    1.3 Contributions
    1.4 Thesis organization
  Chapter 2  Background
    2.1 Sample preparation and staining methods
      2.1.1 Fluorescence in situ hybridization (FISH)
      2.1.2 Dual in situ hybridization (DISH)
      2.1.3 Whole slide images (WSI) of thyroid cancer
    2.2 Literature on related methods
      2.2.1 Soft label techniques
      2.2.2 Label smoothing methods
      2.2.3 Deep learning methods for medical imaging
  Chapter 3  Methods
    3.1 Soft label image processing
    3.2 Soft weight softmax loss function
    3.3 Soft label FCN (SL-FCN) model architecture
    3.4 Other configurations
  Chapter 4  Experimental Design and Results
    4.1 Datasets
    4.2 Experimental equipment
    4.3 Data analysis methods
      4.3.1 Quantitative and image results on DISH breast dataset 1
      4.3.2 Quantitative and image results on DISH breast dataset 2
      4.3.3 Quantitative and image results on the FISH breast dataset
      4.3.4 Quantitative and image results on the thyroid cancer WSI dataset
      4.3.5 Ablation study
  Chapter 5  Conclusion and Future Work
    5.1 Conclusion
    5.2 Future work
  References


Full-text availability:
  Campus network: 2025/09/26
  Off-campus network: 2027/09/26
  National Central Library (Taiwan NDLTD system): 2027/09/26