
Graduate Student: Li-Kai Pan (潘立凱)
Thesis Title: Application of Convolutional Neural Network to Medical Image Segmentation (卷積神經網路於醫學影像分割之應用)
Advisor: Sendren Sheng-Dong Xu (徐勝均)
Committee Members: Chin-Sheng Chen (陳金聖), Sendren Sheng-Dong Xu (徐勝均), Kevin Cheng-Hao Ko (柯正浩)
Degree: Master
Department: Graduate Institute of Automation and Control, College of Engineering
Publication Year: 2021
Graduation Academic Year: 109 (2020-2021)
Language: Chinese
Pages: 83
Keywords: Medical Image Segmentation, Convolutional Neural Network, Context Feature Information, Global Feature Spatial Attention Guidance Module, Scale-Aware Inception Fusion Module, Deep Supervision
Abstract:

With the rise of Artificial Intelligence (AI) and Deep Learning (DL), research applying Convolutional Neural Networks (CNNs) to medical image segmentation has attracted increasing attention. However, several problems remain: the position, shape, and scale of the organs or lesions to be segmented vary widely, and their boundaries are often blurred. In other words, CNN-based segmentation methods still extract contextual feature information insufficiently, and this remains to be improved. In this thesis, we propose two network modules on top of a CNN architecture. First, we introduce Global Feature Spatial Attention Guidance (GFSAG) Modules between the encoder and the decoder; through skip connections, they provide the decoder with global contextual feature information at different stages. Furthermore, we propose a Scale-Aware Inception Fusion (SIF) Module to fuse contextual feature information across different scales. In addition, we introduce a Deep Supervision mechanism on the decoder side, applying auxiliary loss functions during upsampling to improve segmentation performance. For the experiments, we adopt two medical image segmentation datasets: (1) the Segmentation of Thoracic Organs at Risk in CT (Computed Tomography) Images dataset (SegTHOR), and (2) the Liver Tumor Segmentation dataset (LiTS). The experimental results show that the proposed method outperforms other segmentation methods.
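The record does not include the thesis's code, but the mechanisms named in the abstract can be illustrated. The following is a minimal PyTorch sketch under stated assumptions: a GFSAGBlock gates one skip connection with a spatial attention map derived from the deepest (global) encoder features, an SIFBlock fuses parallel branches with different receptive fields in an Inception style, and deep_supervision_loss adds weighted auxiliary losses from intermediate decoder outputs. All class names, channel arithmetic, and the exact attention formulation are illustrative assumptions, not the thesis's published design.

```python
# Illustrative sketch only: GFSAG/SIF internals are assumptions inferred from
# the abstract, not the thesis's published design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GFSAGBlock(nn.Module):
    """Hypothetical spatial attention guidance for one skip connection:
    global context from the deepest encoder stage is resized to the skip's
    resolution and turned into a one-channel sigmoid gate."""

    def __init__(self, skip_ch: int, global_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(global_ch, skip_ch, kernel_size=1)
        self.gate = nn.Conv2d(skip_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        g = F.interpolate(global_feat, size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.gate(F.relu(self.proj(g) + skip)))
        return skip * attn  # attention-gated skip features for the decoder


class SIFBlock(nn.Module):
    """Hypothetical scale-aware Inception-style fusion: parallel convolutions
    with different kernel sizes, concatenated and fused by a 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))


def deep_supervision_loss(main_logits, aux_logits, target, criterion, aux_weight=0.4):
    """Main loss plus weighted auxiliary losses; auxiliary decoder outputs are
    upsampled to the target resolution before being scored."""
    loss = criterion(main_logits, target)
    for aux in aux_logits:
        aux = F.interpolate(aux, size=target.shape[-2:],
                            mode="bilinear", align_corners=False)
        loss = loss + aux_weight * criterion(aux, target)
    return loss
```

In a full U-Net-style network, each encoder skip would pass through its own GFSAGBlock (all guided by the bottleneck features), the SIFBlock would fuse multi-scale context at the bottleneck, and each decoder stage would feed a 1x1 auxiliary head into deep_supervision_loss; the 0.4 auxiliary weight is a common convention, not a value taken from the thesis.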

Table of Contents:
Acknowledgments I
Abstract (Chinese) II
Abstract (English) III
Table of Contents IV
List of Figures VII
List of Tables IX
Chapter 1 Introduction 1
  1.1 Research Background and Motivation 1
  1.2 Research Objectives 2
  1.3 Thesis Organization 2
Chapter 2 Related Work 4
  2.1 Basic Architecture of Convolutional Neural Networks 4
    2.1.1 Convolutional Layer 5
    2.1.2 Activation Function 6
    2.1.3 Pooling Layer 7
    2.1.4 Fully Connected Layer 8
  2.2 Derived Convolutional Neural Network Architectures 9
    2.2.1 Inception 9
    2.2.2 ResNet 10
    2.2.3 DenseNet 12
    2.2.4 SE-Net 14
  2.3 Convolutional Neural Network Architectures for Medical Image Segmentation 16
    2.3.1 U-Net 16
    2.3.2 V-Net 17
  2.4 Literature Review of Convolutional Neural Networks 18
    2.4.1 Literature Review of Medical Image Segmentation 18
    2.4.2 Literature Review of Contextual Feature Information 24
Chapter 3 Methodology 25
  3.1 The Proposed Method 25
    3.1.1 Feature Encoder and Feature Decoder 26
    3.1.2 Global Feature Spatial Attention Guidance Module 27
    3.1.3 Scale-Aware Inception Fusion Module 30
    3.1.4 Deep Supervision Mechanism 34
  3.2 Loss Functions 34
    3.2.1 Cross-Entropy Loss Function 34
    3.2.2 Dice Loss Function 35
    3.2.3 Tversky Loss Function 35
    3.2.4 Loss Function Used in This Study 36
Chapter 4 Experimental Design and Results 37
  4.1 Experimental Environment 37
  4.2 Medical Image Datasets 38
    4.2.1 SegTHOR Dataset 38
    4.2.2 LiTS Dataset 39
  4.3 Data Preprocessing 40
  4.4 Evaluation Metrics 41
    4.4.1 Dice Similarity Coefficient 41
    4.4.2 Sensitivity 41
    4.4.3 Hausdorff Distance 42
  4.5 Comparison of Experimental Results 43
    4.5.1 Ablation Study 43
    4.5.2 Comparison Results of Different Methods 58
    4.5.3 Parameter Counts and Training Times of Different Methods 70
Chapter 5 Conclusions and Future Work 71
  5.1 Conclusions 71
  5.2 Future Work 72
References 73
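The loss functions in Sections 3.2.1-3.2.3 and the metrics in Section 4.4 follow standard definitions from the segmentation literature; the record does not give the thesis's exact variants (Section 3.2.4), but the commonly used forms, with $p_i$ the predicted foreground probability and $g_i \in \{0, 1\}$ the ground truth at voxel $i$, are:

\[
\mathrm{DSC} = \frac{2\sum_i p_i g_i}{\sum_i p_i + \sum_i g_i},
\qquad
\mathcal{L}_{\mathrm{Dice}} = 1 - \mathrm{DSC},
\]
\[
\mathcal{L}_{\mathrm{Tversky}} = 1 - \frac{\sum_i p_i g_i}{\sum_i p_i g_i + \alpha \sum_i p_i (1 - g_i) + \beta \sum_i (1 - p_i) g_i},
\]
\[
\mathrm{Sensitivity} = \frac{TP}{TP + FN},
\qquad
H(A, B) = \max\left\{ \sup_{a \in A} \inf_{b \in B} d(a, b),\; \sup_{b \in B} \inf_{a \in A} d(a, b) \right\},
\]

where the Tversky loss reduces to the Dice loss at $\alpha = \beta = 0.5$, and $H(A, B)$ is the Hausdorff distance between the predicted and ground-truth boundary point sets $A$ and $B$.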


Full-Text Availability: campus network from 2023/10/21; not authorized for release on the off-campus network; not authorized for release in the National Central Library (Taiwan NDLTD system).