
Graduate student: 楊采玲 (Tsai-Ling Yang)
Thesis title: 以深度學習法分析大腦磁振影像:腦腫瘤之自動分區
(Automatic segmentation of brain tumor from MR images using deep learning)
Advisor: 黃騰毅 (Teng-Yi Huang)
Oral defense committee: 林益如 (Yi-Ru Lin), 蔡尚岳 (Shang-Yueh Tsai), 劉益瑞 (Yi-Jui Liu), 王福年 (Fu-Nien Wang)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of publication: 2018
Graduation academic year: 106
Language: Chinese
Number of pages: 44
Chinese keywords: 深度學習、腦神經膠質瘤、腦磁振造影
Foreign keywords: deep learning, glioma, brain magnetic resonance imaging
    Gliomas are tumors that arise from the supportive tissue of the brain; the average survival time is only two to three years, and treatment depends on the location and size of the tumor. For this reason, BraTS 2017 held a competition in Quebec aimed at automatically segmenting the tumor in pre-operative MR images of glioma patients and estimating each patient's likely survival time, assisting treatment through technology. Deep learning has advanced rapidly in image recognition in recent years; our main method was a neural network composed of 2D convolutions based on the SegNet architecture, proposed by the University of Cambridge in 2016. A key feature of our approach is that the 3D input data are not used directly: the 3D volume is converted into 2D slices for prediction, and the 2D outputs are then reassembled into a 3D volume. Evaluation covers three sub-regions: enhancing tumor (ET), tumor core (TC), and whole tumor (WT). In the final competition our average Dice coefficients were Dice_ET = 0.73, Dice_WT = 0.87, and Dice_TC = 0.76. We were honored to stand out among more than 150 teams and won third place in the tumor segmentation task; among the winning teams, ours was the only one that used 2D images as training data, which require less memory than 3D data. We attempted survival-time prediction only after the competition; our best result so far is a mean error of 290.94 days, but since this task was not part of our official entry, we cannot judge how the result compares with others.
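The slice-and-reassemble pipeline described above can be sketched in NumPy. Here `predict_slice` is a hypothetical placeholder standing in for the trained 2D SegNet (a simple threshold is used so the sketch runs); the volume shape follows the 240×240×155-voxel BraTS images, stored depth-first so axial slices are the leading axis.

```python
import numpy as np

def predict_slice(slice_2d):
    # Placeholder for the trained 2D network: a bare intensity
    # threshold stands in for SegNet so the sketch is runnable.
    return (slice_2d > 0.5).astype(np.uint8)

def segment_volume(volume):
    """Split a 3D volume into axial 2D slices, predict a label map
    for each slice, and stack the 2D outputs back into a 3D volume."""
    label_slices = [predict_slice(volume[z]) for z in range(volume.shape[0])]
    return np.stack(label_slices, axis=0)

volume = np.random.rand(155, 240, 240)  # one BraTS-sized MR volume
labels = segment_volume(volume)         # 3D label volume, same shape
```

Because each slice is processed independently, only one 2D image needs to be held in GPU memory at a time, which is the memory advantage over 3D-convolution approaches mentioned above.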


    Glioma is a common type of brain tumor that arises in the supportive tissue of the brain containing the glial cells. The median survival for adults is about two to three years, and treatment depends on the location and size of the tumor. In 2017 we participated in the BraTS 2017 competition in Quebec, whose purpose was the automatic segmentation of gliomas in pre-operative MR brain images and the prediction of patient survival, in the hope that such technology can assist diagnosis and treatment. Our principal method was a convolutional neural network based on SegNet, proposed by the University of Cambridge in 2016. One characteristic of our method is that we converted the 3D volume into 2D slices as network input during training, and merged all 2D outputs back into a 3D volume as the final result. Three tumor sub-regions serve as the evaluation targets: the enhancing tumor (ET), the tumor core (TC), and the whole tumor (WT). Using SegNet we performed fully automatic segmentation and obtained average Dice coefficients of Dice_ET = 0.73, Dice_WT = 0.87, and Dice_TC = 0.76. Among more than 150 teams, we were honored to win third prize in the segmentation task of the BraTS 2017 competition. We were the only winning team that used 2D images for prediction, and our method was simpler than those of the other teams. After the competition we also attempted the survival-prediction task; our best result so far is a root-mean-square error of 290.94 days, but since we did not officially enter this task, we cannot judge how the result compares with others.
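The Dice coefficient reported above measures the overlap between a predicted mask and the ground truth, Dice = 2|P ∩ T| / (|P| + |T|), computed separately for each sub-region (ET, TC, WT). A minimal NumPy version (a sketch, not the official BraTS evaluation code) is:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks;
    eps guards against division by zero for empty masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

p = np.array([[1, 1, 0], [0, 1, 0]])
t = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(p, t), 3))  # 2*2 / (3+3) -> 0.667
```

A perfect prediction gives Dice = 1, no overlap gives 0; the same function applies unchanged to full 3D label volumes, since NumPy sums over all axes.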

    Chinese Abstract
    Abstract
    Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Glioma
      1.2  Semantic Segmentation
      1.3  Convolutional Neural Networks
        1.3.1  Convolutional Layer
        1.3.2  Activation Function
        1.3.3  Pooling and Unpooling Layers
        1.3.4  Fully Connected Layer
        1.3.5  Backpropagation
        1.3.6  Cost Function
    Chapter 2  Methods and Materials
      2.1  Data Source
      2.2  Network Architecture
        2.2.1  Task 1: Network Architecture for Brain Tumor Segmentation
        2.2.2  Task 2: Network Architecture for Survival Prediction
      2.3  Linear Regression
      2.4  Prediction Strategy for Brain Tumor Segmentation
      2.5  Prediction Strategy for Survival Prediction
      2.6  Evaluation Methods
    Chapter 3  Experimental Results
      3.1  Task 1: Brain Tumor Segmentation
      3.2  Task 2: Survival Prediction
    Chapter 4  Discussion and Conclusions
    References

    1. Ohgaki, H. and P. Kleihues, Population-based studies on incidence, survival rates, and genetic alterations in astrocytic and oligodendroglial gliomas. Journal of Neuropathology and Experimental Neurology, 2005. 64(6): p. 479-489.

    2. Havaei, M., et al., Brain tumor segmentation with Deep Neural Networks. Medical Image Analysis, 2017. 35: p. 18-31.

    3. Upadhyay, N. and A.D. Waldman, Conventional MRI evaluation of gliomas. British Journal of Radiology, 2011. 84: p. S107-S111.

    4. Menze, B.H., et al., The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Transactions on Medical Imaging, 2015. 34(10): p. 1993-2024.

    5. Shotton, J., et al., Real-Time Human Pose Recognition in Parts from Single Depth Images. Communications of the ACM, 2013. 56(1): p. 116-124.

    6. Ciresan, D., et al. Deep neural networks segment neuronal membranes in electron microscopy images. in Advances in neural information processing systems. 2012.

    7. Shelhamer, E., J. Long, and T. Darrell, Fully Convolutional Networks for Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. 39(4): p. 640-651.

    8. Krizhevsky, A., I. Sutskever, and G.E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, 2017. 60(6): p. 84-90.

    9. Simonyan, K. and A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

    10. Szegedy, C., et al. Going deeper with convolutions. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015.

    11. He, K., et al. Deep residual learning for image recognition. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.

    12. Badrinarayanan, V., A. Kendall, and R. Cipolla, SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. 39(12): p. 2481-2495.

    13. Ronneberger, O., P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. in International Conference on Medical image computing and computer-assisted intervention. 2015. Springer.

    14. Kamnitsas, K., et al., Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation. arXiv preprint arXiv:1711.01468, 2017.

    15. Isensee, F., et al., Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge. 2017 International MICCAI BraTS Challenge, 2017.
