
Author: 劉佳豪 (Jia-Hao Liu)
Title: Two-Stage Cascaded CNN Model for 3D Mitochondria EM Segmentation (兩階段串聯式 CNN 模型用於三維電子顯微鏡下粒線體切割技術)
Advisors: 花凱龍 (Kai-Lung Hua), 郭景明 (Jing-Ming Guo)
Oral examination committee: 康立威 (Li-Wei Kang), 丁建均 (Jian-Jiun Ding), 高文忠 (Wen-Chung Kao)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering
Year of publication: 2021
Academic year of graduation: 109 (ROC calendar)
Language: Chinese
Number of pages: 113
Keywords (Chinese): 三維粒線體切割 (3D mitochondria segmentation), 電鏡影像分析 (EM image analysis), 3D CNN
Keywords (English): Mitochondria Segmentation, Image Analysis in Electron Microscopy, 3D CNN
    Abstract (Chinese, translated): Mitochondria are essential organelles that supply the cell with energy. Many medical studies have found that changes in the number, structure, and morphology of mitochondria are closely related to diseases such as cancer, Alzheimer's disease, and Parkinson's disease. As electron microscopy (EM) and connectomics imaging continue to advance, mitochondrial morphology can be observed more clearly in EM images, helping researchers and clinicians reach correct analyses and diagnoses. However, manually delineating mitochondria in 3D EM volumes is not a simple task. This thesis proposes a two-stage cascaded CNN architecture for EM images that reconstructs 3D mitochondria segmentation results. Existing 3D mitochondria segmentation methods fall into two categories: top-down and bottom-up. Top-down methods consider the overall layout of each 2D slice and detect objects before segmenting them, so the segmented mitochondria have clean, well-formed boundaries; their drawback is the lack of connectivity information between consecutive slices, which causes large missed regions of mitochondria on some slices, or false detections of subcellular structures that resemble mitochondria. Bottom-up methods account for the 3D connectivity across consecutive slices, so large missed regions and false detections of mitochondria-like structures are rare; their drawback is small false-positive regions that produce speckled results, or mitochondria that are segmented incompletely. Although small speckled regions barely affect the segmentation metrics, they cause serious errors when the number of mitochondria is analyzed.
    To overcome the weaknesses of both approaches while combining their strengths, the proposed cascade first uses an object detection model to localize mitochondria on each 2D slice, and these detections serve as segmentation cues for the second stage. The second stage concatenates the original images with the multi-slice fused detection results of the first stage and trains a 3D Res-UNet, so that the segmentation model learns the connectivity between slices and avoids large missed or false detections. Experimental results show that the proposed two-stage cascade effectively lets the top-down and bottom-up approaches compensate for each other's weaknesses: it not only improves overall segmentation performance but, more importantly, alleviates the imprecise clinical mitochondria counts caused by speckled segmentation, achieving a win-win outcome.
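    The abstract does not spell out how the per-slice detections are fused into segmentation cues, so the sketch below is only a rough illustration of the idea, not the thesis's implementation: it assumes per-slice bounding boxes with confidence scores, rasterizes them onto their slices, and averages each slice with its neighbours so that evidence agrees across adjacent slices. The function name, box format, and window parameter are hypothetical.

```python
import numpy as np

def fuse_detections_to_cue_volume(detections_per_slice, volume_shape, window=1):
    """Rasterize per-slice bounding boxes into a 3D cue volume, then average
    each slice with its neighbours (a simple stand-in for multi-slice fusion)."""
    z_dim, height, width = volume_shape
    cue = np.zeros(volume_shape, dtype=np.float32)

    # Paint every detected box onto its slice, keeping the highest score per voxel.
    for z, boxes in enumerate(detections_per_slice):
        for x1, y1, x2, y2, score in boxes:
            x1, y1 = max(0, int(x1)), max(0, int(y1))
            x2, y2 = min(width, int(x2)), min(height, int(y2))
            cue[z, y1:y2, x1:x2] = np.maximum(cue[z, y1:y2, x1:x2], score)

    # Smooth along the z-axis so cues on adjacent slices reinforce each other.
    fused = np.empty_like(cue)
    for z in range(z_dim):
        lo, hi = max(0, z - window), min(z_dim, z + window + 1)
        fused[z] = cue[lo:hi].mean(axis=0)
    return fused

# Toy usage: a 4-slice volume with one mitochondrion detected on slices 1 and 2.
detections = [[], [(10, 10, 40, 40, 0.9)], [(12, 11, 42, 41, 0.8)], []]
cue_volume = fuse_detections_to_cue_volume(detections, (4, 64, 64))
print(cue_volume.shape)  # (4, 64, 64)
```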


    Abstract (English): Mitochondria are the organelles that generate energy for cells. Many studies have suggested that mitochondrial dysfunction or impairment may be related to cancer, Alzheimer's disease, and Parkinson's disease. Detailed morphological characterization and 3D reconstruction of mitochondria are therefore in high demand for research analysis and clinical diagnosis. Nevertheless, manual segmentation of mitochondria over 3D electron microscopy volumes is not a trivial task. In this study, a two-stage cascaded CNN architecture is proposed for automated 3D mitochondria segmentation that combines the merits of top-down and bottom-up approaches. Top-down approaches segment with knowledge of object localization, so object contours can be delineated more precisely; however, stacking their 2D results is inadequate for decent 3D segmentation because connectivity among frames is ignored. Bottom-up approaches, in contrast, group coherent voxels and take 3D connectivity into account, avoiding the drawbacks of 2D top-down methods; yet many small regions that share pixel properties with mitochondria easily become false positives because localization information is insufficient. In the proposed method, mitochondria are detected with multi-slice fusion in the first stage, and the detections serve as segmentation cues.
    In the second stage, a 3D CNN performs segmentation that learns both pixel properties and 3D connectivity under the supervision of the cues from the detection stage. Experimental results show that the proposed structure alleviates the problems of both top-down and bottom-up approaches: it not only achieves better segmentation performance but also significantly facilitates clinical analysis.
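    To make the two-stage idea concrete, the following minimal PyTorch sketch shows how the second stage could consume the cues: the raw EM volume and the fused detection-cue volume are stacked as two input channels of a small 3D network. TinyRes3D and CascadedSegmenter are hypothetical placeholders; they do not reproduce the thesis's 3D Res-UNet architecture or training setup.

```python
import torch
import torch.nn as nn

class TinyRes3D(nn.Module):
    """A single residual 3D block, standing in for the much deeper 3D Res-UNet."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class CascadedSegmenter(nn.Module):
    """Second-stage sketch: the EM volume and the detection-cue volume enter
    as two channels, and the network predicts a voxel-wise mitochondria logit."""
    def __init__(self, features=16):
        super().__init__()
        self.stem = nn.Conv3d(2, features, kernel_size=3, padding=1)  # image + cue channels
        self.body = TinyRes3D(features)
        self.head = nn.Conv3d(features, 1, kernel_size=1)

    def forward(self, em_volume, cue_volume):
        x = torch.cat([em_volume, cue_volume], dim=1)  # (N, 2, D, H, W)
        return self.head(self.body(self.stem(x)))

# Toy forward pass on a small random sub-volume.
em = torch.rand(1, 1, 8, 64, 64)    # (batch, channel, depth, height, width)
cue = torch.rand(1, 1, 8, 64, 64)   # e.g. the output of the stage-one fusion step
logits = CascadedSegmenter()(em, cue)
print(logits.shape)  # torch.Size([1, 1, 8, 64, 64])
```

    In this reading, the cue channel acts as a spatial prior: voxels inside detected regions are pushed toward the mitochondria class, while the 3D convolutions propagate that evidence across neighbouring slices.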

    Table of Contents:
    Abstract (Chinese); Abstract (English); Acknowledgements; Table of Contents; List of Figures; List of Tables
    Chapter 1: Introduction
      1.1 Research Background and Motivation
      1.2 Thesis Organization
    Chapter 2: Literature Review of CNN-Based Object Detection and Segmentation
      2.1 Neural Networks
        2.1.1 Forward Propagation
        2.1.2 Backward Propagation
        2.1.3 Convolutional Neural Networks
        2.1.4 Development of CNN Architectures
      2.2 Related Work on Object Detection
        2.2.1 One-Stage Object Detection
        2.2.2 Two-Stage Object Detection
      2.3 Related Work on Segmentation
        2.3.1 Fully Convolutional Networks (FCN)
        2.3.2 U-Net
        2.3.3 Mask R-CNN
        2.3.4 3D Res-UNet
      2.4 Algorithms for Mitochondria Segmentation
        2.4.1 Introduction to Electron Microscopy
        2.4.2 Top-down Approaches
        2.4.3 Bottom-up Approaches
        2.4.4 Performance Comparison of 2D and 3D Methods
    Chapter 3: Two-Stage Cascaded Mitochondria Segmentation
      3.1 Datasets
      3.2 2D Segmentation Method
      3.3 3D Segmentation Method
      3.4 Training Strategies Combining the Strengths of 2D and 3D
        3.4.1 Strategy 1
        3.4.2 Strategy 2
        3.4.3 Strategy 3
    Chapter 4: Experimental Results
      4.1 Training Environment
      4.2 Database and Data Preprocessing
      4.3 Evaluation Metrics
      4.4 Analysis and Comparison of Results
        4.4.1 Analysis of the Proposed Method's Results
        4.4.2 Ablation Analysis
    Chapter 5: Conclusion and Future Work
    References

    Full text available from 2024/09/22 (campus network)
    Full text available from 2026/09/22 (off-campus network)
    Full text available from 2026/09/22 (National Central Library: Taiwan NDLTD system)