
Author: Shi-Hong Liu (劉士鋐)
Title: Automated Classification of Duchenne Muscular Dystrophy from Ultrasound Images by Using Deep Convolutional Neural Network (基於深度卷積神經網路於裘馨氏肌肉萎縮症之超音波影像自動檢測)
Advisor: Ai-Ho Liao (廖愛禾)
Committee members: Po-Hsiang Tsui (崔博翔), Ho-Chiao Chuang (莊賀喬), Che-Chou Shen (沈哲州)
Degree: Master
Department: 應用科技學院 - Graduate Institute of Biomedical Engineering
Year of publication: 2020
Graduation academic year: 108
Language: Chinese
Number of pages: 80
Keywords: Duchenne muscular dystrophy (DMD), Convolutional neural networks, Deep learning, Ultrasound image, Nakagami image, Gastrocnemius
    Duchenne muscular dystrophy (DMD) is a rare disorder, and the small number of cases makes automatic image-based classification challenging. Recent studies have shown that Nakagami parametric imaging of the gastrocnemius muscle is a suitable method for monitoring disease progression in ambulatory patients and for predicting loss of ambulation, demonstrating its potential for evaluating patients with DMD. In this study, several deep learning classification methods based on popular classic convolutional neural networks (CNNs), namely LeNet, AlexNet, VGG-16, and VGG-19, are proposed for classifying DMD from ultrasound images. Because DMD is rare, the original dataset consisted of only 85 B-mode and Nakagami ultrasound images acquired from 45 patients; it was expanded 25-fold by image augmentation, and LeNet, AlexNet, VGG-16, and VGG-19 were used as feature extraction models. The results show that, for classification by walking function, VGG-16 and VGG-19 with transfer learning achieved the best classification accuracy of 86.80% on B-mode images and 83.90% on Nakagami images; for classification by disease severity, VGG-16 and VGG-19 with transfer learning achieved 89.70% on B-mode images and 80.90% on Nakagami images. Notably, the larger networks (VGG-16 and VGG-19) have many more trainable parameters and therefore require more data, so they performed worse when trained without transfer learning. This study provides automatic and accurate classification of DMD ultrasound images using deeper CNNs and mitigates the limitation imposed by the small number of cases in rare diseases. It can assist physicians in diagnosing DMD and can be extended to other ultrasound image detection and recognition tasks.
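
    The pipeline summarized in the abstract, augmenting a small ultrasound dataset and reusing a pretrained VGG-16 as a frozen feature extractor, can be illustrated with a short sketch. This is a minimal, hypothetical example assuming a TensorFlow/Keras environment and a folder of class-labelled images; the directory name, image size, augmentation settings, and hyperparameters are illustrative assumptions, not the values used in the thesis.

    # Minimal sketch (assumed setup, not the thesis's actual code): VGG-16 transfer
    # learning for binary classification of muscle ultrasound images, with the kind
    # of image augmentation described in the abstract.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    IMG_SIZE = (224, 224)   # VGG-16's standard input resolution
    BATCH_SIZE = 16

    # Augmentation expands a small rare-disease dataset with label-preserving
    # geometric transforms (the thesis reports a 25-fold expansion).
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=10,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True,
        validation_split=0.2,
    )

    train_data = datagen.flow_from_directory(
        "dmd_ultrasound/",          # hypothetical folder: one subfolder per class
        target_size=IMG_SIZE,
        batch_size=BATCH_SIZE,
        class_mode="binary",        # e.g. ambulatory vs. non-ambulatory
        subset="training",
    )
    val_data = datagen.flow_from_directory(
        "dmd_ultrasound/",
        target_size=IMG_SIZE,
        batch_size=BATCH_SIZE,
        class_mode="binary",
        subset="validation",
    )

    # Transfer learning: reuse ImageNet-pretrained convolutional layers as a
    # frozen feature extractor and train only a small classification head.
    base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_data, validation_data=val_data, epochs=20)

    Freezing the convolutional base is what lets the larger VGG networks work with so few images; training them from scratch, as the abstract notes, underperforms because their millions of parameters cannot be estimated reliably from only 85 source images.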

    Chinese Abstract
    Abstract
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1: Introduction
      1.1 Disease Overview
      1.2 Deep Learning and Medical Imaging
      1.3 Introduction to Ultrasound
      1.4 Muscle Disease and Ultrasound Imaging
      1.5 Muscle Disease and Nakagami Imaging
      1.6 Muscle Disease and Deep Learning
    Chapter 2: Materials and Methods
      2.1 Study Framework
      2.2 DMD Patient Dataset
      2.3 Ultrasound Measurement
      2.4 Pixel Intensity Histograms
      2.5 Histogram Comparison
      2.6 Experimental Environment
      2.7 Data Structure and Deep Learning Training Framework
      2.8 Data Augmentation and Image Preprocessing
      2.9 Convolutional Neural Networks
      2.10 Algorithm Training
      2.11 Hyperparameter Tuning
      2.12 Transfer Learning
      2.13 Stratified k-Fold Cross-Validation
      2.14 Performance Metrics
      2.15 Grad-CAM
      2.16 ROC and AUC
    Chapter 3: Results
      3.1 Data Augmentation and Image Preprocessing Results
      3.2 Image Histogram Analysis
      3.3 Histogram Comparison
      3.4 Hyperparameter Experiment Results
      3.5 Neural Network Performance
      3.6 ROC and AUC
      3.7 Confusion Matrices
      3.8 Grad-CAM Visualization
    Chapter 4: Discussion
    Chapter 5: Conclusion


    Full text available from 2025/08/10 (campus network)
    Full text available from 2025/08/10 (off-campus network)
    Full text available from 2025/08/10 (National Central Library: Taiwan thesis and dissertation system)