
Author: 劉騏瑞 (Chi-Jui Liu)
Thesis Title: 基於小型數據集使用遷移學習於胸腔X光影像全自動檢測肋骨骨折系統之研究
(An Automatic Rib Fracture Detection System for Chest X-ray Images Using Transfer Learning Based on a Small Dataset)
Advisor: 郭中豐 (Chung-Feng Kuo)
Committee Members: 郭中豐 (Chung-Feng Kuo), 黃昌群 (Chang-Chiun Huang), 張大鵬 (Ta-Peng Chang), 趙新民 (Shin-Min Chao)
Degree: Master
Department: Department of Materials Science and Engineering, College of Engineering
Year of Publication: 2023
Academic Year of Graduation: 111
Language: Chinese
Number of Pages: 111
Keywords (Chinese): 肋骨骨折、深度學習、遷移學習、限制對比度自適應直方圖均衡 (CLAHE)、隨機圖像裁剪和修補 (RICAP)、YOLOv7、Faster RCNN、DETR、X光
Keywords (English): Rib fracture, Deep learning, Transfer learning, CLAHE, RICAP, YOLOv7, Faster RCNN, DETR, X-ray

Rib fractures are a common skeletal injury that requires timely treatment. Chest X-ray imaging remains a routine method for diagnosing rib fractures, and detecting as early as possible whether a patient has a fracture, and where, strongly influences subsequent treatment. Compared with computed tomography (CT), chest X-ray is convenient, relatively inexpensive, and reduces the patient's exposure to high-dose radiation. Drawing on the opinions of radiologists and their clinical diagnostic experience, the dataset was analyzed by fracture position and degree of displacement: posterior, lateral, and anterior positions, and the displacement categories of no displacement (d = 0), displacement of less than 2 mm (d < 2 mm), displacement between 2 mm and 10 mm (2 mm < d < 10 mm), and displacement greater than 10 mm (d > 10 mm), where d denotes the displacement and all displacements were measured by experienced radiologists. The goal of the inference model is to combine good overall performance with a higher recall on the no-displacement category, because that category has the highest rate of missed diagnoses in clinical practice.
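For illustration only, the displacement thresholds described above can be expressed as a small lookup; the function name, label strings, and boundary handling below are hypothetical and not taken from the thesis.

```python
def displacement_category(d_mm: float) -> str:
    """Map a radiologist-measured displacement d (in mm) to the four
    categories used in this study. Illustrative sketch only; exact
    boundary handling is an assumption."""
    if d_mm == 0:
        return "no displacement (d = 0)"
    if d_mm < 2:
        return "d < 2 mm"
    if d_mm < 10:
        return "2 mm < d < 10 mm"
    return "d > 10 mm"
```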
In this thesis, we modify the YOLOv7 pathway and propose a new hybrid model. Before the head network feeds into the detection block, a Bottleneck of Transformer (BoT) shortcut path is added and summed with the original head-network output, so that the detection block retains both the feature maps of the original YOLOv7 network and the output of the Transformer block. In addition, before an image is fed to the network, an edge-detection algorithm extracts an edge map that is appended to the original image as a new channel, strengthening the edge features of the input. For comparison, three strong and representative object detection models were selected, namely YOLOv7, Faster RCNN, and DETR, and transfer learning was used to train and evaluate them for rib fracture detection on a small dataset collected from Tri-Service General Hospital containing frontal and oblique chest X-ray images; the developed models can detect rib fractures in both projection views. Because the dataset contains only 456 images, the RICAP data augmentation method was used to increase the diversity of the training set, with some adjustments made to RICAP according to the characteristics of rib fractures. To make the rib edges clearer, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied. The results show that the new hybrid model, YOLOv7, Faster RCNN, and DETR reach AUCs of 0.850, 0.823, 0.751, and 0.740 on the validation set, respectively, with the new hybrid model performing best. Using the new hybrid model, recalls of 0.623, 0.821, 0.878, and 0.933 were obtained for no displacement, displacement of less than 2 mm, displacement between 2 mm and 10 mm, and displacement greater than 10 mm, respectively, and the recall over all labels was 0.840. The detection model can give clinicians and radiologists a real-time second opinion, improving the efficiency and accuracy of X-ray image interpretation.
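The preprocessing described above (CLAHE plus an edge map appended as an extra input channel) can be sketched with OpenCV as follows; the CLAHE parameters and the use of Canny as the edge detector are assumptions for illustration, not the thesis's actual settings.

```python
import cv2
import numpy as np

def preprocess_cxr(path: str) -> np.ndarray:
    """Sketch of the preprocessing idea: CLAHE to sharpen rib edges,
    plus an edge map stacked as an additional channel. Parameter
    values are placeholders, not the settings used in the thesis."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Contrast Limited Adaptive Histogram Equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # Edge map; Canny stands in for whichever edge detector was used
    edges = cv2.Canny(enhanced, 50, 150)

    # Stack the enhanced image and the edge map as separate channels
    return np.dstack([enhanced, edges])
```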


Rib fracture is a common skeletal injury that requires timely treatment. At present, chest X-ray imaging is still a popular method for diagnosing rib fractures. Early diagnosis of whether a patient has a fracture, and of the fracture location, is crucial for subsequent treatment. Compared with computed tomography (CT), the chest X-ray is convenient, relatively low in cost, and reduces the risk of exposing patients to high-dose radiation. Based on the opinions of radiologists and on clinical diagnostic experience, this study analyzed the dataset by fracture position and fracture degree: posterior, lateral, and anterior positions, together with no displacement (d = 0), displacement less than 2 mm (d < 2 mm), displacement greater than 2 mm and less than 10 mm (2 mm < d < 10 mm), and displacement greater than 10 mm (d > 10 mm), where the displacement was measured by an experienced radiologist. The objective of the inference model is to achieve better overall performance together with a higher recall rate on the no-displacement category, because that category has a higher misdiagnosis rate in clinical diagnosis.
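Because the objective above is framed in terms of recall on the no-displacement category, the evaluation reduces to per-category recall, TP / (TP + FN); a minimal tally is sketched below, with hypothetical input structures (one ground-truth label per annotated fracture and a flag for whether it was matched by a prediction).

```python
from collections import Counter

def per_category_recall(gt_labels, matched):
    """Recall = TP / (TP + FN) per displacement category.
    `gt_labels` lists the category of each annotated fracture;
    `matched` is a parallel list of booleans marking whether the
    annotation was hit by a prediction (e.g. IoU above a threshold).
    Both structures are hypothetical, for illustration only."""
    total = Counter(gt_labels)
    hits = Counter(lbl for lbl, hit in zip(gt_labels, matched) if hit)
    return {lbl: hits[lbl] / total[lbl] for lbl in total}
```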
In this study, we improve YOLOv7 and propose a new hybrid model. Before the head network is fed into the detection block, a shortcut path through a Bottleneck of Transformer (BoT) block is added to the output of the original head network, so that the detection block retains the outputs of both the Transformer block and the original YOLOv7 network. Before the image is input, an edge detection algorithm is used to extract an edge map, which is added to the original image as a new channel to strengthen the edge features of the input. For comparison, we selected three powerful and representative object detection models, namely YOLOv7, Faster RCNN, and DETR. Transfer learning was used on a small dataset for training and evaluation to detect rib fractures. The dataset was collected from Tri-Service General Hospital and contains frontal and oblique chest X-ray images; the developed model can detect rib fractures in both projection views. As the small dataset used in this study has only 456 images, the RICAP method was used to diversify the training dataset and was adjusted to some extent according to the features of rib fractures. To make the rib edges clearer, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied. The experimental results show that the AUC of our method, YOLOv7, Faster RCNN, and DETR on the validation set is 0.850, 0.823, 0.751, and 0.740, respectively, and our method has the best performance. With our method, the recall rates are 0.623, 0.821, 0.878, and 0.933 in the categories of no displacement, displacement less than 2 mm, displacement greater than 2 mm and less than 10 mm, and displacement greater than 10 mm, respectively, and the recall rate over all labels is 0.840. Our model can provide a second opinion to clinicians and radiologists in real time, improving the efficiency and correctness of X-ray image interpretation.
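The additive shortcut described above, in which a Bottleneck of Transformer branch is summed with the original YOLOv7 head output before the detection block, is sketched below in PyTorch; a generic multi-head self-attention block stands in for the thesis's exact BoT design, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BoTShortcutFusion(nn.Module):
    """Illustrative fusion of a self-attention branch with a CNN head
    feature map, summed element-wise before detection. The attention
    design and layer sizes are assumptions, not the thesis's exact BoT."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Flatten the spatial grid into a token sequence for self-attention
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        shortcut = self.norm(attn_out).transpose(1, 2).reshape(b, c, h, w)
        # Element-wise sum keeps both the CNN features and the attention branch
        return feat + shortcut

# Example: fuse a 256-channel, 20x20 head feature map
fused = BoTShortcutFusion(256)(torch.randn(1, 256, 20, 20))
```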

Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Research Background and Motivation
  1.2 Literature Review
    1.2.1 Clinical Assessment Methods for Thoracic Rib Fractures
    1.2.2 Deep Learning Applications in Medical Imaging
    1.2.3 Rib Fracture Diagnosis Based on Chest X-ray Images
  1.3 Research Objectives
  1.4 Thesis Organization
Chapter 2 Background on Diagnostic Methods
  2.1 Principles of X-ray Radiography
  2.2 Diagnosing Rib Fractures with Chest X-ray Images
  2.3 Diagnosing Rib Fractures with Computed Tomography
  2.4 Severity of Rib Fractures
Chapter 3 Research Methods and Theory
  3.1 Deep Learning
  3.2 Convolutional Neural Networks
    3.2.1 Convolutional Layer
    3.2.2 Activation Functions
    3.2.3 Pooling Layer
    3.2.4 Dropout Layer
    3.2.5 Residual Block
  3.3 Image Enhancement
    3.3.1 Contrast Limited Adaptive Histogram Equalization
  3.4 Data Augmentation
Chapter 4 Experiments and Validation
  4.1 Dataset
    4.1.1 CXR Dataset Analysis
  4.2 Image Normalization
  4.3 Image Enhancement
    4.3.1 Contrast Limited Adaptive Histogram Equalization
  4.4 Data Augmentation
    4.4.1 Random Rotation
    4.4.2 Random Scaling
    4.4.3 Random Cropping and Patching
    4.4.4 Random Horizontal Flipping
  4.5 Deep Learning Model Training
    4.5.1 Transfer Learning
    4.5.2 YOLOv7 Model Training
    4.5.3 Faster RCNN Model Training
    4.5.4 DETR Model Training
    4.5.5 Hybrid Model Based on Improved YOLOv7
  4.6 Evaluation Metrics
    4.6.1 Confusion Matrix
    4.6.2 Average Precision (AP)
    4.6.3 Area Under the Curve (AUC)
    4.6.4 F1 Score
Chapter 5 Experimental Results and Analysis
  5.1 Object Detection Model Training
    5.1.1 YOLOv7 Model Training
    5.1.2 Faster RCNN Model Training
    5.1.3 DETR Model Training
    5.1.4 Training of the Hybrid Model Based on Improved YOLOv7
  5.2 Rib Fracture Detection Performance Analysis
  5.3 Fusion of Predicted Labels with Original Images
  5.4 Prediction Heat Maps
Chapter 6 Discussion and Conclusion
References


Full Text Release Date: 2025/08/30 (campus network)
Full Text Release Date: 2025/08/30 (off-campus network)
Full Text Release Date: 2025/08/30 (National Central Library: Taiwan Dissertations and Theses System)