
Graduate student: Yu-Shu Liao (廖育樞)
Thesis title (Chinese): 應用深度學習與3D列印技術於頭部電腦斷層掃描全自動分割辨識鼻竇與鼻竇黏膜發炎系統之開發及提出臨床診斷全新慢性鼻竇炎之評估標準
Thesis title (English): Application of deep learning and 3D printing technology in head computed tomography for development of a fully automated segmentation system to identify sinus and sinus mucosal inflammation and proposal of a new clinical diagnostic criteria for chronic sinusitis
Advisor: Chung-Feng Kuo (郭中豐)
Oral defense committee: Chang-Chiun Huang (黃昌群), Shao-Cheng Liu (劉紹正), Te-Li Su (蘇德利), Chin-Hsun Chiu (邱錦勳)
Degree: Master
Department: Department of Materials Science and Engineering, College of Engineering
Year of publication: 2021
Academic year of graduation: 109 (ROC calendar)
Language: Chinese
Number of pages: 132
Keywords: Chronic sinusitis, Lund-Mackay grading system, Deep learning, Convolutional neural network, Semantic segmentation, 3D printing, Youden index
Views: 170; Downloads: 1


Abstract:
    Chronic sinusitis is a highly prevalent airway disease that seriously degrades patients' quality of life. Clinical diagnosis of chronic sinusitis is based largely on patients' subjective descriptions of their symptoms, without objective, standardized measurements. Because computed tomography (CT) clearly shows the gray, opacified regions that mucosal inflammation produces inside the sinuses, CT assessment of the degree of sinus mucosal inflammation is currently considered an objective way to diagnose chronic sinusitis. Many grading systems exist for CT analysis of chronic sinusitis. This study adopts the Lund-Mackay grading system, currently used by the American Academy of Otolaryngology-Head and Neck Surgery, as the reference standard. The system scores the degree of mucosal inflammation separately in ten sinuses (the left and right maxillary, anterior ethmoid, posterior ethmoid, frontal, and sphenoid sinuses): a sinus with no mucosal opacification on the CT images is scored 0, a completely opacified sinus is scored 2, and any intermediate degree is scored 1. Diagnosticians can therefore grade scans consistently without spending much time; compared with other grading systems it is simple and widely applicable, and it can be performed without specialized training in medical radiology. However, the system has been criticized for offering too few grades to quantify the degree of mucosal inflammation, so researchers have proposed refinements with more grades. Among these refined systems, those that compute the proportion of mucosal inflammation in each sinus have been shown to assess chronic sinusitis better and to correlate more strongly with patients' symptom severity. Estimating these proportions by visual inspection, however, sacrifices the inter-rater consistency of the standard Lund-Mackay system and makes grading more subjective, and computing the degree of mucosal inflammation currently still requires manually outlining each sinus before scoring, which is time-consuming. A fully automated assessment system could therefore become the future gold standard for measuring the severity of chronic sinusitis.
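    As an illustration of the scoring rule described above, the following minimal Python sketch (not from the thesis; the sinus labels and opacification fractions shown are hypothetical) maps a per-sinus opacification fraction to the conventional 0/1/2 Lund-Mackay grade and sums the grades over the ten sinuses.

        # Minimal sketch of conventional Lund-Mackay scoring (illustrative only).
        # Assumes an opacification fraction in [0, 1] is already known for each sinus.
        SINUSES = [f"{side} {name}"
                   for side in ("left", "right")
                   for name in ("maxillary", "anterior ethmoid", "posterior ethmoid",
                                "frontal", "sphenoid")]

        def lm_grade(fraction: float) -> int:
            """0 = no opacification, 2 = complete opacification, 1 = anything in between."""
            if fraction <= 0.0:
                return 0
            if fraction >= 1.0:
                return 2
            return 1

        def lm_total(fractions: dict) -> int:
            """Sum of per-sinus grades over the ten sinuses (range 0-20)."""
            return sum(lm_grade(fractions.get(s, 0.0)) for s in SINUSES)

        # Hypothetical example: two partially opacified sinuses, one fully opacified.
        example = {"left maxillary": 0.35, "right maxillary": 1.0, "left frontal": 0.1}
        print(lm_total(example))  # -> 1 + 2 + 1 = 4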
    To improve the grading of sinus mucosal inflammation in chronic sinusitis, this study developed a fully automated system that is compatible with the current Lund-Mackay system and its refinements. The system quickly delineates the five sinuses on each side of a head CT scan, computes the proportion of mucosal inflammation in each, and provides physicians with recommendations for clinical diagnosis.
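    The central quantity the system reports is the fraction of each sinus volume occupied by inflamed mucosa. Below is a minimal sketch of that ratio, assuming the segmentation step yields a boolean mask for each sinus cavity and a boolean mask of inflamed mucosa on the same CT voxel grid (array names and the toy volume are illustrative, not from the thesis).

        import numpy as np

        def inflammation_ratio(sinus_mask: np.ndarray, inflamed_mask: np.ndarray) -> float:
            """Fraction of the sinus volume (in voxels) covered by inflamed mucosa."""
            sinus_voxels = int(np.count_nonzero(sinus_mask))
            if sinus_voxels == 0:
                return 0.0
            inflamed_voxels = int(np.count_nonzero(sinus_mask & inflamed_mask))
            return inflamed_voxels / sinus_voxels

        # Hypothetical toy volume: a 4x4x4-voxel sinus, one quarter of it inflamed.
        sinus = np.zeros((8, 8, 8), dtype=bool); sinus[2:6, 2:6, 2:6] = True
        inflamed = np.zeros_like(sinus); inflamed[2:6, 2:6, 2:3] = True
        print(inflammation_ratio(sinus, inflamed))  # 0.25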
    The study comprises three parts. The first part performs fully automatic segmentation and identification of the sinus regions using semantic segmentation with deep convolutional neural networks. The second part reconstructs and validates the volume of each sinus. The third part uses a grading system based on the proportion of mucosal inflammation in each sinus to produce an indicator of whether surgery is recommended.

    1. Fifty sets of patient CT data from Tri-Service General Hospital were used, and the contours of the maxillary, anterior ethmoid, posterior ethmoid, frontal, and sphenoid sinuses were annotated in the images in collaboration with physicians. The dataset was then used to train UNet, a semantic segmentation model developed specifically for medical image segmentation, and the model was improved with several newer deep learning techniques: fusing high-resolution feature maps through skip connections for more complete sinus contours, learning the relationships between feature maps to emphasize the important ones, depthwise separable convolutions to reduce computation and parameter count, residual connections to stabilize training, and semi-supervised learning to train on and validate unlabeled data. Model performance was evaluated with the Dice coefficient, a standard metric for medical image segmentation, and reached 91.57%, indicating accurate segmentation and identification. Inference on 100 patient datasets took an average of only 9.7285 seconds per patient, a substantial efficiency gain over the hour or more a physician needs to outline the sinuses manually. In addition, three otolaryngologists from Tri-Service General Hospital scored the accuracy of the model's predictions on a further 100 patient CT scans covering the left and right maxillary, anterior ethmoid, posterior ethmoid, frontal, and sphenoid sinuses; the average accuracy was 94.6%. This indicates that the model accurately segments the contours of the different sinus regions, even at sinus ostia that are difficult to distinguish; misjudgments occurred only in a small number of cases where the anterior ethmoid sinus connects to the frontal sinus and the image slices show no boundary between them. The system is therefore fast and has a low misjudgment rate, which makes it useful for giving clinical advice to physicians.
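    The Dice coefficient reported above can be computed directly from the predicted and ground-truth masks. A minimal NumPy sketch, assuming one boolean mask per class (the class dictionaries are illustrative; this is not the thesis code):

        import numpy as np

        def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
            """Dice = 2 * |P intersect T| / (|P| + |T|); 1.0 means perfect overlap."""
            pred = pred.astype(bool)
            truth = truth.astype(bool)
            intersection = np.count_nonzero(pred & truth)
            return (2.0 * intersection + eps) / (np.count_nonzero(pred) + np.count_nonzero(truth) + eps)

        def mean_dice(pred_by_class: dict, truth_by_class: dict) -> float:
            """Average Dice over sinus classes, e.g. {'left maxillary': mask, ...}."""
            scores = [dice(pred_by_class[c], truth_by_class[c]) for c in truth_by_class]
            return float(np.mean(scores))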
    2. The maxillary, anterior ethmoid, posterior ethmoid, frontal, and sphenoid sinuses and their mucosal inflammation contours identified by the semantic segmentation model were reconstructed in three dimensions with the marching cubes algorithm. Sinus samples from three different patients were then fabricated with 3D printing to compare the physical volumes with the volumes computed by the system. The average error was 1.44%, which verifies the reliability of the sinus and mucosal inflammation volumes obtained by the proposed method.
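    A sketch of the volume check in this step, assuming a boolean segmentation mask and the CT voxel spacing in millimetres are available (the mask and reference value shown are hypothetical); a surface mesh for 3D printing could likewise be extracted with an off-the-shelf marching cubes implementation such as skimage.measure.marching_cubes.

        import numpy as np

        def mask_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
            """Volume of a boolean mask: voxel count times the volume of one voxel."""
            voxel_volume = float(np.prod(spacing_mm))  # mm^3 per voxel
            return np.count_nonzero(mask) * voxel_volume

        def percent_error(estimated: float, reference: float) -> float:
            """Relative error of the estimated volume against a measured reference."""
            return abs(estimated - reference) / reference * 100.0

        # Hypothetical numbers: 120000 voxels at 0.5 x 0.5 x 1.0 mm spacing,
        # compared against a volume measured from a 3D-printed sample.
        mask = np.zeros((100, 100, 100), dtype=bool); mask[:30, :40, :] = True
        est = mask_volume_mm3(mask, (0.5, 0.5, 1.0))
        print(est, percent_error(est, reference=30400.0))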
    3. A new Lund-Mackay grading system, computed from the proportion of mucosal inflammation in the maxillary, anterior ethmoid, posterior ethmoid, frontal, and sphenoid sinuses, was used as an indicator for recommending surgery. The optimal cut-off for assessing the necessity of surgery was derived from the Youden index and obtained at a score of 6.92, yielding a sensitivity of 96.0%, a specificity of 92.6%, and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.981, compared with 84.0%, 86.8%, and 0.857 for the conventional Lund-Mackay grading system, and therefore provides more accurate clinical detection and diagnosis. This confirms that assessing chronic sinusitis from the actual proportion of mucosal inflammation in each sinus reflects patients' symptom severity more effectively and gives physicians a fast and accurate reference for deciding on surgery.
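    The cut-off selection described above can be reproduced with a standard ROC analysis. A minimal sketch using scikit-learn, assuming a binary surgery label and the new score for each patient (the arrays below are hypothetical, not the study data):

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # Hypothetical data: 1 = surgery performed, 0 = no surgery, with the new score per patient.
        y_true = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1])
        score = np.array([2.1, 4.0, 5.5, 7.2, 6.1, 8.9, 7.0, 9.5, 3.3, 6.9])

        fpr, tpr, thresholds = roc_curve(y_true, score)
        youden_j = tpr - fpr  # Youden index J = sensitivity + specificity - 1
        best = int(np.argmax(youden_j))
        print("optimal threshold:", thresholds[best])
        print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
        print("AUC:", roc_auc_score(y_true, score))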

    Table of contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1 Research Background and Motivation
        1.2 Literature Review
            1.2.1 Clinical Assessment Methods for Chronic Sinusitis
            1.2.2 Medical Image Segmentation Theory
            1.2.3 Sinus Region Segmentation
            1.2.4 Degree of Sinus Mucosal Inflammation in Chronic Sinusitis
        1.3 Research Objectives
        1.4 Thesis Organization
    Chapter 2  Relevant Medical Background
        2.1 Sinus Anatomy and Function
        2.2 Sinus Infection and Sinusitis
        2.3 CT-Based Assessment of Chronic Sinusitis
    Chapter 3  Research Methods and Theory
        3.1 Deep Learning
        3.2 Convolutional Neural Networks
            3.2.1 Convolutional Layers
            3.2.2 Activation Functions
        3.3 Data Augmentation
            3.3.1 Gaussian Blur
            3.3.2 Gaussian Noise
            3.3.3 Mixup
        3.4 K-Fold Cross-Validation
        3.5 Morphology
            3.5.1 Erosion and Dilation
            3.5.2 Opening
            3.5.3 Connected-Component Labeling
        3.6 Marching Cubes 3D Reconstruction
        3.7 New Lund-Mackay Score Calculation Method
        3.8 3D Printing Technology
    Chapter 4  Experiments and Validation
        4.1 Dataset Allocation
        4.2 Image Data Augmentation
            4.2.1 Gaussian Blur
            4.2.2 Gaussian Noise
            4.2.3 Mixup
        4.3 Sinus Region Contour Segmentation
            4.3.1 Loss Function Selection
            4.3.2 Semantic Segmentation Model Training
            4.3.3 Performance Evaluation Metrics
            4.3.4 Semantic Segmentation Model Improvements
        4.4 Semi-Supervised Learning
            4.4.5 Pseudo-Label Generation
            4.4.6 Semi-Supervised Learning Training
        4.5 Volume Reconstruction
            4.5.1 Morphology
            4.5.2 3D Reconstruction
            4.5.3 Volume Validation
        4.6 Chronic Sinusitis Severity Calculation
            4.6.1 New Lund-Mackay Score Calculation
            4.6.2 Diagnostic Evaluation
    Chapter 5  Experimental Results and Analysis
        5.1 Sinus Segmentation Performance Analysis
        5.2 New Lund-Mackay Score Results
        5.3 Sinus Volume Analysis
    Chapter 6  Discussion
    Chapter 7  Conclusions
        7.1 Future Work
    References
