
Student: 洪鼎智 (Ding-Zhi Hong)
Thesis title: Evaluation of the CW-Net deep learning method for multi-type cell detection and classification in application to bone marrow examination, mitotic figure examination and cell tracking with mitosis detection dataset challenge
Advisor: 王靖維 (Ching-Wei Wang)
Committee members: 王靖維 (Ching-Wei Wang), 趙載光 (Tai-Kuang Chao), 許維君 (Wei-Chun Hsu)
Degree: Master
Department: College of Applied Sciences, Graduate Institute of Biomedical Engineering
Year of publication: 2023
Academic year of graduation: 111 (ROC calendar)
Language: Chinese
Pages: 67
Keywords: deep learning, multi-type cell detection and classification, whole-slide image, bone marrow examination, mitotic figure examination
Record views: 181; downloads: 0


    Bone marrow examination is one of the most important procedures in diagnosing hematologic disorders and is typically performed under the microscope using an oil-immersion objective lens at a total magnification of 100×. Likewise, the detection and identification of mitotic figures are critical not only for accurate cancer diagnosis and grading but also for predicting therapy success and survival. Fully automated bone marrow examination and mitotic figure examination on whole-slide images are in high demand but remain challenging and poorly explored. First, the complexity and poor reproducibility of microscopic image examination stem from cell-type diversity, the delicate intra-lineage differences within the multi-type cell maturation process, overlapping cells, lipid interference, and stain variation.
    Second, manual annotation of whole-slide images is tedious, laborious, and subject to intra-observer variability, which restricts the supervised information to a limited set of easily identifiable, scattered cells annotated by humans.

    Third, when the training data are sparsely labeled, many unlabeled objects of interest are wrongly treated as background, which severely confuses AI learners. This thesis presents an efficient and fully automatic CW-Net approach that addresses the three issues above and demonstrates superior performance on both bone marrow examination and mitotic figure examination. The experimental results demonstrate the robustness and generalizability of the proposed CW-Net on a large bone marrow WSI dataset with 16,456 annotated cells of 19 bone marrow cell types and on a large-scale WSI dataset for mitotic figure assessment with 262,481 annotated cells of five cell types.
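    The sparse-labeling failure mode described above can be made concrete with a minimal sketch (hypothetical code, not the thesis's actual loss): if every unannotated pixel is naively treated as background, the loss punishes genuine but unlabeled cells; restricting the loss to annotated regions avoids that penalty.

    ```python
    import numpy as np

    def masked_bce(pred, target, labeled):
        """Binary cross-entropy restricted to annotated regions.

        pred, target, labeled: float arrays of the same shape.
        labeled is 1 where a human annotation exists (a cell or confirmed
        background) and 0 in unannotated regions, so unlabeled cells are
        never penalized as background -- the third issue described above.
        """
        eps = 1e-7
        p = np.clip(pred, eps, 1 - eps)
        bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
        # Average only over the annotated pixels (guard against an empty mask).
        return float((bce * labeled).sum() / max(labeled.sum(), 1.0))
    ```

    The mask here is purely illustrative; CW-Net's dual-layer filtered negative-sample sampling (thesis §3.1.2) is a more elaborate treatment of the same problem.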

    In addition, a live-cell tracking algorithm is built on top of CW-Net to improve the diagnostic accuracy of cell variations and to reduce the manpower required for live-cell tracking, thereby enhancing diagnostic efficiency. The effectiveness and accuracy of this algorithm are validated against the results of live-cell imaging tracking data.
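    The tracking pipeline's first two pre-filters (removing oversized detection boxes, then rejecting flat, likely false detections via the Sobel operator, thesis §3.2.1–3.2.2) can be sketched roughly as follows; the box format, thresholds, and function names here are assumptions for illustration, not the thesis's implementation.

    ```python
    import numpy as np

    # 3x3 Sobel kernel for horizontal gradients; its transpose gives the vertical one.
    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

    def sobel_energy(patch):
        """Mean gradient magnitude of a grayscale patch (naive 3x3 Sobel)."""
        h, w = patch.shape
        gx = np.zeros((h - 2, w - 2))
        gy = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                win = patch[i:i + 3, j:j + 3]
                gx[i, j] = (win * SOBEL_X).sum()
                gy[i, j] = (win * SOBEL_X.T).sum()
        return float(np.hypot(gx, gy).mean())

    def filter_boxes(boxes, image, max_area, min_energy):
        """Drop oversized boxes, then drop low-texture (likely false) boxes."""
        kept = []
        for (x0, y0, x1, y1) in boxes:
            if (x1 - x0) * (y1 - y0) > max_area:
                continue  # step 1: box far larger than any plausible cell
            patch = image[y0:y1, x0:x1]
            if patch.shape[0] < 3 or patch.shape[1] < 3:
                continue  # too small for a 3x3 Sobel window
            if sobel_energy(patch) < min_energy:
                continue  # step 2: nearly flat patch, likely a false detection
            kept.append((x0, y0, x1, y1))
        return kept
    ```

    A detection box over featureless background has near-zero Sobel energy and is rejected, while a box containing a cell boundary retains strong gradients and survives both filters.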

    Abstract (Chinese)
    Abstract
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Motivation
      1.2  Objectives
      1.3  Contributions
      1.4  Thesis organization
    Chapter 2  Background
      2.1  Dataset preparation
        2.1.1  Liu's-stain bone marrow smear samples (Liu's stain; whole-slide image, WSI)
        2.1.2  H&E-stained canine cutaneous mast cell tumor mitosis samples (hematoxylin and eosin stain, H&E; whole-slide image, WSI)
        2.1.3  Live mitosis cell videos
      2.2  Related methods and applications
        2.2.1  Weakly supervised learning
        2.2.2  Semi-automatic bone marrow cell analysis methods
        2.2.3  Semi-automatic mitotic cell analysis methods
        2.2.4  Live-cell image tracking and analysis methods
    Chapter 3  Methods
      3.1  Method 1: multi-class detection and classification of bone marrow cells and mitotic tumor cells
        3.1.1  CW-Net network architecture
        3.1.2  Dual-layer filtered negative-sample sampling (Dual Layer FNIS)
        3.1.3  Multi-class non-maximum suppression (MCNMS)
        3.1.4  Data augmentation and normalization
        3.1.5  Adaptive learning (AL)
        3.1.6  CW-Net backbone network architecture
      3.2  Method 2: live-cell image detection and tracking
        3.2.1  Filtering oversized detection boxes (Filter Big Detect Box)
        3.2.2  Filtering misdetected boxes with the Sobel operator (Filter Detect Wronging Box)
        3.2.3  Selecting verified boxes on a per-5-frame basis (Pick up Verify Box)
        3.2.4  Tracking the detection boxes in every frame and assigning ordered IDs
    Chapter 4  Experimental Design and Results
      4.1  Datasets
        4.1.1  Bone marrow smear dataset (Bone Marrow Dataset)
        4.1.2  Canine cutaneous tumor mitosis dataset (Mitotic Dataset)
        4.1.3  Live mitosis cell image dataset (Live Mitosis Cell Dataset)
      4.2  Data analysis and imaging results
        4.2.1  Analysis methods
        4.2.2  Bone marrow smear dataset: analysis and imaging results
        4.2.3  Canine cutaneous and canine mammary carcinoma mitosis dataset: analysis and imaging results
        4.2.4  Live mitosis cell image dataset: analysis and imaging results
    Chapter 5  Conclusion and Future Work
      5.1  Conclusion
      5.2  Future work
    References

    Fig. 2.1  Stained bone marrow smear image
    Fig. 2.2  H&E-stained canine cutaneous mast cell tumor section
    Fig. 2.3  Live mitosis cell images
    Fig. 3.1  Multi-class bone marrow smear cell detection flowchart
    Fig. 3.2  Weakly supervised detection processing steps
    Fig. 3.3  Multi-class bone marrow smear cell detection flowchart
    Fig. 3.4  Weakly supervised tracking-and-detection flowchart
    Fig. 3.5  Weakly supervised tracking: filtering oversized labels
    Fig. 3.6  Weakly supervised tracking: filtering misjudged labels
    Fig. 3.7  Weakly supervised tracking: label confirmation
    Fig. 3.8  Weakly supervised tracking labels
    Fig. 4.1  Bone marrow smear cell classes and annotation counts
    Fig. 4.2  Mitotic cell classes and annotation counts
    Fig. 4.3  Live mitosis cell classes and annotation counts
    Fig. 4.4  Confusion matrix
    Fig. 4.5  Bone marrow smear results
    Fig. 4.6  Bone marrow smear cell-class confusion matrix
    Fig. 4.7  Bone marrow smear kappa-value confusion matrix
    Fig. 4.8  Comparison of the CW-Net method with the Bertram method [1]
    Fig. 4.9  Live mitosis cell imaging results
    Table 2-1  Mitotic cell sampling sources and applications [2]
    Table 3-1  CW-Net backbone network (ResNet-101)
    Table 4-1  Quantitative evaluation in identification of BM cells
    Table 4-2  Intra- and inter-observer reliability analysis using Cohen's kappa
    Table 4-3  Quantitative evaluation in mitotic figure examination
    Table 4-4  Quantitative evaluation in identification of live mitosis cells
    Table 4-5  Live mitosis cell imaging mitosis detection dataset evaluation measures [3]
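    Section 3.1.3 of the outline above names a multi-class non-maximum suppression (MCNMS) step. A generic class-wise NMS, a common baseline rather than necessarily the thesis's exact variant, keeps the highest-scoring box per class and suppresses overlapping boxes of the same class:

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter) if inter else 0.0

    def multiclass_nms(dets, iou_thr=0.5):
        """NMS applied independently per class.

        dets: list of (box, score, cls). Boxes of different classes never
        suppress one another, so e.g. a mitotic cell and an overlapping
        non-mitotic cell can both survive.
        """
        kept = []
        for cls in {c for _, _, c in dets}:
            cand = sorted((d for d in dets if d[2] == cls),
                          key=lambda d: d[1], reverse=True)
            while cand:
                best = cand.pop(0)
                kept.append(best)
                cand = [d for d in cand if iou(best[0], d[0]) < iou_thr]
        return kept
    ```

    Running plain single-class NMS instead would wrongly merge overlapping detections of different cell types, which is why the per-class variant matters for multi-type cell detection.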

    [1] C. A. Bertram, M. Aubreville, C. Marzahl, A. Maier, and R. Klopfleisch, “A large
    scale dataset for mitotic figure assessment on whole slide images of canine cutaneous
    mast cell tumor,” Scientific data, vol. 6, no. 1, p. 274, 2019.
    [2] S. Anjum and D. Gurari, “Ctmc: Cell tracking with mitosis detection dataset challenge,”
    in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
    Recognition Workshops, pp. 982–983, 2020.
    [3] P. Dendorfer, A. Ošep, and L. Leal-Taixé, “Multiple object tracking benchmark.”
    [4] S.-H. Lee, W. Erber, A. Porwit, M. Tomonaga, L. Peterson, and the International Council for Standardization in Hematology,
    “Icsh guidelines for the standardization of bone marrow specimens and reports,”
    International journal of laboratory hematology, vol. 30, no. 5, pp. 349–364, 2008.
    [5] P. L. Greenberg, H. Tuechler, J. Schanz, G. Sanz, G. Garcia-Manero, F. Solé, J. M.
    Bennett, D. Bowen, P. Fenaux, F. Dreyfus, et al., “Revised international prognostic
    scoring system for myelodysplastic syndromes,” Blood, The Journal of the American
    Society of Hematology, vol. 120, no. 12, pp. 2454–2465, 2012.
    [6] S. Swerdlow, E. Campo, N. L. Harris, E. Jaffe, S. Pileri, H. Stein, J. Thiele, D. Arber,
    R. Hasserjian, M. Le Beau, et al., “Who classification of tumours of haematopoietic
    and lymphoid tissues (revised 4th edition),” IARC: Lyon, vol. 421, 2017.
    [7] S. Kumar, B. Paiva, K. C. Anderson, B. Durie, O. Landgren, P. Moreau, N. Munshi,
    S. Lonial, J. Bladé, M.-V. Mateos, et al., “International myeloma working group
    consensus criteria for response and minimal residual disease assessment in multiple
    myeloma,” The lancet oncology, vol. 17, no. 8, pp. e328–e346, 2016.
    [8] G. Campanella, M. G. Hanna, L. Geneslaw, A. Miraflor, V. W. K. Silva, K. J. Busam,
    E. Brogi, V. E. Reuter, D. S. Klimstra, and T. J. Fuchs, “Clinical-grade computational
    pathology using weakly supervised deep learning on whole slide images,” Nature
    medicine, vol. 25, no. 8, pp. 1301–1309, 2019.
    [9] R. Chandradevan, A. A. Aljudi, B. R. Drumheller, N. Kunananthaseelan, M. Amgad,
    D. A. Gutman, L. A. Cooper, and D. L. Jaye, “Machine-based detection and classification for bone marrow aspirate differential counts: initial development focusing on
    nonneoplastic cells,” Laboratory investigation, vol. 100, no. 1, pp. 98–109, 2020.
    [10] L. Meintker, J. Ringwald, M. Rauh, and S. W. Krause, “Comparison of automated
    differential blood cell counts from abbott sapphire, siemens advia 120, beckman
    coulter dxh 800, and sysmex xe-2100 in normal and pathologic samples,” American
    journal of clinical pathology, vol. 139, no. 5, pp. 641–650, 2013.
    [11] M. C. Balkenhol, D. Tellez, W. Vreuls, P. C. Clahsen, H. Pinckaers, F. Ciompi,
    P. Bult, and J. A. van der Laak, “Deep learning assisted mitotic counting for breast
    cancer,” Laboratory investigation, vol. 99, no. 11, pp. 1596–1606, 2019.
    [12] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global
    cancer statistics 2018: Globocan estimates of incidence and mortality worldwide for
    36 cancers in 185 countries,” CA: a cancer journal for clinicians, vol. 68, no. 6,
    pp. 394–424, 2018.
    [13] C. A. Bertram, M. Aubreville, C. Gurtner, A. Bartel, S. M. Corner, M. Dettwiler,
    O. Kershaw, E. L. Noland, A. Schmidt, D. G. Sledge, et al., “Computerized calculation
    of mitotic count distribution in canine cutaneous mast cell tumor sections:
    mitotic count is area dependent,” Veterinary pathology, vol. 57, no. 2, pp. 214–226,
    2020.
    [14] C. Malon, E. Brachtel, E. Cosatto, H. P. Graf, A. Kurata, M. Kuroda, J. S. Meyer,
    A. Saito, S. Wu, and Y. Yagi, “Mitotic figure recognition: Agreement among pathologists
    and computerized detector,” Analytical Cellular Pathology, vol. 35, no. 2,
    pp. 97–100, 2012.
    [15] M. Aubreville, N. Stathonikos, C. A. Bertram, R. Klopfleisch, N. Ter Hoeve,
    F. Ciompi, F. Wilm, C. Marzahl, T. A. Donovan, A. Maier, et al., “Mitosis domain
    generalization in histopathology images—the midog challenge,” Medical Image
    Analysis, vol. 84, p. 102699, 2023.
    [16] M. Veta, Y. J. Heng, N. Stathonikos, B. E. Bejnordi, F. Beca, T. Wollmann, K. Rohr,
    M. A. Shah, D. Wang, M. Rousson, et al., “Predicting breast tumor proliferation
    from whole-slide images: the tupac16 challenge,” Medical image analysis, vol. 54,
    pp. 111–121, 2019.
    [17] L. Roux, “Detection of mitosis and evaluation of nuclear atypia score in breast cancer
    histological images,” in 22nd International Conference on Pattern Recognition,
    2014.
    [18] R. Ludovic, R. Daniel, L. Nicolas, K. Maria, I. Humayun, K. Jacques, C. Frédérique,
    G. Catherine, et al., “Mitosis detection in breast cancer histological images an icpr
    2012 contest,” Journal of pathology informatics, vol. 4, no. 1, p. 8, 2013.
    [19] C. M. Franz, G. E. Jones, and A. J. Ridley, “Cell migration in development and
    disease,” Developmental cell, vol. 2, no. 2, pp. 153–158, 2002.
    [20] L. G. Rodriguez, X. Wu, and J.-L. Guan, “Wound-healing assay,” Cell Migration:
    Developmental Methods and Protocols, pp. 23–29, 2005.
    [21] J. Condeelis and J. W. Pollard, “Macrophages: obligate partners for tumor cell migration,
    invasion, and metastasis,” Cell, vol. 124, no. 2, pp. 263–266, 2006.
    [22] R. Evans, I. Patzak, L. Svensson, K. De Filippo, K. Jones, A. McDowall, and
    N. Hogg, “Integrins in immunity,” Journal of cell science, vol. 122, no. 2, pp. 215–
    225, 2009.
    [23] P. Dollár, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: A benchmark,”
    in 2009 IEEE conference on computer vision and pattern recognition, pp. 304–311,
    IEEE, 2009.
    [24] J. Zhuang, Y. Dong, H. Bai, and G. Wang, “Multi-branch siamese network for high
    performance online visual tracking,” in Asian Conference on Machine Learning,
    pp. 519–534, PMLR, 2019.
    [25] A. Milan, L. Leal-Taixé, I. Reid, S. Roth, and K. Schindler, “Mot16: A benchmark
    for multi-object tracking,” arXiv preprint arXiv:1603.00831, 2016.
    [26] L. Wen, D. Du, Z. Cai, Z. Lei, M. Chang, H. Qi, J. Lim, M. Yang, and S. Lyu, “A
    new benchmark and protocol for multi-object detection and tracking,” arXiv preprint
    arXiv:1511.04136, 2015.
    [27] C. W. Elston, I. O. Ellis, and S. E. Pinder, “Pathological prognostic factors in breast
    cancer,” Critical reviews in oncology/hematology, vol. 31, no. 3, pp. 209–223, 1999.
    [28] P. J. van Diest, J. P. Baak, P. Matze-Cok, E. Wisse-Brekelmans, C. Van Galen,
    P. Kurver, S. Bellot, J. Fijnheer, L. Van Gorp, W. Kwee, et al., “Reproducibility
    of mitosis counting in 2,469 breast cancer specimens: results from the multicenter
    morphometric mammary carcinoma project,” Human pathology, vol. 23, no. 6,
    pp. 603–607, 1992.
    [29] E. Romansik, C. Reilly, P. Kass, P. Moore, and C. London, “Mitotic index is predictive
    for survival for canine cutaneous mast cell tumors,” Veterinary pathology,
    vol. 44, no. 3, pp. 335–341, 2007.
    [30] L. B. Elston, F. A. Sueiro, J. N. Cavalcanti, and K. Metze, “The importance of the
    mitotic index as a prognostic factor for survival of canine cutaneous mast cell tumors:
    a validation study,” Veterinary pathology, vol. 46, no. 2, pp. 362–364, 2009.
    [31] E. Edmondson, A. Hess, and B. Powers, “Prognostic significance of histologic features
    in canine renal cell carcinomas: 70 nephrectomies,” Veterinary pathology,
    vol. 52, no. 2, pp. 260–268, 2015.
    [32] D. Meuten, F. Moore, and J. George, “Mitotic count and the field of view area: time
    to standardize,” 2016.
    [33] J. S. Meyer, C. Alvarez, C. Milikowski, N. Olson, I. Russo, J. Russo, A. Glass, B. A.
    Zehnbauer, K. Lister, R. Parwaresch, et al., “Breast carcinoma malignancy grading
    by bloom–richardson system vs proliferation index: reproducibility of grade and
    advantages of proliferation index,” Modern pathology, vol. 18, no. 8, pp. 1067–1078,
    2005.
    [34] L. Jiao, F. Zhang, F. Liu, S. Yang, L. Li, Z. Feng, and R. Qu, “A survey of deep
    learning-based object detection,” IEEE access, vol. 7, pp. 128837–128868, 2019.
    [35] Y. Hu, M. Modat, E. Gibson, W. Li, N. Ghavami, E. Bonmati, G. Wang, S. Bandula,
    C. M. Moore, M. Emberton, et al., “Weakly-supervised convolutional neural
    networks for multimodal image registration,” Medical image analysis, vol. 49, pp. 1–
    13, 2018.
    [36] C.-W. Wang, S.-C. Huang, Y.-C. Lee, Y.-J. Shen, S.-I. Meng, and J. L. Gaol, “Deep
    learning for bone marrow cell detection and classification on whole-slide images,”
    Medical Image Analysis, vol. 75, p. 102270, 2022.
    [37] H. Bilen and A. Vedaldi, “Weakly supervised deep detection networks,” in Proceedings
    of the IEEE conference on computer vision and pattern recognition, pp. 2846–
    2854, 2016.
    [38] Z. Yang, D. Mahajan, D. Ghadiyaram, R. Nevatia, and V. Ramanathan, “Activity
    driven weakly supervised object detection,” in Proceedings of the IEEE/CVF Conference
    on Computer Vision and Pattern Recognition, pp. 2917–2926, 2019.
    [39] C. Cao, Y. Huang, Y. Yang, L. Wang, Z. Wang, and T. Tan, “Feedback convolutional
    neural network for visual localization and segmentation,” IEEE transactions
    on pattern analysis and machine intelligence, vol. 41, no. 7, pp. 1627–1640, 2018.
    [40] C. Reta, L. Altamirano, J. A. Gonzalez, R. Diaz-Hernandez, H. Peregrina, I. Olmos,
    J. E. Alonso, and R. Lobato, “Segmentation and classification of bone marrow cells
    images using contextual information for medical diagnosis of acute leukemias,” PloS
    one, vol. 10, no. 6, p. e0130805, 2015.
    [41] P. Ghosh, D. Bhattacharjee, and M. Nasipuri, “Blood smear analyzer for white blood
    cell counting: a hybrid microscopic image analyzing technique,” Applied Soft Computing,
    vol. 46, pp. 629–638, 2016.
    [42] S. Mishra, B. Majhi, P. K. Sa, and L. Sharma, “Gray level co-occurrence matrix and
    random forest based acute lymphoblastic leukemia detection,” Biomedical Signal
    Processing and Control, vol. 33, pp. 272–280, 2017.
    [43] J. W. Choi, Y. Ku, B. W. Yoo, J.-A. Kim, D. S. Lee, Y. J. Chai, H.-J. Kong, and
    H. C. Kim, “White blood cell differential count of maturation stages in bone marrow
    smear using dual-stage convolutional neural networks,” PloS one, vol. 12, no. 12,
    p. e0189259, 2017.
    [44] K. Kimura, Y. Tabe, T. Ai, I. Takehara, H. Fukuda, H. Takahashi, T. Naito, N. Komatsu,
    K. Uchihashi, and A. Ohsaka, “A novel automated image analysis system
    using deep convolutional neural networks can assist to differentiate mds and aa,”
    Scientific reports, vol. 9, no. 1, pp. 1–9, 2019.
    [45] A. T. Sahlol, P. Kollmannsberger, and A. A. Ewees, “Efficient classification of white
    blood cell leukemia with improved swarm optimization of deep features,” Scientific
    Reports, vol. 10, no. 1, p. 2536, 2020.
    [46] Y. Xie, F. Xing, X. Shi, X. Kong, H. Su, and L. Yang, “Efficient and robust cell detection:
    A structured regression approach,” Medical image analysis, vol. 44, pp. 245–
    254, 2018.
    [47] B. Hu, Y. Tang, E. I.-C. Chang, Y. Fan, M. Lai, and Y. Xu, “Unsupervised learning
    for cell-level visual representation in histopathology images with generative adversarial
    networks,” IEEE journal of biomedical and health informatics, vol. 23, no. 3,
    pp. 1316–1328, 2018.
    [48] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
    A. Courville, and Y. Bengio, “Generative adversarial nets,” Advances in Neural
    Information Processing Systems, vol. 27, 2014.
    [49] T.-C. Yu, W.-C. Chou, C.-Y. Yeh, C.-K. Yang, S.-C. Huang, F. M. Tien, C.-Y. Yao,
    C.-L. Cheng, M.-K. Chuang, H.-F. Tien, et al., “Automatic bone marrow cell identification
    and classification by deep neural network,” Blood, vol. 134, p. 2084, 2019.
    [50] H. Chen, Q. Dou, X. Wang, J. Qin, and P. Heng, “Mitosis detection in breast cancer
    histology images via deep cascaded networks,” in Proceedings of the AAAI conference
    on artificial intelligence, vol. 30, 2016.
    [51] C. Li, X. Wang, W. Liu, and L. J. Latecki, “Deepmitosis: Mitosis detection via deep
    detection, verification and segmentation networks,” Medical image analysis, vol. 45,
    pp. 121–133, 2018.
    [52] D. K. Das and P. K. Dutta, “Efficient automated detection of mitotic cells from breast
    histological images using deep convolution neutral network with wavelet decomposed
    patches,” Computers in biology and medicine, vol. 104, pp. 29–42, 2019.
    [53] D. Cai, X. Sun, N. Zhou, X. Han, and J. Yao, “Efficient mitosis detection in breast
    cancer histology images by rcnn,” in 2019 IEEE 16th International Symposium on
    Biomedical Imaging (ISBI 2019), pp. 919–922, IEEE, 2019.
    [54] M. Z. Alom, T. Aspiras, T. M. Taha, T. Bowen, and V. K. Asari, “Mitosisnet: end-to-end
    mitotic cell detection by multi-task learning,” IEEE Access, vol. 8, pp. 68695–
    68710, 2020.
    [55] A. Sohail, A. Khan, N. Wahab, A. Zameer, and S. Khan, “A multi-phase deep cnn
    based mitosis detection framework for breast cancer histopathological images,” Scientific
    Reports, vol. 11, no. 1, pp. 1–18, 2021.
    [56] N. Aharon, R. Orfaig, and B.-Z. Bobrovsky, “Bot-sort: Robust associations multi-pedestrian
    tracking,” arXiv preprint arXiv:2206.14651, 2022.
    [57] W. Lv, S. Xu, Y. Zhao, G. Wang, J. Wei, C. Cui, Y. Du, Q. Dang, and Y. Liu, “Detrs
    beat yolos on real-time object detection,” arXiv preprint arXiv:2304.08069, 2023.
    [58] D. Gordon, A. Farhadi, and D. Fox, “Re3: Real-time recurrent regression networks
    for visual tracking of generic objects,” IEEE Robotics and Automation Letters, vol. 3,
    no. 2, pp. 788–795, 2018.
    [59] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, “Fully-convolutional
    siamese networks for object tracking,” in Computer Vision–ECCV
    2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings,
    Part II 14, pp. 850–865, Springer, 2016.
    [60] D. Held, S. Thrun, and S. Savarese, “Learning to track at 100 fps with deep regression
    networks,” in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam,
    The Netherlands, October 11–14, 2016, Proceedings, Part I 14, pp. 749–765,
    Springer, 2016.
    [61] H. Nam and B. Han, “Learning multi-domain convolutional neural networks for visual
    tracking,” in Proceedings of the IEEE conference on computer vision and pattern
    recognition, pp. 4293–4302, 2016.
    [62] S. Xu, X. Wang, W. Lv, Q. Chang, C. Cui, K. Deng, G. Wang, Q. Dang, S. Wei, Y. Du,
    et al., “Pp-yoloe: An evolved version of yolo,” arXiv preprint arXiv:2203.16250,
    2022.
    [63] X. Huang, X. Wang, W. Lv, X. Bai, X. Long, K. Deng, Q. Dang, S. Han, Q. Liu,
    X. Hu, et al., “Pp-yolov2: A practical object detector,” arXiv preprint arXiv:
    2104.10419, 2021.
    [64] D. Tellez, G. Litjens, P. Bándi, W. Bulten, J.-M. Bokhorst, F. Ciompi, and J. van der
    Laak, “Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology,” Medical image analysis,
    vol. 58, p. 101544, 2019.
    [65] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale
    hierarchical image database,” in 2009 IEEE Conference on Computer Vision and
    Pattern Recognition, pp. 248–255, IEEE, 2009.
    [66] I. Sobel, R. Duda, P. Hart, and J. Wiley, “Sobel-feldman operator,” preprint at
    https://www.researchgate.net/profile/Irwin-Sobel/publication/285159837, accessed 2022.
    [67] U. Gianelli, A. Bossi, I. Cortinovis, E. Sabattini, C. Tripodo, E. Boveri, A. Moro,
    R. Valli, M. Ponzoni, A. M. Florena, et al., “Reproducibility of the who histological
    criteria for the diagnosis of philadelphia chromosome-negative myeloproliferative
    neoplasms,” Modern Pathology, vol. 27, no. 6, pp. 814–822, 2014.
    [68] K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance:
    the clear mot metrics,” EURASIP Journal on Image and Video Processing, vol. 2008,
    pp. 1–10, 2008.
    [69] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, “Performance measures
    and a data set for multi-target, multi-camera tracking,” in European conference on
    computer vision, pp. 17–35, Springer, 2016.
    [70] V. Ulman, M. Maška, K. E. Magnusson, O. Ronneberger, C. Haubold, N. Harder,
    P. Matula, P. Matula, D. Svoboda, M. Radojevic, et al., “An objective comparison of
    cell-tracking algorithms,” Nature methods, vol. 14, no. 12, pp. 1141–1152, 2017.
    [71] Y. Li, C. Huang, and R. Nevatia, “Learning to associate: Hybridboosted multi-target
    tracker for crowded scene,” in 2009 IEEE conference on computer vision and pattern
    recognition, pp. 2953–2960, IEEE, 2009.
    [72] Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang,
    “Bytetrack: Multi-object tracking by associating every detection box,” in European
    Conference on Computer Vision, pp. 1–21, Springer, 2022.
    [73] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint
    arXiv:1804.02767, 2018.
    [74] R. Bao, N. M. Al-Shakarji, F. Bunyak, and K. Palaniappan, “Dmnet: Dual-stream
    marker guided deep network for dense cell segmentation and lineage tracking,”
    in Proceedings of the IEEE/CVF International Conference on Computer Vision,
    pp. 3361–3370, 2021.

    Full-text release date: 2073/08/10 (off-campus network)
    Full-text release date: 2073/08/10 (National Central Library: Taiwan thesis system)