
Graduate Student: 朱信 (Hsin Chu)
Thesis Title: 分析深度學習方法於傷口圖片分類 (A comparative study of deep learning approaches for wound image classification)
Advisor: 沈上翔 (Shan-Hsiang Shen)
Committee Members: 金台齡 (Tai-Lin Chin), 陳冠宇 (Kuan-Yu Chen), 黃琴雅 (Chin-Ya Huang), 沈上翔 (Shan-Hsiang Shen)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2019
Academic Year of Graduation: 107
Language: English
Pages: 57
Chinese Keywords: deep learning, machine learning, image classification, wound image classification
Foreign Keywords: Wound classification
Hits: 420; Downloads: 20
  • Because medical images are very difficult to obtain, wound image classification is a highly challenging task. Without the support of a large dataset, we studied different machine learning methods for image classification, including transfer learning, few-shot learning, and data augmentation. In our study, we investigated several representative convolutional neural network (CNN) architectures of recent years, including models that have achieved remarkable results on ImageNet. We compared the performance of VGG16, ResNets of different depths, ResNeXt, and the more recent SENet architecture; we also implemented few-shot learning algorithms and, building on AC-GAN, constructed a generator that produces fake data similar to the original images to assist classification. After experimenting with these computer vision methods, we explain the strength of transfer learning and the importance of pre-trained models. Based on the results, we propose a feasible wound recognition solution for today's medical professionals.
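The abstract above lists data augmentation as one way to compensate for a small medical dataset. As a minimal illustration of the classical, label-preserving side of that idea (the thesis's AC-GAN generator is a separate, learned component not shown here), the following NumPy sketch expands one image into several geometric variants; the function name `augment` is a hypothetical helper, not code from the thesis:

```python
import numpy as np

def augment(img):
    """Yield simple label-preserving variants of one image.

    img: (H, W, C) array. Rotations change the shape
    unless H == W, so square crops are assumed here.
    """
    yield img                      # original
    yield np.fliplr(img)           # horizontal flip
    yield np.flipud(img)           # vertical flip
    for k in (1, 2, 3):
        yield np.rot90(img, k)     # 90/180/270 degree rotations

# One square image becomes six training samples.
patch = np.zeros((64, 64, 3))
variants = list(augment(patch))
```

For a wound photo, all six variants keep the same class label, which is what makes this kind of augmentation safe for classification.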


    Wound image classification is the task of classifying different kinds of wounds. It is a very challenging task because medical images are difficult to collect. Without the support of a large-scale medical dataset, we evaluate several state-of-the-art machine learning methods, including transfer learning based on pre-trained models, few-shot learning, and data augmentation. In this work, we investigate the suitability of several recent convolutional neural network (CNN) architectures that have shown remarkable results on ImageNet. We compare the performance of VGG16, several ResNets, ResNeXt, and the recent SENet, and implement the ProtoNet and FEAT few-shot algorithms. For data augmentation, we use an AC-GAN as a generator of fake data for additional input. Our dataset, collected in our lab, consists of 21 different kinds of wounds. We apply these computer vision methods to our data. Importantly, the experiments demonstrate the robustness of transfer learning and the importance of pre-trained models. We identify a feasible approach to medical image recognition that can provide accurate results for medical experts.
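The abstract mentions ProtoNet among the implemented few-shot methods. At its core, a prototypical network averages the embeddings of each class's few support images into a prototype and assigns a query to the nearest prototype. A minimal NumPy sketch of that classification rule, operating on hypothetical pre-computed embeddings rather than the thesis's actual backbone:

```python
import numpy as np

def prototypes(support, labels):
    """Mean embedding per class.

    support: (N, D) support-set embeddings; labels: (N,) class ids.
    Returns (classes, (K, D) prototype matrix).
    """
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, classes, protos):
    """Assign each (M, D) query to the class of its nearest prototype
    under Euclidean distance."""
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 1, 1])
classes, protos = prototypes(support, labels)
pred = classify(np.array([[4.9, 5.0]]), classes, protos)
```

Because only the class means are learned-free at episode time, this rule needs no gradient updates for new wound classes, which is what makes it attractive when only a handful of labeled images exist per class.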

    Contents: Introduction, Related Work, Method, Evaluation, Conclusion

