
Student: 白家鴻 (Chia-Hung Bai)
Thesis Title: 漸進式關聯激發法之智慧農場應用
Progressive Contextual Excitation for Smart Farming Application
Advisors: 方文賢 (Wen-Hsien Fang)
呂政修 (Jenq-Shiou Leu)
Committee Members: 方文賢 (Wen-Hsien Fang)
呂政修 (Jenq-Shiou Leu)
陳省隆 (Hsing-Lung Chen)
陳郁堂 (Yie-Tarng Chen)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electronic and Computer Engineering
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Number of Pages: 43
Chinese Keywords: 深度學習、漸進式關聯激發、智慧農業應用、注意力機制、細粒度影像分類
English Keywords: deep learning, progressive contextual excitation, smart farming application, attention mechanism, fine-grained image classification
  • This thesis aims to discriminate different categories of cocoa beans for use in smart agriculture. The key challenge in smart farming applications is distinguishing the subtle differences among categories; at times these small differences can make the agricultural products taste markedly different.
    Our proposed scheme is designed to construct a more robust representation that better exploits the information conveyed within the features. The key concept is to adaptively accumulate contextual representations to obtain the relevant channel coefficients. Specifically, we introduce a contextual memory cell that progressively and intelligently selects contextual channel-wise statistics, and then uses the accumulated contextual statistics to explore the channel-coefficient relationships among hidden channel states.
    Accordingly, we propose the progressive contextual excitation (PCE) module [1], which adopts a channel-attention-based architecture to simultaneously correlate contextual channel-wise relationships. The progressive manner realized by the contextual memory cell retains more detailed information and thereby effectively guides the high-level representation, which helps discriminate the small variations encountered in smart farming application tasks. Finally, we evaluate and analyze our module on a cocoa bean dataset containing fine-grained cocoa bean categories; compared with five existing modules, our PCE module shows a significant advantage in accuracy.


    This thesis addresses a smart farming application that targets discriminating distinct cocoa bean categories. In smart farming applications, one critical issue is how to distinguish the small differences among categories. Our proposed scheme is designed to construct a more robust representation to better leverage textual information. The key concept is to adaptively accumulate contextual representations to obtain contextual channel attention. Specifically, we introduce a contextual memory cell to progressively select the contextual channel-wise statistics. The accumulated contextual statistics are then used to explore the channel-wise relationship, which implicitly correlates contextual channel states. Accordingly, we propose the progressive contextual excitation (PCE) module [1], which employs a channel-attention-based architecture to simultaneously correlate the contextual channel-wise relationships. The progressive manner realized by the contextual memory cell efficiently guides the high-level representation while keeping more detailed information, which helps discriminate small variations in tackling the smart farming application task. We evaluate our model on the cocoa beans dataset, which comprises fine-grained cocoa bean categories. The experiments show a significant boost compared with existing approaches.
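    The mechanism described above (squeeze each stage's feature map into channel statistics, accumulate them progressively with a GRU-like memory cell, then excite channels with a sigmoid gate) can be sketched as a toy NumPy version. All names, weight shapes, and the assumption that every stage shares one channel count are illustrative, not the thesis's actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pce_attention(stage_feats, W_z, W_h):
    """Toy progressive contextual excitation (hypothetical sketch).

    stage_feats: list of (C, H, W) arrays from successive backbone stages
                 (assumed here to share the same channel count C).
    W_z, W_h:    (C, 2C) weights of a GRU-like contextual memory cell.
    Returns (C,) channel attention coefficients in (0, 1).
    """
    C = stage_feats[0].shape[0]
    h = np.zeros(C)                       # contextual memory state
    for feat in stage_feats:
        s = feat.mean(axis=(1, 2))        # squeeze: channel-wise statistics
        x = np.concatenate([s, h])        # current statistics + accumulated context
        z = sigmoid(W_z @ x)              # update gate: how much new context to keep
        h_cand = np.tanh(W_h @ x)         # candidate memory state
        h = (1.0 - z) * h + z * h_cand    # progressive accumulation
    return sigmoid(h)                     # excitation: per-channel coefficients

# Usage: reweight the last stage's channels by the accumulated attention.
rng = np.random.default_rng(0)
feats = [rng.normal(size=(8, 4, 4)) for _ in range(3)]
W_z = 0.1 * rng.normal(size=(8, 16))
W_h = 0.1 * rng.normal(size=(8, 16))
att = pce_attention(feats, W_z, W_h)
reweighted = feats[-1] * att[:, None, None]
```

    The sigmoid keeps each coefficient in (0, 1), so the reweighting only attenuates channels, mirroring squeeze-and-excitation style gating; the memory cell is what makes the gating depend on earlier, more detailed stages rather than the last stage alone.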

    Recommendation Letter
    Approval Letter
    Abstract in Chinese
    Abstract in English
    Acknowledgements
    Contents
    List of Figures
    List of Tables
    List of Algorithms
    1 Introduction
      1.1 Motivation and Purpose
      1.2 Summary
    2 Related Works
    3 Recurrent Neural Network
      3.1 GRU
    4 Convolutional Neural Network
      4.1 ResNet
    5 Attention Mechanism
      5.1 SE-Net
    6 Method
      6.1 Feature Extraction
      6.2 Progressive Contextual Excitation
    7 Experiments
      7.1 Dataset
      7.2 Implementation Details
      7.3 Comparison with other approaches
      7.4 Visualization
    8 Conclusions
      8.1 Future Work
    References

    [1] C.-H. Bai, S. W. Prakosa, H.-Y. Hsieh, and J.-S. Leu, “Progressive contextual excitation for smart farming application,” in CAIP, 2021.
    [2] R. M. Haralick, K. S. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst. Man Cybern., vol. 3, no. 6, pp. 610–621, 1973.
    [3] B. S. Kumari, R. A. Kumar, M. Abhijeet, and S. P. Kumar, “Identification, classification & grading of fruits using machine learning & computer intelligence: a review,” Journal of Ambient Intelligence and Humanized Computing, 2020.
    [4] H. M. Zawbaa, M. Hazman, M. Abbass, and A. E. Hassanien, “Automatic fruit classification using random forest algorithm,” in HIS, pp. 164–168, 2014.
    [5] A. Yro, C. E. N’zi, and K. Kpalma, “Cocoa beans fermentation degree assessment for quality control using machine vision and multiclass svm classifier,” International Journal of Innovation and Applied Studies, pp. 1711–1717, 2018.
    [6] A. I. Wayan, S. Mohamad, K. Andri, and W. Yunindri, “Determination of cocoa bean quality with image processing and artificial neural network,” in AFITA, 2010.
    [7] M. S. Hossain, M. Al-Hammadi, and G. Muhammad, “Automatic fruit classification using deep learning for industrial applications,” IEEE Transactions on Industrial Informatics, pp. 1027–1034, 2019.
    [8] M. S. Mahajan, “Optimization and classification of fruit using machine learning algorithm,” IJIRST, 2016.
    [9] J. Tan, B. Balasubramanian, D. Sukha, S. Ramkissoon, and P. Umaharan, “Sensing fermentation degree of cocoa (Theobroma cacao L.) beans by machine learning classification models based electronic nose system,” Journal of Food Process Engineering, 2019.
    [10] M. Bacco, P. Barsocchi, E. Ferro, A. Gotta, and M. Ruggeri, “The digitisation of agriculture: a survey of research activities on smart farming,” Array, vol. 3-4, p. 100009, 2019.
    [11] Y. Adhitya, S. W. Prakosa, M. Köppen, and J.-S. Leu, “Feature extraction for cocoa bean digital image classification prediction for smart farming application,” Agronomy, 2020.
    [12] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, pp. 770–778, 2016.
    [13] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in CVPR, pp. 7132–7141, 2018.
    [14] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in ICLR, 2015.
    [15] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in CVPR, pp. 1–9, 2015.
    [16] M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas, “Learning where to attend with deep architectures for image tracking,” Neural Comput., vol. 24, no. 8, pp. 2151–2184, 2012.
    [17] Y. Tang, N. Srivastava, and R. Salakhutdinov, “Learning generative models with visual attention,” in NIPS, pp. 1808–1816, 2014.
    [18] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” in ICLR, 2015.
    [19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in NIPS, pp. 5998–6008, 2017.
    [20] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio, “Show, attend and tell: neural image caption generation with visual attention,” in ICML, vol. 37, pp. 2048–2057, 2015.
    [21] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola, “Stacked attention networks for image question answering,” in CVPR, pp. 21–29, 2016.
    [22] H. Shi, H. Li, F. Meng, and Q. Wu, “Key-word-aware network for referring expression image segmentation,” in ECCV, vol. 11210, pp. 38–54, 2018.
    [23] B. T. Loo, T. Condie, M. Garofalakis, D. E. Gay, J. M. Hellerstein, P. Maniatis, R. Ramakrishnan, T. Roscoe, and I. Stoica, “Declarative networking: language, execution and optimization,” in Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data, pp. 97–108, 2006.
    [24] Badan Standardisasi Nasional (BSN), Biji kakao, SNI 2323:2008, ICS 1.67.140.30; Badan Standardisasi Nasional: Jakarta, Indonesia, 2008.
    [25] B. Zhou, A. Khosla, À. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in CVPR, pp. 2921–2929, 2016.

    Full-text release date: 2027/02/08 (campus network)
    Full-text release date: 2027/02/08 (off-campus network)
    Full-text release date: 2027/02/08 (National Central Library: Taiwan NDLTD system)