
Author: Yu-Hsiang Hsueh (薛宇翔)
Thesis Title: A Deep Learning Approach to Quantitative Precipitation Estimation Mainly Using Weather Radar Data: Taking the Plum Rain Season in Taiwan as an Example
Advisor: Chin-Shyurng Fahn (范欽雄)
Committee Members: Rong-Tang Huang (黃榮堂), Jun-Wei Hsieh (謝君偉), Huei-Wen Ferng (馮輝文)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Number of Pages: 57
Keywords (Chinese): 深度學習、降雨估計、生成對抗網路、氣象雷達資料、風場資料、梅雨季
Keywords (English): Deep learning, precipitation estimation, generative adversarial network, weather radar data, wind field data, plum rain season



Quantitative precipitation estimation is a very important technique in meteorology. Using it, we can obtain two-dimensional rainfall information, which greatly aids understanding of the regional distribution of rainfall. In the past, the Central Weather Bureau usually synthesized rainfall estimation images only from radar reflectivity and station rain gauges, but the quality of the synthesis was easily limited by the spatial distribution of the rain gauges. Therefore, in this thesis, we apply a deep learning model to quantitative precipitation estimation, focusing on the Plum Rain Season in Taiwan, with the aim of learning quantitative precipitation estimation from massive amounts of meteorological data.
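
To make the traditional baseline concrete, the following is a minimal sketch of converting radar reflectivity to a rain rate with a Z-R power law. The Marshall-Palmer coefficients (a = 200, b = 1.6) are a common textbook default and are assumed here for illustration; they are not necessarily the operational values used by the Central Weather Bureau.

import numpy as np

def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
    # Z-R power law: Z = a * R**b, with Z = 10**(dBZ / 10) in linear units,
    # so R = (Z / a)**(1 / b) in mm/h. Coefficients are assumed defaults.
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z_linear / a) ** (1.0 / b)

# Example: a 40 dBZ echo maps to roughly 11.5 mm/h under these coefficients.
print(reflectivity_to_rain_rate(40.0))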
Existing deep learning approaches to quantitative precipitation estimation have typically used only radar reflectivity or satellite observation data. This thesis attempts to employ wind field data together with three types of weather radar data: radar reflectivity, differential reflectivity, and specific differential phase. The aim is for the deep learning model to extract more features from these multivariate data and thereby make more accurate predictions. To achieve this, we propose an image generation model based on the pix2pix architecture, which consists of a generator and a discriminator. The generator produces the corresponding rainfall estimation image from the input weather radar data and wind field data, whereas the discriminator judges whether its input image is real. Through the competition between the two, the generator learns to produce precipitation estimation images that are both consistent with the input information and realistic. Furthermore, a flexible input architecture enables the model to operate in environments with or without wind field data.
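
To illustrate this generator-discriminator pairing, below is a heavily reduced PyTorch-style sketch. The channel counts (three radar products plus two wind components), the network depth, and the zero-filling of missing wind channels are assumptions made for exposition; they are not the thesis's actual tensor layout or implementation.

import torch
import torch.nn as nn

RADAR_CH, WIND_CH = 3, 2  # assumed: three radar products, u/v wind components

class TinyUNetGenerator(nn.Module):
    # A one-level U-Net-style generator with a single skip connection.
    def __init__(self, in_ch):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
                                  nn.LeakyReLU(0.2))
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1),
                                nn.ReLU())
        self.out = nn.Conv2d(64 + in_ch, 1, 3, padding=1)  # 1-channel rainfall map

    def forward(self, x):
        h = self.up(self.down(x))
        return self.out(torch.cat([h, x], dim=1))  # skip: concat input features

class PatchDiscriminator(nn.Module):
    # PatchGAN-style discriminator: emits real/fake logits per image patch.
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1))

    def forward(self, condition, rainfall):
        return self.net(torch.cat([condition, rainfall], dim=1))

# Flexible input: zero-filled wind channels stand in when wind data are absent,
# one plausible way to let a single model run with or without wind fields.
radar = torch.randn(1, RADAR_CH, 128, 128)
wind = torch.zeros(1, WIND_CH, 128, 128)  # replace with real u/v when available
x = torch.cat([radar, wind], dim=1)
G = TinyUNetGenerator(RADAR_CH + WIND_CH)
D = PatchDiscriminator(RADAR_CH + WIND_CH)
fake = G(x)
print(fake.shape, D(x, fake).shape)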
For the experiments, we consider conditions with and without wind field data and compare our proposed model with U-Net and pix2pix. The results reveal that our proposed model performs well and utilizes wind field data more effectively than the other two models. Evaluating quantitative precipitation estimation in the environments with/without wind field data, our model attains a mean square error of 1.031/1.049, lower than the 2.458/1.213 obtained by U-Net and the 1.299/1.294 obtained by pix2pix. In addition, the critical success index, probability of detection, and false alarm rate are 0.566/0.566, 0.716/0.723, and 0.271/0.277, respectively, which are superior to the 0.504/0.526, 0.628/0.666, and 0.282/0.285 obtained by U-Net, and mostly better than the 0.518/0.527, 0.650/0.659, and 0.282/0.274 obtained by pix2pix.
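
For reference, the four reported metrics can be computed from a predicted and an observed rainfall grid as in the sketch below; the 1.0 mm rain/no-rain threshold for the categorical scores is an assumed value, since the abstract does not state the threshold used.

import numpy as np

def qpe_metrics(pred, obs, threshold=1.0):
    # Mean square error over the whole grid.
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    # Contingency counts at an assumed rain/no-rain threshold (mm).
    hits = np.sum((pred >= threshold) & (obs >= threshold))
    misses = np.sum((pred < threshold) & (obs >= threshold))
    false_alarms = np.sum((pred >= threshold) & (obs < threshold))
    csi = hits / (hits + misses + false_alarms)  # critical success index
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm rate (ratio)
    return mse, csi, pod, far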

Contents

Chinese Abstract (中文摘要)
Abstract
Acknowledgments (致謝)
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 System Description
  1.4 Thesis Organization
Chapter 2 Related Work
  2.1 Quantitative Precipitation Estimation Based on Traditional Methods
  2.2 Quantitative Precipitation Estimation Based on Deep Learning
    2.2.1 Artificial neural networks
    2.2.2 Convolutional neural networks
    2.2.3 Generative adversarial networks
Chapter 3 Our Proposed Method for Quantitative Precipitation Estimation
  3.1 Data Preprocessing
  3.2 Quantitative Precipitation Estimation Model
    3.2.1 U-Net generator
    3.2.2 PatchGAN discriminator
    3.2.3 GAN loss function
Chapter 4 Experimental Results and Discussion
  4.1 Experimental Environment Setup
  4.2 Data Description
    4.2.1 Radar dataset
    4.2.2 WISSDOM wind field dataset
    4.2.3 Rainfall dataset
  4.3 Data Visualization
    4.3.1 Data visualization of radar dataset
    4.3.2 Data visualization of wind field dataset
    4.3.3 Data visualization of rainfall dataset
  4.4 Results of Quantitative Precipitation Estimation
    4.4.1 Evaluation metrics
    4.4.2 Comparison of our model and the others
    4.4.3 Ablation experiments
Chapter 5 Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
References

[1] P. Nguyen et al., “The CHRS Data Portal, an easily accessible public repository for PERSIANN global satellite precipitation data,” Scientific Data, vol. 6, no. 1, pp. 1-10, 2019.
[2] M. B. Ba and A. Gruber, “GOES multispectral rainfall algorithm (GMSRA),” Journal of Applied Meteorology, vol. 40, no. 8, pp. 1500-1514, 2001.
[3] K.-L. Hsu et al., “Precipitation estimation from remotely sensed information using artificial neural networks,” Journal of Applied Meteorology and Climatology, vol. 36, no. 9, pp. 1176-1190, 1997.
[4] A. Akbari Asanjan et al., “Short‐term precipitation forecast based on the PERSIANN system and LSTM recurrent neural networks,” Journal of Geophysical Research: Atmospheres, vol. 123, no. 22, pp. 12,543-12,563, 2018.
[5] T. Vandal, E. Kodra, and A. R. Ganguly, “Intercomparison of machine learning methods for statistical downscaling: The case of daily and extreme precipitation,” Theoretical and Applied Climatology, vol. 137, no. 1, pp. 557-570, 2019.
[6] Y. Tao et al., “Precipitation identification with bispectral satellite information using deep learning approaches,” Journal of Hydrometeorology, vol. 18, no. 5, pp. 1271-1283, 2017.
[7] Y. Liu et al., “Application of deep convolutional neural networks for detecting extreme weather in climate datasets,” May 2016. [Online]. Available: https://arxiv.org/abs/1605.01156
[8] E. Shi et al., “A method of weather radar echo extrapolation based on convolutional neural networks,” in Proceedings of the International Conference on Multimedia Modeling, Bangkok, Thailand, 2018, pp. 16-28.
[9] G. Ayzel et al., “All convolutional neural networks for radar-based precipitation nowcasting,” Procedia Computer Science, vol. 150, pp. 186-192, 2019.
[10] X. Shi et al., “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[11] Y. Tao et al., “Deep neural networks for precipitation estimation from remotely sensed information,” in Proceedings of the 2016 IEEE Congress on Evolutionary Computation, Vancouver, Canada, 2016, pp. 1349-1355.
[12] Y. LeCun et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, 2012.
[14] M. I. Jordan, “Serial order: A parallel distributed processing approach,” Advances in Psychology, vol. 121, pp. 471-495, 1997.
[15] Y. Pu et al., “Variational autoencoder for deep learning of images, labels and captions,” Advances in Neural Information Processing Systems, vol. 29, 2016.
[16] I. Goodfellow et al., “Generative adversarial nets,” Advances in Neural Information Processing Systems, vol. 27, 2014.
[17] Y. Hong et al., “Precipitation estimation from remotely sensed imagery using an artificial neural network cloud classification system,” Journal of Applied Meteorology, vol. 43, no. 12, pp. 1834-1853, 2004.
[18] H. Ashouri et al., “PERSIANN-CDR: Daily precipitation climate data record from multisatellite observations for hydrological and climate studies,” Bulletin of the American Meteorological Society, vol. 96, no. 1, pp. 69-83, 2015.
[19] A. Behrangi and Y. Wen, “On the spatial and temporal sampling errors of remotely sensed precipitation products,” Remote Sensing, vol. 9, no. 11, p. 1127, 2017.
[20] G. Tang et al., “Documentation of multifactorial relationships between precipitation and topography of the Tibetan Plateau using spaceborne precipitation radars,” Remote Sensing of Environment, vol. 208, pp. 82-96, 2018.
[21] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115-133, 1943.
[22] Q. Yuan et al., “Deep learning in environmental remote sensing: Achievements and challenges,” Remote Sensing of Environment, vol. 241, 2020.
[23] N. Jmour, S. Zayen, and A. Abdelkrim, “Convolutional neural networks for image classification,” in Proceedings of the 2018 International Conference on Advanced Systems and Electric Technologies, Hammamet, Tunisia, 2018, pp. 397-402.
[24] J. Bullock, C. Cuesta-Lázaro, and A. Quera-Bofarull, “XNet: A convolutional neural network (CNN) implementation for medical X-Ray image segmentation suitable for small datasets,” Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 10953, pp. 453-463, 2019.
[25] J. Redmon et al., “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Nevada, 2016, pp. 779-788.
[26] Y. Chen et al., “Deep feature extraction and classification of hyperspectral images based on convolutional neural networks,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 10, pp. 6232-6251, 2016.
[27] C. Miao et al., “Evaluation of the PERSIANN-CDR daily rainfall estimates in capturing the behavior of extreme precipitation events over China,” Journal of Hydrometeorology, vol. 16, no. 3, pp. 1387-1396, 2015.
[28] M. Sadeghi et al., “PERSIANN-CNN: Precipitation estimation from remotely sensed information using artificial neural networks–convolutional neural networks,” Journal of Hydrometeorology, vol. 20, no. 12, pp. 2273-2289, 2019.
[29] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” Nov. 2014. [Online]. Available: https://arxiv.org/abs/1411.1784
[30] P. Isola et al., “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, 2017, pp. 1125-1134.
[31] N. Hayatbini et al., “Conditional generative adversarial networks (cGANs) for near real-time precipitation estimation from multispectral GOES-16 satellite imageries—PERSIANN-cGAN,” Remote Sensing, vol. 11, no. 19, p. 2193, 2019.
[32] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Proceedings of the International Conference on Medical Image Computing and Computer-assisted Intervention, Munich, Germany, 2015, pp. 234-241.

Full text available from 2027/07/22 (campus network)
Full text available from 2032/07/22 (off-campus network)
Full text available from 2032/07/22 (National Central Library: Taiwan NDLTD system)