
Author: Te-Hsiu Wang (王德修)
Thesis Title: NATE: A Neural Attention-based Hypocenter Estimation Framework (Chinese title: NATE:基於神經網路注意力機制的震源推測架構)
Advisor: Tai-Lin Chin (金台齡)
Oral Examination Committee: Yih-Min Wu (吳逸民), Kuan-Yu Chen (陳冠宇), Da-Yi Chen (陳達毅)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 55
Keywords (Chinese): 深度學習、神經網路注意力、震源推測 (deep learning, neural network attention, hypocenter estimation)
Keywords (English): deep learning, neural attention, hypocenter estimation
    Abstract in Chinese: Because of frequent collisions between the Eurasian Plate and the Philippine Sea Plate, Taiwan lies in one of the most seismically active zones in the world, and many destructive earthquakes in the past have caused severe casualties. To mitigate earthquake hazards, the Central Weather Bureau of Taiwan operates an earthquake early warning (EEW) system that issues alerts before strong shaking arrives. The current EEW system uses an inversion algorithm to iteratively estimate the location and origin time of an earthquake. To issue an alert as quickly as possible during real-time warning, the system relies only on the P-wave arrival times from a few stations near the hypocenter; because too few stations' P-wave arrivals are available for the inversion, the location results can be unstable. In this study, we propose an attention-based neural network model to improve the EEW system. The model is trained on earthquake events from 2016 to 2017 and evaluated on earthquakes from 2018. In the experiments, the average epicenter error and the average depth error are about 2.37 km and 3.20 km, respectively.


    Abstract in English: Taiwan lies in one of the most active seismic zones in the world because of the strong collision between the Eurasian Plate and the Philippine Sea Plate. In the past, many damaging earthquakes have caused severe casualties. To mitigate the hazard, an earthquake early warning (EEW) system has been operated by the Central Weather Bureau of Taiwan (CWB) to issue alarms before strong ground shaking arrives. Currently, the EEW system uses an inverse method to iteratively estimate the locations and origin times of earthquakes. In order to issue a timely earthquake alarm, the EEW system takes advantage of the P-wave arrivals from only a few stations near the epicenter. However, the results may not be stable because there are not enough P-wave arrivals to solve the inverse problem during location estimation. To mitigate this problem, we propose a neural attention-based hypocenter estimation framework (NATE) to improve the current EEW system. The proposed framework is trained with earthquakes that occurred from 2016 to 2017 and evaluated with those of 2018. In the simulation test, the average epicenter error and the average depth error are about 2.37 km and 3.20 km, respectively.
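
    The abstract only sketches the idea of replacing the iterative inversion with an attention-based neural network that maps P-wave arrivals from a few stations to a hypocenter. As a rough illustration of that idea only, and not the architecture described in Chapter 3 of the thesis, the minimal TensorFlow/Keras sketch below regresses latitude, longitude, and depth from per-station features; the station count, the choice of features, the layer sizes, and the single self-attention layer are all assumptions made for this example.

    import tensorflow as tf

    # Illustrative sketch only; NATE's actual input, inference, and prediction
    # blocks are defined in the thesis. Here each earthquake is represented by
    # a few stations, and each station by a small feature vector (assumed to be
    # station latitude, longitude, and P-wave arrival time).
    NUM_STATIONS = 6          # assumed: "a few stations near the epicenter"
    FEATURES_PER_STATION = 3  # assumed per-station features

    stations = tf.keras.Input(shape=(NUM_STATIONS, FEATURES_PER_STATION))
    x = tf.keras.layers.Dense(64, activation="relu")(stations)
    # Self-attention: every station attends to every other station's embedding.
    x = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    # Regression head for epicenter latitude, longitude, and focal depth.
    hypocenter = tf.keras.layers.Dense(3)(x)

    model = tf.keras.Model(stations, hypocenter)
    model.compile(optimizer="adam", loss="mse")
    model.summary()

    Pooling over the station axis keeps this toy model insensitive to station ordering, which matches the intuition that the set of triggered stations, rather than their order, determines the hypocenter.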

    Table of Contents:
    Abstract in Chinese
    Abstract in English
    Acknowledgements
    Contents
    List of Figures
    List of Tables
    1 Introduction
    2 Related Work
    3 A Neural Attention-based Hypocenter Estimation Framework
      3.1 The Proposed Methodology
      3.2 Input Block
      3.3 Inference Block
      3.4 Prediction Block
      3.5 The Design of the Training Objective
    4 System Configuration
      4.1 Dataset
      4.2 Parameter selection
        4.2.1 Loss function
        4.2.2 Number of neurons
        4.2.3 Number of self-attentions
        4.2.4 Number of transformer units
        4.2.5 Number of attentions
    5 Performance Evaluations
      5.1 Performance of NATE
      5.2 Case Study
      5.3 Potential for early warning
      5.4 Comparison with traditional algorithm
      5.5 Generalization to real operation
      5.6 Comparison with TCPD
    6 Conclusion
    References


    Full-text release date: 2025/08/11 (campus network)
    Full-text release date: 2025/08/11 (off-campus network)
    Full-text release date: 2025/08/11 (National Central Library: Taiwan NDLTD system)