| Field | Value |
|---|---|
| Graduate Student | 康晉睿 Jin-Rui Kang |
| Thesis Title | 營建工程模擬結果校核之研究 (Research on Calibration of Simulation Results in Construction Engineering) |
| Advisor | 呂守陞 Sou-Sen Leu |
| Committee Members | 張家瑞 Jia-Ruei Chang, 謝佑明 Yo-Ming Hsieh, 洪嫦闈 Chang-Wei Hung, 呂守陞 Sou-Sen Leu |
| Degree | Master |
| Department | Department of Civil and Construction Engineering, College of Engineering |
| Year of Publication | 2023 |
| Academic Year | 111 |
| Language | Chinese |
| Pages | 41 |
| Keywords | Expectation-Maximization Algorithm, Genetic Algorithm, Error Separation, Model Error, Measurement Error |
With the advancement of science and engineering technology, numerical simulation has become increasingly important in many fields. However, model error and measurement error both cause the data to deviate from the true values. Model error arises from the gap between the mathematical model used for design and analysis and the real situation, while measurement error stems from instrument accuracy, uncertainty in human operation, and changing environmental conditions during data collection. In construction engineering, discrepancies between numerical simulations and true values originate not only from the measurement errors fed into the model as inputs, but also from the model itself: an inappropriate choice of mathematical model, overfitting or underfitting, faulty assumptions, or incorrect parameter estimation can all produce model error. Many methods have been developed to reduce such errors, such as model-fusion techniques from ensemble learning and the Kalman filter.
This study introduces the concept of mixing proportions, assuming the observed data arise from a mixture of two normal distributions, one containing the model error and the other the measurement error. The Expectation-Maximization (EM) algorithm and a Genetic Algorithm (GA) are used to obtain optimal estimates of the parameters of the two distributions. The results show that the model-error and measurement-error distributions can be separated to a certain degree, thereby achieving the ultimate goal of error reduction.
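The mixture idea in the abstract can be illustrated with a minimal sketch (this is not the thesis's actual implementation; the function name `em_two_gaussians` and the synthetic data are illustrative assumptions): observed deviations are treated as draws from a two-component one-dimensional Gaussian mixture, one component standing in for model error and the other for measurement error, and the mixing proportion, means, and standard deviations are estimated with the EM algorithm.

```python
# Minimal sketch of separating two error distributions with EM,
# assuming a two-component 1-D Gaussian mixture (not the author's code).
import numpy as np

def em_two_gaussians(x, n_iter=200, tol=1e-8):
    """Fit a 2-component 1-D Gaussian mixture by EM.

    Returns (pi, mu, sigma): mixing weights, means, and standard
    deviations of the two components, each as a length-2 array.
    """
    x = np.asarray(x, dtype=float)
    # Crude initialization: split the data at its quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = np.stack([
            pi[k] / (np.sqrt(2 * np.pi) * sigma[k])
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])                               # shape (2, n)
        total = dens.sum(axis=0)
        resp = dens / total
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=1)
        pi = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-12
        # Stop once the log-likelihood no longer improves.
        ll = np.log(total).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi, mu, sigma

# Usage with synthetic data: a "model error" component N(2, 1) mixed
# with a "measurement error" component N(0, 0.3).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(2.0, 1.0, 600),
                       rng.normal(0.0, 0.3, 400)])
pi, mu, sigma = em_two_gaussians(data)
```

The GA alternative mentioned in the abstract would instead search the same five parameters (mixing weight, two means, two standard deviations) by maximizing this log-likelihood as a fitness function.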