| Field | Value |
|---|---|
| Graduate student | 曾文一 (Wen-Yi Zeng) |
| Thesis title | Wasserstein GAN with Optimal Discriminator (具有最佳鑑別器的Wasserstein對抗生成網路) |
| Advisor | 謝松年 (Sung-Nien Hsieh) |
| Committee members | 林士駿 (Shih-Chun Lin), 黃昱智 (Yu-Chih Huang) |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science — Department of Electronic and Computer Engineering |
| Year of publication | 2022 |
| Academic year of graduation | 110 |
| Language | English |
| Pages | 39 |
| Chinese keywords | 對抗生成網路 (generative adversarial network), 鑑別器 (discriminator), 最佳 (optimal), 傳輸 (transport) |
| Foreign keywords | Wasserstein, GAN, optimal, discriminator |
In this thesis, we seek an alternative to the discriminator function.
The goal of a GAN is to find a generator whose output distribution is close to the distribution of the target data.
The discriminator is an adversarial component introduced to train the generative model, and training the generative model amounts to finding a specific distribution in a high-dimensional space.
The Wasserstein distance was introduced as a way to measure the divergence between distributions.
Using the optimal transport map, we can construct a function that approximates the discriminator.
With this approximation, it is no longer necessary to update the discriminator's weights, which avoids training instability.
We focus mainly on the one-dimensional case, and numerical examples show that our closed-form WGAN parameters exhibit good convergence behavior on empirical data, even under the Laplace distribution.
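The one-dimensional closed-form structure the abstract alludes to can be illustrated with standard 1-D optimal transport facts (a generic sketch, not the thesis's actual construction): for empirical samples, the Wasserstein-1 distance reduces to pairing sorted samples, and an optimal 1-Lipschitz Kantorovich potential — the role a WGAN discriminator plays — has slope sign(F_q(t) − F_p(t)), so it can be written down directly without training a network. The Laplace target and the `kantorovich_potential` helper below are illustrative choices.

```python
import numpy as np

def w1_empirical(x, y):
    """Wasserstein-1 distance between two equal-size 1-D samples.

    In 1-D the optimal coupling matches sorted samples, so W1 is just
    the mean absolute difference of the order statistics.
    """
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def kantorovich_potential(x_eval, p, q, n_grid=2001):
    """Closed-form 1-Lipschitz potential with f'(t) = sign(F_q(t) - F_p(t)).

    Built on a grid from the empirical CDFs of p and q, then linearly
    interpolated at the evaluation points x_eval.
    """
    grid = np.linspace(min(p.min(), q.min()), max(p.max(), q.max()), n_grid)
    Fp = np.searchsorted(np.sort(p), grid, side="right") / len(p)
    Fq = np.searchsorted(np.sort(q), grid, side="right") / len(q)
    slope = np.sign(Fq - Fp)  # potential's derivative, in {-1, 0, +1}
    f = np.concatenate(([0.0], np.cumsum(slope[:-1] * np.diff(grid))))
    return np.interp(x_eval, grid, f)

rng = np.random.default_rng(0)
p = rng.laplace(0.0, 1.0, 10_000)  # target data: Laplace, as in the abstract
q = rng.normal(0.5, 1.0, 10_000)   # stand-in for the generator's output

primal = w1_empirical(p, q)
# Kantorovich-Rubinstein dual value E_p[f] - E_q[f]; it should match W1
dual = kantorovich_potential(p, p, q).mean() - kantorovich_potential(q, p, q).mean()
print(primal, dual)
```

Because the potential is given in closed form from the empirical CDFs, the dual value agrees with the primal sorted-sample distance up to grid discretization error, which is the sense in which no discriminator updates are needed.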