
Graduate Student: 林沿榆 (Rina Savista Halim)
Thesis Title: Chinese Painting Koi Animation with Controllable Brush Stroke using Generative Adversarial Networks
Advisor: 姚智原 (Chih-Yuan Yao)
Committee Members: 阮聖彰 (Shanq-Jang Ruan), 朱宏國 (Hung-Kuo Chu)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2019
Academic Year of Graduation: 107
Language: English
Number of Pages: 64
Keywords: animation, Chinese painting, Non-photorealistic Rendering, Generative Adversarial Networks
Statistics: 343 views, 0 downloads
Abstract:
Producing a Chinese painting animation in which artists must create every frame by hand is time-consuming. We propose a system that generates a Chinese painting animation automatically, together with several user interactions. The inputs to the system are a 3D koi (decorative carp) model and background models, e.g., lotus and leaf models. The system automatically creates the user-interactive ripples and streamlines caused by the swimming koi, and it also provides a boat scene with the wave-line caused by the boat's movement.

The user-interactive ripple combines the mass-spring model and image-space computation for ripple simulation proposed by Zhang and Yang [1] with the Navier-Stokes equations as used by Stam [2]. The streamline's base shape is built from multiple continuous ripples with contour extraction applied; a closing and blur effect is then applied to the base shape to create the final streamline. The wave-line of the boat is created with the Navier-Stokes equations, with an added force and a specific velocity at every frame.
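The ripple propagation described above can be illustrated with a simple two-buffer height-field update, a common simplification of mass-spring / image-space ripple schemes such as Zhang and Yang's [1]. This is a minimal sketch, not the thesis implementation; the grid size, damping constant, and function names are illustrative assumptions:

```python
# Two-buffer height-field ripple: each cell's new height is driven by the
# average of its four neighbours minus its previous height, then damped so
# the ripple fades out over time. All constants here are assumptions.

DAMPING = 0.99  # per-step energy loss

def make_grid(w, h):
    return [[0.0] * w for _ in range(h)]

def disturb(grid, x, y, strength=1.0):
    """Drop a 'koi splash' at (x, y) by displacing the height field."""
    grid[y][x] += strength

def step(curr, prev):
    """Advance the ripple one frame; returns (new_curr, new_prev)."""
    h, w = len(curr), len(curr[0])
    nxt = make_grid(w, h)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = (curr[y][x - 1] + curr[y][x + 1] +
                          curr[y - 1][x] + curr[y + 1][x])
            # Neighbour average minus previous height approximates the
            # wave equation; damping keeps the simulation stable.
            nxt[y][x] = (neighbours / 2.0 - prev[y][x]) * DAMPING
    return nxt, curr

curr, prev = make_grid(16, 16), make_grid(16, 16)
disturb(curr, 8, 8)
for _ in range(10):
    curr, prev = step(curr, prev)
```

Rendering the height field as intensity (or extracting its contours, as the streamline step does) yields the expanding ring pattern seen in the animation.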
Another feature of the system is the koi's contour strokes. Three stroke styles and four stroke sizes are provided. The strokes are generated with Generative Adversarial Networks (GANs). The generator is based on an autoencoder model with skip connections applied to some of its layers to preserve the underlying features of the input image. The discriminator has two tasks: discriminating whether an image is real or fake, and classifying the stroke size. It is based on PatchGAN, with an added fully-connected layer for stroke-size classification.
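The discriminator's two outputs feed a combined objective of the kind outlined in Chapter 5 (adversarial, stroke-size classification, and L1 terms). A minimal numeric sketch of how such a full objective might be assembled is shown below; the helper names, scalar stand-ins for network outputs, and weights (λ_L1 = 100 follows the common pix2pix default) are assumptions, not the thesis's actual values:

```python
import math

# Toy scalar stand-ins for the three loss terms of a pix2pix/StarGAN-style
# objective with an auxiliary classification head. Illustrative only.

def adversarial_loss(d_real, d_fake):
    """Standard GAN discriminator loss on scalar scores in (0, 1)."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def classification_loss(probs, true_class):
    """Cross-entropy on the stroke-size class from the extra FC layer."""
    return -math.log(probs[true_class])

def l1_loss(generated, target):
    """Mean absolute error between generated stroke image and ground truth."""
    return sum(abs(g - t) for g, t in zip(generated, target)) / len(generated)

def full_objective(d_real, d_fake, probs, true_class, gen, tgt,
                   lambda_cls=1.0, lambda_l1=100.0):
    """Weighted sum L_adv + lambda_cls * L_cls + lambda_l1 * L_L1."""
    return (adversarial_loss(d_real, d_fake)
            + lambda_cls * classification_loss(probs, true_class)
            + lambda_l1 * l1_loss(gen, tgt))
```

The L1 term keeps the generated stroke close to the ground-truth image, while the classification term lets one discriminator supervise all four stroke sizes.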



Table of Contents:
Abstract
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Background
  1.2 Contributions
  1.3 Thesis Structure
2 Related Work
  2.1 Chinese Painting Animation
  2.2 Producing Chinese Painting without Deep Learning
  2.3 Non-Photorealistic Rendering Approach to Create Stroke-based Painting
  2.4 Generative Adversarial Networks
  2.5 Application of Generative Adversarial Networks on Generating Art
3 Thesis Overview
  3.1 Koi Animation Framework
  3.2 Stroke Synthesis with Generative Adversarial Network
4 Koi Animation Design System
  4.1 User-Interactive Ripple Generation
  4.2 Streamline Generation
  4.3 Wave-line on Boat Scene
5 Controllable Stroke-size with Generative Adversarial Networks
  5.1 Data Collection for Training and Testing Process
  5.2 Network Architecture
    5.2.1 Generator
    5.2.2 Discriminator
  5.3 Loss Functions
    5.3.1 Adversarial Loss
    5.3.2 Classification Loss
    5.3.3 L1 Loss
    5.3.4 Full Objective
  5.4 Training Process
6 Result and Comparison
  6.1 Koi Animation System Result
  6.2 Stroke Synthesis Result
  6.3 Koi Animation Framework with Stroke Synthesis
7 Conclusion and Future Work
  7.1 Conclusion
  7.2 Future Work
References

References:
[1] Xinghua Zhang and Gang Yang. Ripple simulation based on mass-spring model and image space computation. In 3rd International Congress on Image and Signal Processing, 2:553–557. IEEE, 2010.
[2] Jos Stam. Real-time fluid dynamics for games. In Proceedings of the Game Developer Conference, 2003.
[3] Songhua Xu, Yingqing Xu, Sing Bing Kang, David H. Salesin, Yunhe Pan, and Heung-Yeung Shum. Animating Chinese paintings through stroke-based decomposition. ACM Transactions on Graphics (TOG), 25(2):239–267, 2006.
[4] Yu-Chi Lai, Bo-An Chen, Kuo-Wei Chen, Wei-Lin Si, Chih-Yuan Yao, and Eugene Zhang. Data-driven NPR illustrations of natural flows in Chinese painting. IEEE Transactions on Visualization and Computer Graphics, 23(12), 2017.
[5] Nelson S. H. Chu and Chiew-Lan Tai. Real-time painting with an expressive virtual Chinese brush. IEEE Computer Graphics and Applications, 24(5):76–85, 2004.
[6] Ka Wai Kwok, Sheung Man Wong, Ka Wah Lo, and Yeung Yam. Genetic algorithm-based brush stroke generation for replication of Chinese calligraphic character. Pages 1057–1064, 2006.
[7] Ning Xie, Tingting Zhao, Feng Tian, Xiaohua Zhang, and Masashi Sugiyama. Stroke-based stylization learning and rendering with inverse reinforcement learning. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 2531–2537, 2015.
[8] Aaron Hertzmann. Painterly rendering with curved brush strokes of multiple sizes. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pages 453–460, 1998.
[9] Aaron Hertzmann. Fast paint texture. In Proceedings of the 2nd International Symposium on Non-photorealistic Animation and Rendering, NPAR '02, pages 91–, 2002.
[10] Rundong Wu, Zhili Chen, Zhaowen Wang, Jimei Yang, and Steve Marschner. Brush stroke synthesis with a generative adversarial network driven by physically based simulation. In Proceedings of the Joint Symposium on Computational Aesthetics and Sketch-Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering, Expressive '18, pages 12:1–12:10, 2018.
[11] Ahmed M. Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone. CAN: Creative adversarial networks, generating "art" by learning about styles and deviating from style norms. In ICCC, 2017.
[12] Brushstrokes: Styles and techniques of Chinese painting. Retrieved Jan 4, 2019, from the World Wide Web: http://education.asianart.org/sites/asianart.org/files/resource-downloads/Brushstrokes.pdf.
[13] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5967–5976, 2017.
[14] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2242–2251, 2017.
[15] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[16] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241. Springer International Publishing, 2015.
[17] Ning Xie, Hirotaka Hachiya, and Masashi Sugiyama. Artist agent: A reinforcement learning approach to automatic stroke generation in oriental ink painting. In Proceedings of the International Conference on Machine Learning, pages 153–160, 2012.
[18] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 2672–2680, 2014.
[19] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1538–1546, 2015.
[20] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[21] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[22] Bin He, Feng Gao, Daiqian Ma, Boxin Shi, and Ling-Yu Duan. ChipGAN: A generative adversarial network for Chinese ink wash painting style transfer. In Proceedings of the 26th ACM International Conference on Multimedia, MM '18, pages 1172–1180, 2018.
[23] Phillip Pan. Non-photorealistic rendering: Interactive Chinese painting of Yangzhou school painting koi. Master's thesis, National Taiwan University of Science and Technology, 2018.
[24] 桃花源記 (Tale of the Peach Blossom Spring). Retrieved Jul 20, 2018, from the World Wide Web: https://www.youtube.com/watch?v=aH-tJerdw7c&index=5&list=PLn57IaFWmCQzoy2LVuslh6GMKOUwpWFAx, 2003.

Full-Text Release Date: 2024/02/11 (campus network)
Full-Text Release Date: full text not authorized for public release (off-campus network)
Full-Text Release Date: full text not authorized for public release (National Central Library: Taiwan NDLTD system)