
Author: Chia-Chi Chang (張家綺)
Thesis Title: An Automatic GUI Generating Method from Hand-Drawn Sketch to Neat Tableau Based on Deep Neural Networks: A Case of Webpage Layout
Advisor: Chin-Shyurng Fahn (范欽雄)
Committee Members: Jung-Tang Huang (黃榮堂), Jung-Hua Wang (王榮華), Huei-Wen Ferng (馮輝文)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2019
Graduation Academic Year: 107
Language: English
Number of Pages: 66
Keywords: Graphical User Interface, Hand-Drawn Sketch, Deep Neural Networks, Webpage Layout, GUI Skeleton, HTML
  • This study proposes an automatic user interface generation method for webpage layouts that converts hand-drawn sketches into neat webpage tableaux. The system helps reduce the labor and time costs of the web development process and allows users to quickly build design prototypes of webpage layouts. It consists of two main parts: object detection and user interface generation. First, a deep neural network detects webpage elements in the hand-drawn sketch, performing feature extraction and classification while predicting a bounding box for each webpage element. The system then translates each piece of bounding-box information into a GUI skeleton and finally generates a neat webpage layout. Experimental results show that the system converts a hand-drawn sketch into an HTML file with a neat tableau in only 2 to 3 seconds, and its webpage layout generation accuracy is 83.24%. These results indicate that the proposed automatic GUI generation method has great potential in the web development process: it helps users produce high-quality webpage layouts, thereby reducing communication errors, lowering web development costs, and shortening web development time.


    This thesis presents a webpage GUI generator that works from hand-drawn sketches of webpage designs. The generator reduces the labor and time costs of the web development process and allows designers and developers to quickly produce webpage prototypes.
    The system has two main parts: object detection and automatic GUI generation. First, webpage elements are detected in a hand-drawn sketch by a deep neural network. After feature extraction and classification of each webpage element, its bounding box is predicted. The coordinates of the bounding box for each webpage element allow the system to generate a GUI skeleton for webpage development. In the experiments, the automatic GUI generating process takes 2 to 3 seconds, and the accuracy of generating the neat-tableau webpage layout reaches 83.24% on average.
    Our proposed method can be applied in webpage development to generate high-fidelity prototypes for clients to preview the designs. Such a method will reduce miscommunication, save webpage development costs, and shorten webpage development time.
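    The pipeline the abstract describes — detected bounding boxes in, an HTML skeleton out — can be illustrated with a minimal sketch. Note that the element classes, the row-grouping heuristic, and the `boxes_to_html` helper below are hypothetical simplifications for illustration only, not the thesis's actual implementation (which, per the table of contents, builds on Bootstrap):

    ```python
    # Illustrative sketch: turn detected bounding boxes into a simple HTML skeleton.
    # Class names and the row-grouping heuristic are hypothetical, not the thesis's code.

    HTML_FOR_CLASS = {
        "button": '<button type="button">Button</button>',
        "image": '<img src="placeholder.png" alt="image">',
        "text": "<p>Lorem ipsum</p>",
        "input": '<input type="text">',
    }

    def boxes_to_html(boxes, row_tolerance=20):
        """boxes: list of (class_name, x, y, w, h) tuples from the detector.

        Groups boxes into visual rows by their y coordinate, orders each row
        left-to-right by x, and emits a nested <div> skeleton."""
        rows = []
        for box in sorted(boxes, key=lambda b: b[2]):  # sort top-to-bottom by y
            if rows and abs(box[2] - rows[-1][0][2]) <= row_tolerance:
                rows[-1].append(box)                   # close enough: same row
            else:
                rows.append([box])                     # start a new row
        body = []
        for row in rows:
            cells = "".join(
                f"<div>{HTML_FOR_CLASS.get(cls, '<div></div>')}</div>"
                for cls, *_ in sorted(row, key=lambda b: b[1])  # left-to-right
            )
            body.append(f'<div class="row">{cells}</div>')
        return "<html><body>" + "".join(body) + "</body></html>"
    ```

    A real GUI skeleton translator would also scale the boxes into grid columns and handle nesting, but the core idea — mapping each classified box to a markup snippet and arranging snippets by coordinates — is the same.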

    Chinese Abstract i
    Abstract ii
    Acknowledgements iii
    Contents iv
    List of Figures v
    List of Tables vi
    Chapter 1 Introduction 1
      1.1 Overview 1
      1.2 Motivation 2
      1.3 System Descriptions 3
      1.4 Thesis Organization 4
    Chapter 2 Literature Review 5
      2.1 Graphical User Interface 5
      2.2 GUI Skeleton 6
      2.3 Deep Neural Networks 8
    Chapter 3 Object Detection Method 11
      3.1 You Only Look Once (YOLO) 11
      3.2 Anchor Boxes 13
      3.3 Bounding Box Prediction 16
      3.4 Convolutional Neural Network (CNN) 19
      3.5 Residual Blocks of Residual Network 24
      3.6 Activation Function and Loss Function 28
    Chapter 4 Automatic GUI Generating Method 31
      4.1 GUI Skeleton Translator 31
      4.2 Webpage Design Model: Bootstrap 35
      4.3 Printed Layout Rendering 37
    Chapter 5 Experimental Results and Discussions 38
      5.1 Experimental Setup 38
        5.1.1 The Dataset: The Imitation Sketches 39
        5.1.2 The Dataset: Hand-Drawn Sketches 40
      5.2 Results of Object Detection 43
      5.3 Results of Neat Tableau Generation 47
      5.4 Discussions on Experimental Results 50
    Chapter 6 Conclusions and Future Works 52
      6.1 Contributions and Conclusions 52
      6.2 Limitations and Future Works 54
    References 56


    Full text available from 2024/07/30 (campus network)
    Full text available from 2024/07/30 (off-campus network)
    Full text available from 2029/07/30 (National Central Library: Taiwan NDLTD system)