
Graduate Student: Shao-Min Liu (柳紹民)
Thesis Title: Active Learning-based Power Estimation and Applications
Advisor: Shao-Yun Fang (方劭云)
Committee Members: Yi-Yu Liu (劉一宇), Shyue-Kung Lu (呂學坤), Yung-Chih Chen (陳勇志)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Academic Year of Graduation: 109 (2020-2021)
Language: English
Number of Pages: 51
Keywords: Active learning, Machine learning, Power estimation, K-means, Seq2seq

To meet the stringent performance, area, and power requirements of modern VLSI designs, providing accurate power estimation early in the design flow is essential for the design exploration and verification of systems-on-chip (SoCs). Previous work estimates power consumption from the switching activities of registers, using the measured parameters either for curve-fitting techniques (such as regression) or for deep learning models built per sub-circuit. However, these approaches do not account for the fact that obtaining training labels is very time-consuming; effectively reducing the amount of training data while improving prediction accuracy is therefore the goal of this work. This thesis performs early-stage power estimation with an active-learning-based deep learning method and proposes two enhancements. First, active learning clusters all of the data and selects suitable training samples, reducing the large amount of time needed to obtain training labels. Second, the thesis introduces a new feature representation for glitch power and an RNN auto-encoder for the multi-cycle delay path problem, both of which improve estimation accuracy. Experimental results show that the proposed feature representation and active-learning selection mechanism let the test data converge effectively and raise overall accuracy: for a small circuit, selecting 10% of the training data improves accuracy by up to 32%, and for two larger circuits, selecting 40% of the training data improves accuracy by 75.9% and 47.5%, respectively. Evaluated separately, the glitch power representation adds another 46.2% of accuracy on the small circuit when 80% of the training data is selected, and the RNN auto-encoder adds another 44.9% on the larger circuits when 20% of the training data is selected.
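
The abstract does not spell out how the cluster-based selection works, so the following is only a minimal sketch, assuming each simulated cycle (or window of cycles) is summarized as a fixed-length vector of register switching activities and that K-means centroids are used to pick a representative, diverse subset for labeling. The function name `select_training_samples` and the file names in the usage comment are hypothetical, not taken from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_training_samples(features, budget, random_state=0):
    """Cluster the unlabeled feature vectors and keep the sample closest to
    each centroid, so labeling effort (slow reference power simulation) is
    spent on a diverse, representative subset.

    features : (n_samples, n_features) array of per-cycle switching activities
    budget   : number of samples we can afford to label
    """
    kmeans = KMeans(n_clusters=budget, random_state=random_state, n_init=10)
    labels = kmeans.fit_predict(features)

    selected = []
    for k in range(budget):
        members = np.where(labels == k)[0]
        if members.size == 0:
            continue
        # Distance of each cluster member to its centroid; keep the most central one.
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[k], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)

# Usage (hypothetical data dump): label only the selected cycles with the
# reference power tool, then train the per-circuit power regressor on them.
# X = np.load("switching_activity.npy")
# train_idx = select_training_samples(X, budget=int(0.1 * len(X)))
```

The point of the sketch is only the selection criterion: the expensive reference power simulation is run solely on the chosen cycles, and the power model is trained on that subset before any further selection rounds.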

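Likewise, the RNN auto-encoder for the multi-cycle delay path problem is only named in the abstract. A plausible reading is a Seq2seq-style sequence auto-encoder that compresses several consecutive cycles of switching activity into one fixed-length code, which then feeds the power model. The Keras sketch below is an assumption along those lines; the window length, feature width, and layer sizes are illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN = 8        # cycles in a multi-cycle window (assumed)
N_FEATURES = 128   # switching-activity features per cycle (assumed)
CODE_DIM = 32      # size of the compressed representation

# Encoder: summarize the whole window into one latent vector.
inputs = keras.Input(shape=(SEQ_LEN, N_FEATURES))
code = layers.GRU(CODE_DIM, name="encoder_gru")(inputs)

# Decoder: reconstruct the input sequence from the latent vector.
repeated = layers.RepeatVector(SEQ_LEN)(code)
decoded = layers.GRU(CODE_DIM, return_sequences=True)(repeated)
outputs = layers.TimeDistributed(layers.Dense(N_FEATURES))(decoded)

autoencoder = keras.Model(inputs, outputs, name="rnn_autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")

# Train unsupervised on unlabeled windows, e.g. an array of shape
# (n_windows, SEQ_LEN, N_FEATURES):
# autoencoder.fit(windows, windows, epochs=50, batch_size=64)

# Reuse only the encoder afterwards.
encoder = keras.Model(inputs, code, name="encoder")
```

After unsupervised training, only the `encoder` sub-model is reused: its fixed-length output replaces the raw multi-cycle sequence as the input of the downstream power estimator.
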
Chapter 1. Introduction
  1.1 Background
  1.2 Motivation
  1.3 Related Work
    1.3.1 General Methodology
    1.3.2 ML-based Power Model
  1.4 Contributions
  1.5 Thesis Organization
Chapter 2. Preliminaries
  2.1 Active Learning
  2.2 K-mean Algorithm
  2.3 RNN Auto-encoder Method
  2.4 Power Consumption Type
Chapter 3. Proposed Enhancement Method
  3.1 Proposed Model
  3.2 New Features Representation
  3.3 Multi-cycle Problem (RNN Method)
  3.4 Training Flow
    3.4.1 Overview
    3.4.2 Training Flow - Active Learning
    3.4.3 Training Flow - Initial Error
Chapter 4. Experimental Results
  4.1 Environment and Benchmarks
  4.2 Results
    4.2.1 Evaluate the Active Learning Method
    4.2.2 Evaluate the Glitch Power Representation Method
    4.2.3 Evaluate the RNN Auto-encoder Method
    4.2.4 Run Time Analysis
    4.2.5 Three Types of Power Consumption
Chapter 5. Conclusion

