| Field | Value |
|---|---|
| Author | Shih-Hsin Lin (林世鑫) |
| Thesis Title | CUDABlock: A CUDA GUI Tool (CUDABlock: CUDA圖形介面工具) |
| Advisor | Yuan-Shin Hwang (黃元欣) |
| Committee Members | Gwan-Hwan Hwang (黃冠寰), Jen-Wei Hsieh (謝仁偉) |
| Degree | Master |
| Department | College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering |
| Publication Year | 2014 |
| Academic Year | 103 |
| Language | Chinese |
| Pages | 49 |
| Keywords (Chinese) | CUDA, GPGPU, OpenBlock, 圖形程式語言 |
| Keywords (English) | CUDA, GPGPU, OpenBlock, graphical programming |
CUDA is a language for high-performance computing built on GPGPU (General-Purpose computing on Graphics Processing Units): graphics processors, originally designed for rendering tasks, are used to perform general-purpose computations that would otherwise run on the CPU. These workloads often have nothing to do with graphics. When the amount of arithmetic performed on the data far exceeds the cost of data scheduling and transfer, a general-purpose GPU greatly outperforms a traditional CPU implementation.

CUDA's parallel execution model is SIMT (Single Instruction, Multiple Threads). Because a parallel language is conceptually not sequential, it presents a much higher barrier to entry for programmers accustomed to sequential languages such as C and C++.

In this thesis, we implement a graphical interface tool based on OpenBlock. Its mechanism is to drag program blocks and arrange them logically, and from that arrangement automatically generate the difficult parts of the CUDA code, thereby simplifying CUDA programming and reducing the chance of programming errors.
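As a minimal illustration of the SIMT model (a sketch for exposition, not code from the thesis), the following kernel launches one thread per array element; every thread executes the same instruction stream, each on its own data:

```cuda
// SIMT sketch: each thread adds one pair of elements.
// a, b, c are assumed to be device pointers prepared by the host.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Global thread index: block offset plus offset within the block.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

// Launch with ceil(n / 256) blocks of 256 threads each:
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

The index computation and the bounds guard are exactly the kind of detail that trips up programmers coming from sequential C/C++.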
Graphics Processing Units (GPUs) have recently gained wide popularity among researchers and developers as accelerators for applications outside the domain of traditional computer graphics.
Parallel programming is difficult because it has traditionally meant making many CPUs work together, as in a cluster. Desktop applications have been slow to take advantage of multi-core CPUs because of the difficulty of splitting a single program into one that works across multiple threads. These difficulties arise from the fact that a CPU is inherently a serial processor, and coordinating multiple CPUs requires complex software.
We implemented a tool called CUDABlock that enables graphical programming on GPUs. With this tool, CUDA source code can be generated automatically and easily.
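Much of the "difficult" CUDA code that such a tool can generate is fixed host-side boilerplate: allocate device memory, copy inputs to the device, launch the kernel, copy results back, and free. The sketch below is illustrative only (the wrapper name and kernel are assumptions, not actual CUDABlock output):

```cuda
#include <cuda_runtime.h>

// Hypothetical generated host wrapper around a kernel named vecAdd.
void runVecAdd(const float *a, const float *b, float *c, int n)
{
    float *d_a, *d_b, *d_c;
    size_t bytes = n * sizeof(float);

    cudaMalloc(&d_a, bytes);                            // allocate device buffers
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);  // copy inputs to device
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n); // launch kernel

    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);  // copy result back
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);        // release device memory
}
```

Because this allocation/transfer/launch/free pattern is the same for most kernels, it lends itself to automatic generation from graphically arranged blocks.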