Abstract
We present a memory model to analyze and improve the performance of scientific algorithms on graphics processing units (GPUs). Our memory model is based on texturing hardware, which uses a 2D block-based array representation to perform the underlying computations. We incorporate many characteristics of GPU architectures, including smaller cache sizes and 2D block representations, and use the 3C's model to analyze cache misses. Moreover, we present techniques to improve the performance of nested loops on GPUs. In order to demonstrate the effectiveness of our model, we highlight its performance on three memory-intensive scientific applications: sorting, fast Fourier transform, and dense matrix multiplication. In practice, our cache-efficient algorithms for these applications are able to achieve memory throughput of 30-50 GB/s on an NVIDIA 7900 GTX GPU. We also compare our results with prior GPU-based and CPU-based implementations on high-end processors. In practice, we are able to achieve a 2-5x performance improvement.
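To make the 2D block idea concrete, below is a minimal sketch of block (tile) based nested loops for dense matrix multiplication, the kind of restructuring a cache-oriented model favors. The tile size BLK and loop structure are illustrative assumptions, not the authors' implementation.

```c
#include <stddef.h>

#define BLK 32  /* illustrative tile edge; chosen to fit tiles in cache */

/* Blocked dense matrix multiply: C += A * M for n x n row-major matrices.
   Working one BLK x BLK tile at a time keeps each tile resident in cache
   while it is reused, which is the effect a 2D block layout aims for. */
void matmul_blocked(const float *A, const float *M, float *C, size_t n)
{
    for (size_t ii = 0; ii < n; ii += BLK)
        for (size_t kk = 0; kk < n; kk += BLK)
            for (size_t jj = 0; jj < n; jj += BLK)
                for (size_t i = ii; i < ii + BLK && i < n; ++i)
                    for (size_t k = kk; k < kk + BLK && k < n; ++k) {
                        float a = A[i * n + k];
                        for (size_t j = jj; j < jj + BLK && j < n; ++j)
                            C[i * n + j] += a * M[k * n + j];
                    }
}
```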
Abstract
Graphics processors are designed to perform many floating-point operations per second. Consequently, they are an attractive architecture for high-performance computing at a low cost. Nevertheless, it is still not very clear how to exploit all their potential for general-purpose applications. In this work we present a comprehensive study of the performance of an application executing on the GPU. In addition, we analyze the possibility of using the graphics card to extend the lifetime of a computer system. In our experiments we compare the execution on a mid-class GPU (NVIDIA GeForce FX 5700LE) with a high-end CPU (Pentium 4 3.2GHz). The results show that to achieve high speedup with the GPU you need to: (1) format the vectors into two-dimensional arrays; (2) process large data arrays; and (3) perform a considerable amount of operations per data element. Finally, we study the performance when upgrading a low-end system by simply adding a GPU. This solution is cheaper, results in smaller power consumption and achieves higher speedup (8.1x versus 1.3x) than a full upgrade to a new high-end system.
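Point (1), formatting vectors as two-dimensional arrays, amounts to padding a length-n vector into a roughly square texture-shaped layout. The sketch below shows one plausible way to do this; the width/height choice and the helper name vector_to_2d are assumptions for illustration only.

```c
#include <stdlib.h>
#include <string.h>
#include <math.h>

/* Lay a length-n vector out as a roughly square w x h 2D array, the shape
   texturing hardware expects; the unused tail is zero-padded.
   Element i ends up at row i / w, column i % w. */
float *vector_to_2d(const float *v, size_t n, size_t *w, size_t *h)
{
    *w = (size_t)ceil(sqrt((double)n));
    *h = (n + *w - 1) / *w;
    float *tex = calloc(*w * *h, sizeof(float));
    if (tex)
        memcpy(tex, v, n * sizeof(float));
    return tex;
}
```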
Abstract
The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the latest research in mapping general-purpose computation to graphics hardware. We begin with the technical motivations that underlie general-purpose computation on graphics processors (GPGPU) and describe the hardware and software developments that have led to the recent interest in this field. We then aim the main body of this report at two separate audiences. First, we describe the techniques used in mapping general-purpose computation to graphics hardware. We believe these techniques will be generally useful for researchers who plan to develop the next generation of GPGPU algorithms and techniques. Second, we survey and categorize the latest developments in general-purpose application development on graphics hardware.
Abstract
In this paper, we present Brook for GPUs, a system for general-purpose computation on programmable graphics hardware. Brook extends C to include simple data-parallel constructs, enabling the use of the GPU as a streaming coprocessor. We present a compiler and runtime system that abstracts and virtualizes many aspects of graphics hardware. In addition, we present an analysis of the effectiveness of the GPU as a compute engine compared to the CPU, to determine when the GPU can outperform the CPU for a particular algorithm. We evaluate our system with five applications, the SAXPY and SGEMV BLAS operators, image segmentation, FFT, and ray tracing. For these applications, we demonstrate that our Brook implementations perform comparably to hand-written GPU code and up to seven times faster than their CPU counterparts.
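The SAXPY operator mentioned above is simply y = a*x + y applied across whole vectors, which is the element-wise, data-parallel shape that Brook expresses as a stream kernel. The sketch below writes the same operation as a plain C loop purely to show that shape; it is not Brook syntax or the paper's GPU implementation.

```c
#include <stddef.h>

/* SAXPY: y[i] = a * x[i] + y[i] for all i.
   In a streaming model this body runs once per element with no loop-carried
   dependence, so every element can be processed in parallel on the GPU. */
void saxpy(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```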