Productive Programming of GPU Clusters with OmpSs
Abstract
Clusters of GPUs are emerging as a new computational scenario. Programming them requires hybrid models that increase the complexity of applications and reduce programmer productivity. We present the implementation of OmpSs for clusters of GPUs, which supports asynchrony and heterogeneity for task parallelism. It is based on annotating a serial application with directives that are translated by the compiler. With it, the same program that runs sequentially on a node with a single GPU can run in parallel on multiple GPUs, either local (within a single node) or remote (across a cluster of GPUs). Besides performing a task-based parallelization, the runtime system moves data as needed between the different nodes and GPUs, minimizing the impact of communication through affinity scheduling, caching, and overlapping communication with computation. We show several applications programmed with OmpSs and their performance with multiple GPUs on a local node and on remote nodes. The results show a good tradeoff between performance and programmer effort.