Toward a Generic Hybrid CPU-GPU Parallelization of Divide-and-Conquer Algorithms
Top 16% of 2013 papers by citations
Abstract
The increasing power and decreasing cost of Graphics Processing Units (GPUs), together with the development of programming languages for General-Purpose Computing on GPUs (GPGPU), have led to the development and implementation of fast parallel algorithms on this architecture for a large spectrum of applications. Given the stream-processing characteristics of GPUs, most practical applications so far target highly data-parallel algorithms. Many problems, however, admit task-parallel solutions or a combination of task- and data-parallel algorithms. For these, a hybrid CPU-GPU parallel algorithm that combines the highly parallel stream-processing power of GPUs with the higher scalar power of multi-cores is likely to be superior. In this paper we describe a generic translation of any recursive sequential implementation of a divide-and-conquer algorithm into an implementation that benefits from running in parallel on both multi-cores and GPUs. The translation is generic in the sense that it requires little knowledge of the particular algorithm. We then present a scheduling and work-division scheme that adapts to the characteristics of each algorithm and the underlying architecture, efficiently balancing the workload between GPU and CPU. Our experiments show a 4.5x speedup over a single-core recursive implementation, demonstrating the accuracy and practicality of the approach.
Related Papers
- Parallel connected-component labeling algorithm for GPGPU applications (2010), 14 citations
- Parallel Programming for High-Performance Computing on CUDA (2009)
- CUDA-NP: Realizing Nested Thread-Level Parallelism in GPGPU Applications (2015)
- Introductory on GPGPU Programming Technique (2010)
- Modern Video Adapter Architectures. GPGPU Technology. Part 2 (2013)