Optimal loop unrolling for GPGPU programs
Top 10% of 2010 papers
Abstract
Graphics Processing Units (GPUs) are massively parallel, many-core processors with tremendous computational power and very high memory bandwidth. With the advent of general purpose programming models such as NVIDIA's CUDA and the new standard OpenCL, general purpose programming using GPUs (GPGPU) has become very popular. However, the GPU architecture and programming model have brought with them many new challenges and opportunities for compiler optimizations. One such classical optimization is loop unrolling. Current GPU compilers perform only limited loop unrolling. In this paper, we attempt to understand the impact of loop unrolling on GPGPU programs. We develop a semi-automatic, compile-time approach for identifying optimal unroll factors for suitable loops in GPGPU programs. In addition, we propose techniques for reducing the number of unroll factors evaluated, based on the characteristics of the program being compiled and the device being compiled to. We use these techniques to evaluate the effect of loop unrolling on a range of GPGPU programs and show that we correctly identify the optimal unroll factors. The optimized versions run up to 70 percent faster than the unoptimized versions.
Related Papers
- Fast-path loop unrolling of non-counted loops to enable subsequent compiler optimizations (2018), 13 citations
- Combining Worst-Case Timing Models, Loop Unrolling, and Static Loop Analysis for WCET Minimization (2009), 20 citations
- An Unfolding-Based Loop Optimization Technique (2003), 2 citations
- What can we gain by unfolding loops? (2004), 6 citations
- Code Transformation Impact on Compiler-based Optimization: A Case Study in the CMSSW (2021)