Adaptive heterogeneous scheduling for integrated GPUs
Top 1% of 2014 papers
Abstract
Many processors today integrate a CPU and a GPU on the same die, which allows them to share resources such as physical memory and lowers the cost of CPU-GPU communication. As a consequence, programmers can effectively use both the CPU and the GPU to execute a single application. This paper presents novel adaptive scheduling techniques for integrated CPU-GPU processors. We present two online profiling-based scheduling algorithms: naïve and asymmetric. Our asymmetric scheduling algorithm uses low-overhead online profiling to automatically partition the work of data-parallel kernels between the CPU and the GPU, without input from application developers. It profiles the CPU and the GPU in a way that does not penalize GPU-centric workloads, which run significantly faster on the GPU. It adapts to application characteristics by addressing: 1) load imbalance due to irregularity caused by, e.g., data-dependent control flow; 2) different amounts of work on each kernel call; and 3) multiple kernels with different characteristics. Unlike many existing approaches, which primarily target NVIDIA discrete GPUs, our scheduling algorithm requires no offline processing.
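The core idea of online profiling-based partitioning can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the function names (`profile_partition`, `split_work`) and the asymmetric 25/75 profiling slice are assumptions chosen to show the shape of the technique, where a small slice of a kernel's work is timed on each device and the remainder is split in proportion to the measured throughputs.

```python
import time


def split_work(cpu_rate, gpu_rate, remaining):
    """Split the remaining work items in proportion to measured
    per-device throughputs (items per second)."""
    cpu_share = int(remaining * cpu_rate / (cpu_rate + gpu_rate))
    return cpu_share, remaining - cpu_share


def profile_partition(run_cpu, run_gpu, total_items, profile_frac=0.1):
    """Time a small profiling slice on each device, then partition
    the rest of the kernel's work between CPU and GPU.

    `run_cpu` / `run_gpu` are stand-ins for launching the kernel on
    each device over a given number of work items (hypothetical API).
    """
    # Asymmetric profiling slice (illustrative ratio): give the GPU a
    # larger share so a GPU-centric workload is not held back by a
    # long profiling run on the much slower CPU.
    cpu_items = max(1, int(total_items * profile_frac * 0.25))
    gpu_items = max(1, int(total_items * profile_frac * 0.75))

    t0 = time.perf_counter()
    run_cpu(cpu_items)
    t1 = time.perf_counter()
    run_gpu(gpu_items)
    t2 = time.perf_counter()

    cpu_rate = cpu_items / (t1 - t0)  # measured CPU throughput
    gpu_rate = gpu_items / (t2 - t1)  # measured GPU throughput

    remaining = total_items - cpu_items - gpu_items
    cpu_share, gpu_share = split_work(cpu_rate, gpu_rate, remaining)
    return cpu_share, gpu_share
```

Because the split is recomputed from fresh measurements, such a scheme can adapt across kernel calls with different amounts of work, which matches the adaptivity goals the abstract lists; the paper's actual algorithm is more sophisticated than this sketch.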
Related Papers
- A CPU Allocation Method Considering Management-Domain Overhead in a Xen Virtualization Environment (2012)
- NovAtel's novel approach to CPU usage measurement (1991)
- A Virtual Machine Allocation Scheme based on CPU Utilization in Cloud Computing (2011)
- CPU Monitoring System Based on Xen Environment (2012)
- An Energy-Aware Scheduler and a DVFS Technique Using Effective Workload (2016)