Load Value Approximation
Top 1% of 2014 papers by citations.
Abstract
Approximate computing exploits opportunities that arise when applications can tolerate error or inexactness. Such applications, ranging from multimedia processing to machine learning, operate on inherently noisy and imprecise data, so some loss in output value integrity can be traded for improved processor performance and energy efficiency. Since memory accesses consume substantial latency and energy, we explore load value approximation, a microarchitectural technique that learns value patterns and generates approximations of the data. The processor uses these approximate values to continue executing without incurring the high cost of accessing memory, removing load instructions from the critical path. Load value approximation can also prevent approximated loads from accessing memory at all, yielding energy savings. On a range of PARSEC workloads, we observe up to 28.6% speedup (8.5% on average) and 44.1% energy savings (12.6% on average) while maintaining low output error. By exploiting the approximate nature of applications, we draw closer to the ideal latency and energy of accessing memory.
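The core idea can be sketched in software. The following is a minimal, hypothetical model of a load value approximator, assuming a per-load-PC last-value-plus-stride predictor gated by a confidence counter; the paper's actual hardware design may differ (names like `LoadValueApproximator`, `predict`, and `train` are illustrative, not from the paper).

```python
class LoadValueApproximator:
    """Hypothetical sketch: approximates load values so the processor can
    continue without waiting on memory (last-value + stride per load PC)."""

    def __init__(self, confidence_threshold=4, max_confidence=7):
        self.table = {}  # load PC -> {"last": value, "stride": delta, "conf": counter}
        self.threshold = confidence_threshold
        self.max_conf = max_confidence

    def predict(self, pc):
        """Return an approximate value if confidence is high enough,
        else None (the load must access memory)."""
        e = self.table.get(pc)
        if e is not None and e["conf"] >= self.threshold:
            return e["last"] + e["stride"]
        return None

    def train(self, pc, actual):
        """Update the learned value pattern with the actual memory value."""
        e = self.table.setdefault(pc, {"last": actual, "stride": 0, "conf": 0})
        predicted = e["last"] + e["stride"]
        if predicted == actual:
            e["conf"] = min(e["conf"] + 1, self.max_conf)
        else:
            # Mispredicted: lower confidence and relearn the stride.
            e["conf"] = max(e["conf"] - 1, 0)
            e["stride"] = actual - e["last"]
        e["last"] = actual
```

For a load that walks an array with a regular stride, the predictor warms up and then supplies approximations without a memory access:

```python
lva = LoadValueApproximator(confidence_threshold=2)
for v in [10, 20, 30, 40, 50]:
    approx = lva.predict(0x400)  # None until confidence builds; then 50
    lva.train(0x400, v)
```

The confidence gate is what bounds output error: only loads whose value patterns have proven predictable are approximated, while the rest pay the normal memory latency.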