Anatomy of High-Performance Many-Threaded Matrix Multiplication
Top 10% of 2014 papers
Abstract
BLIS is a new framework for rapid instantiation of the BLAS. We describe how BLIS extends the "GotoBLAS approach" to implementing matrix multiplication (GEMM). While GEMM was previously implemented as three loops around an inner kernel, BLIS exposes two additional loops within that inner kernel, casting the computation in terms of the BLIS micro-kernel so that porting GEMM becomes a matter of customizing this micro-kernel for a given architecture. We discuss how this facilitates a finer level of parallelism that greatly simplifies the multithreading of GEMM as well as additional opportunities for parallelizing multiple loops. Specifically, we show that with the advent of many-core architectures such as the IBM PowerPC A2 processor (used by Blue Gene/Q) and the Intel Xeon Phi processor, parallelizing both within and around the inner kernel, as the BLIS approach supports, is not only convenient, but also necessary for scalability. The resulting implementations deliver what we believe to be the best open source performance for these architectures, achieving both impressive performance and excellent scalability.