Exploiting task and data parallelism on a multicomputer
Top 10% of 1993 papers by citation count
Abstract
For many applications, achieving good performance on a private-memory parallel computer requires exploiting data parallelism as well as task parallelism. Depending on the size of the input data set and the number of nodes (i.e., processors), different tradeoffs between task and data parallelism are appropriate for a parallel system. Most existing compilers focus on either data parallelism or task parallelism, so to achieve the desired results, the programmer must program the two kinds of parallelism separately. We have taken a unified approach that exploits both kinds of parallelism in a single framework with an existing language. This approach eases the task of programming and exposes the tradeoffs between data and task parallelism to the compiler. We have implemented a parallelizing Fortran compiler for the iWarp system based on this approach. We discuss the design of our compiler and present performance results that validate our approach.
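The abstract's central idea, running independent tasks concurrently while also splitting each task's data across a share of the available processors, can be sketched in a few lines. This is a minimal Python illustration of the general technique, not the paper's Fortran/iWarp implementation; the function names and the equal split of the worker budget among tasks are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_map(fn, data, workers):
    """Data parallelism: split `data` into `workers` chunks and apply
    `fn` to each chunk concurrently, preserving element order."""
    chunk = (len(data) + workers - 1) // workers  # ceiling division
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda part: [fn(x) for x in part], parts)
    return [y for part in results for y in part]

def run_tasks(tasks, total_workers):
    """Task parallelism: run independent (fn, data) tasks concurrently,
    giving each an equal share of the worker budget for its inner
    data-parallel map. How the budget is divided is exactly the kind of
    task-vs-data tradeoff the abstract describes."""
    per_task = max(1, total_workers // len(tasks))
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(data_parallel_map, fn, data, per_task)
                   for fn, data in tasks]
        return [f.result() for f in futures]

# Two independent tasks over the same input, four workers in total.
squares, cubes = run_tasks(
    [(lambda x: x * x, list(range(8))),
     (lambda x: x * x * x, list(range(8)))],
    total_workers=4)
```

Shifting `per_task` toward 1 favors pure task parallelism; giving all workers to a single task degenerates to pure data parallelism. A compiler that sees both forms in one framework can choose the split based on data size and node count, which is the tradeoff the unified approach exposes.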