A GSA-based compiler infrastructure to extract parallelism from complex loops
Citations over time: top 24% of 2003 papers
Abstract
This paper presents a new approach for the detection of coarse-grain parallelism in loop nests that contain complex computations, including subscripted subscripts as well as conditional statements that introduce complex control flow at run-time. The approach is based on recognizing the computational kernels calculated in a loop without considering the semantics of the code. The detection is carried out on top of the Gated Single Assignment (GSA) program representation at two different levels. First, the use-def chains between the statements that compose the strongly connected components (SCCs) of the GSA use-def chain graph are analyzed (intra-SCC analysis); as a result, the kernel computed in each SCC is recognized. Second, the use-def chains between statements of different SCCs are examined (inter-SCC analysis). This second abstraction level enables the compiler to detect more complex computational kernels. A prototype was implemented using the infrastructure provided by the Polaris compiler. Experimental results are presented that show the effectiveness of our approach for the detection of coarse-grain parallelism in a suite of real codes.
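The intra-SCC analysis described above operates on the strongly connected components of a statement-level use-def graph built from the GSA form. The following sketch (not the paper's implementation; the statement names, the toy graph, and the mu/gamma labels are illustrative assumptions) shows how such SCCs can be computed with Tarjan's algorithm, so that cyclic components, the candidates for kernel recognition such as inductions or reductions, can be separated from acyclic ones:

```python
def tarjan_scc(graph):
    """Return the SCCs of a directed graph given as {node: [successors]}."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.remove(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs

# Toy GSA use-def graph: an edge s -> t means statement s uses a value
# defined by statement t. "mu_i"/"add_i" model a loop index and its
# update; "gamma" models a conditional merge. All names are hypothetical.
use_def = {
    "mu_i":  ["add_i"],   # loop-header mu node depends on the update
    "add_i": ["mu_i"],    # i = i + 1 uses the mu value: a cycle (one SCC)
    "load":  ["mu_i"],    # a read a(i) depends on the index SCC
    "gamma": ["load"],    # conditional merge of the loaded value
}

sccs = tarjan_scc(use_def)
cycles = [c for c in sccs if len(c) > 1]
print(cycles)  # → [['add_i', 'mu_i']]
```

In this toy graph the cyclic component {mu_i, add_i} is the kind of self-dependent SCC the intra-SCC phase would classify (here, an induction-like kernel), while the acyclic components (load, gamma) would be examined by the inter-SCC phase through the edges that connect them to the cycle.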
Related Papers
- → Relating data-parallelism and (and-) parallelism in logic programs (1996), 11 cited
- → Data Parallelism, Control Parallelism, and Related Issues (2000), 1 cited
- → A MODEL OF SPECULATIVE PARALLELISM (1992), 4 cited
- → Multi-level parallelism in Phylocon algorithm (2010)
- → On the Concepts of Parallelism in Biomolecular Computing (2018)