
Current and Future Directions

Recently the emphasis of research on parallel algorithms has shifted to pragmatic issues. The theoretical work on algorithms has been complemented by extensive experimentation. This experimental work has yielded insights into how to build parallel machines [1], how to make parallel algorithms perform well in practice [8], how to model parallel machines more accurately, and how to express parallel algorithms in parallel programming languages.

Two effective parallel programming paradigms have emerged: control-parallel programming and data-parallel programming. In a control-parallel program, multiple independent processes or functions may execute simultaneously on different processors and communicate with each other. Some of the most successful control-parallel programming systems include Linda, MPI, and PVM. In each step of a data-parallel program, an operation is performed in parallel across a set of data. Successful data-parallel programming languages include *Lisp, NESL, and HPF. Although the data-parallel paradigm might appear less general than the control-parallel paradigm, most parallel algorithms found in the literature can be expressed more naturally using data-parallel constructs.
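To make the contrast concrete, here are two minimal sketches of the same computation, a parallel sum over the integers 0 through n-1, one in each style. Both are illustrations under stated assumptions, not implementations taken from the systems above. The first is a control-parallel sketch using MPI, one of the systems named above: independent processes each sum their own block of the input and then communicate explicitly to combine the partial results.

    /* Control-parallel sum with MPI.  Each process works on its own
     * block independently, then the partial sums are combined with an
     * explicit reduction.  Compile with an MPI wrapper, for example:
     *   mpicc sum_mpi.c -o sum_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process independently sums its own block of [0, n). */
        const long n = 1000000;
        long lo = rank * (n / nprocs);
        long hi = (rank == nprocs - 1) ? n : lo + n / nprocs;
        double partial = 0.0;
        for (long i = lo; i < hi; i++)
            partial += (double)i;

        /* The processes communicate: partial sums are reduced onto
         * process 0, which prints the result. */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %.0f\n", total);

        MPI_Finalize();
        return 0;
    }

The second sketch expresses the same sum in the data-parallel style: a single operation is applied across the whole index set in one step, and the contributions are combined by a reduction. Because the data-parallel languages named above (*Lisp, NESL, and HPF) are less commonly installed today, the sketch uses an OpenMP directive in C as a stand-in for data-parallel constructs; OpenMP is an assumption of this illustration, not a system the article discusses.

    /* Data-parallel sum.  One operation is performed in parallel
     * across the set of indices; the reduction clause combines the
     * per-element contributions.  Compile with, for example:
     *   gcc -fopenmp sum_dp.c -o sum_dp
     */
    #include <stdio.h>

    int main(void) {
        const long n = 1000000;
        double total = 0.0;

        /* A single parallel step over all elements. */
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < n; i++)
            total += (double)i;

        printf("sum = %.0f\n", total);
        return 0;
    }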

There has also been a focus on solving problems from applied domains, including computational biology, astronomy, seismology, fluid dynamics, scientific visualization, computer-aided design, and database management. Interesting algorithmic problems arising from these domains include generating meshes for finite element analysis, solving sparse linear systems, solving n-body problems, pattern matching, ray tracing, and many others.
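To make one of these problems concrete, the following is an illustrative sketch, not drawn from the article, of a single naive O(n^2) step of an n-body force computation in the plane. The outer loop is independent per body, so the step is naturally data-parallel over the bodies; the positions, masses, and the OpenMP directive are all assumptions of the illustration.

    /* One naive O(n^2) step of a planar n-body computation: for each
     * body, accumulate the gravitational force exerted by all others.
     * Compile with, for example:  gcc -fopenmp nbody.c -o nbody -lm
     */
    #include <math.h>
    #include <stdio.h>

    #define N 4

    int main(void) {
        double x[N] = {0.0, 1.0, 0.0, 1.0};   /* positions */
        double y[N] = {0.0, 0.0, 1.0, 1.0};
        double m[N] = {1.0, 2.0, 1.0, 0.5};   /* masses */
        double fx[N], fy[N];                  /* accumulated forces */
        const double G = 6.674e-11;           /* gravitational constant */

        /* Each body's force is computed independently of the others,
         * so the outer loop parallelizes in the data-parallel style. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            fx[i] = fy[i] = 0.0;
            for (int j = 0; j < N; j++) {
                if (j == i) continue;
                double dx = x[j] - x[i], dy = y[j] - y[i];
                double r2 = dx * dx + dy * dy;
                double r  = sqrt(r2);
                double f  = G * m[i] * m[j] / r2;  /* force magnitude */
                fx[i] += f * dx / r;   /* components along unit vector */
                fy[i] += f * dy / r;
            }
        }
        for (int i = 0; i < N; i++)
            printf("body %d: fx = %g, fy = %g\n", i, fx[i], fy[i]);
        return 0;
    }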

Commodity personal computers with multiple processors have begun to appear on the market. As this trend continues, we expect the use of parallel algorithms to increase dramatically.



Guy Blelloch, blelloch@cs.cmu.edu