Next: Parallel Complexity Theory
Up: Parallel Algorithms
Previous: Parallel Models of Computation
A major advance in parallel algorithms has been the identification of
fundamental algorithmic techniques. Some of these techniques are also
used by sequential algorithms, but play a more prominent role in
parallel algorithms, while others are unique to parallelism. Here we
list some of these techniques with a brief description of each.
- Divide-and-Conquer. Divide-and-conquer is a natural paradigm
for parallel algorithms. After dividing a problem into two or more
subproblems, the subproblems can be solved in parallel. Typically the
subproblems are solved recursively and thus the next divide step
yields even more subproblems to be solved in parallel. For example,
suppose we want to compute the convex hull of a set of n points in
the plane (i.e., compute the smallest convex polygon that encloses all
of the points). This can be implemented by splitting the points into
the leftmost ⌈n/2⌉ and the rightmost ⌊n/2⌋, recursively finding the
convex hull of each set in parallel, and then merging the two resulting
hulls. Divide-and-conquer has proven to be one of the most powerful
techniques for solving problems in parallel with applications ranging
from linear systems to computer graphics and from factoring large
numbers to n-body simulations.
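The hull-merging step is fairly intricate, so as a minimal sketch of the divide-and-conquer paradigm itself, the hypothetical `parallel_mergesort` below applies the same structure (divide, solve the subproblems in parallel, combine) to sorting, using Python's `concurrent.futures` to run the two recursive calls concurrently. The function names and the `depth` cutoff are illustrative, not from the original text.

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    # Combine step: merge two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def parallel_mergesort(xs, executor, depth=2):
    # Divide the problem in two; the two subproblems are independent,
    # so one can be handed to another worker while we solve the other.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    if depth > 0:
        left_future = executor.submit(parallel_mergesort, xs[:mid],
                                      executor, depth - 1)
        right = parallel_mergesort(xs[mid:], executor, depth - 1)
        left = left_future.result()
    else:
        # Below the cutoff, recurse sequentially to limit task overhead.
        left = parallel_mergesort(xs[:mid], executor, 0)
        right = parallel_mergesort(xs[mid:], executor, 0)
    return merge(left, right)
```

The `depth` cutoff is the usual practical compromise: spawning a task per recursive call would swamp the scheduler, so parallelism is used only near the top of the recursion tree.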
- Randomization. The use of random numbers is ubiquitous in
parallel algorithms. Intuitively, randomness is helpful because it
allows processors to make local decisions which, with high
probability, add up to good global decisions. For example, suppose we
want to sort a collection of integer keys. This can be accomplished
by partitioning the keys into buckets and then sorting within each bucket.
For this to work well, the buckets must represent non-overlapping
intervals of integer values, and contain approximately the same number
of keys. Randomization is used to determine the boundaries of the
intervals. First each processor selects a random sample of its
keys. Next all of the selected keys are sorted together. Finally
these keys are used as the boundaries. Such random sampling is also
used in many parallel computational geometry, graph, and string
matching algorithms. Other uses of randomization include symmetry
breaking, load balancing, and routing algorithms.
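The sampling scheme above (the heart of sample sort) can be sketched as follows. This is a sequential illustration: in a parallel setting each of the p buckets would be sorted by a different processor. The parameter names `p` and `oversample` are illustrative choices, not from the original text.

```python
import bisect
import random

def sample_sort(keys, p=4, oversample=8):
    # Randomly sample the keys, sort the sample, and pick p - 1 evenly
    # spaced sample elements as splitters (bucket boundaries).
    if len(keys) <= p:
        return sorted(keys)
    sample = sorted(random.sample(keys, min(len(keys), p * oversample)))
    splitters = [sample[i * len(sample) // p] for i in range(1, p)]

    # Partition: each key goes to the bucket whose interval contains it.
    buckets = [[] for _ in range(p)]
    for k in keys:
        buckets[bisect.bisect_right(splitters, k)].append(k)

    # In parallel, each processor would sort its own bucket; since the
    # buckets cover disjoint intervals, concatenation yields sorted order.
    out = []
    for b in buckets:
        out.extend(sorted(b))
    return out
```

Oversampling (taking more sample keys than splitters) is what makes the bucket sizes approximately equal with high probability.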
- Parallel Pointer Manipulations. Many of the traditional
sequential techniques for manipulating lists, trees, and graphs do not
translate easily into parallel techniques. For example, techniques
such as traversing the elements of a linked list, visiting the nodes
of a tree in postorder, or performing a depth-first traversal of a
graph appear to be inherently sequential. Fortunately, each of these
techniques can be replaced by efficient parallel techniques. These
parallel techniques include pointer jumping, the Euler-tour technique,
ear decomposition, and graph contraction. For example, one way to
label each node of an n-node list (or tree) with the label of the
last node (or root) is to use pointer jumping. In each
pointer-jumping step each node in parallel replaces its pointer with
that of its successor (or parent). After at most ⌈log n⌉ steps,
every node points to the same node, the end of the list (or root of
the tree).
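Pointer jumping on a list can be sketched as below. The code simulates the parallel machine sequentially: a real PRAM would update all n pointers simultaneously in each step, which the list comprehension imitates by computing every new pointer from the old array before any is overwritten.

```python
import math

def pointer_jump(succ):
    # succ[i] is the successor of node i; the last node points to itself.
    n = len(succ)
    succ = list(succ)
    steps = math.ceil(math.log2(n)) if n > 1 else 0
    for _ in range(steps):
        # Each node replaces its pointer with its successor's pointer.
        # The distance covered by each pointer doubles every step, so
        # ceil(log2 n) steps suffice for all nodes to reach the end.
        succ = [succ[succ[i]] for i in range(n)]
    return succ
```

For example, on the 8-node list 0 → 1 → ... → 7, three jumping steps leave every node pointing at node 7.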
- Others. Other useful techniques include finding small graph
separators for partitioning data among processors to reduce
communication, hashing for balancing load across processors and
mapping addresses to memory, and iterative techniques as a replacement
for direct methods for solving linear systems.
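As one illustration of why iterative methods suit parallel machines, the classical Jacobi iteration for a linear system Ax = b updates every component of x from the previous iterate only, so all n component updates within a sweep are independent and could run on separate processors. This is a minimal sequential sketch, assuming A is diagonally dominant so the iteration converges.

```python
def jacobi(A, b, iterations=100):
    # One Jacobi sweep computes each new x[i] from the old vector x, so
    # the n updates are independent -- the source of the parallelism.
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x
```

A direct method such as Gaussian elimination, by contrast, has a long chain of data dependencies between pivot steps, which is harder to parallelize.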
These techniques have led to efficient parallel algorithms in most
problem areas for which efficient sequential algorithms are known.
In fact, some of the techniques originally developed for parallel
algorithms have led to improvements in sequential algorithms.
Guy Blelloch, blelloch@cs.cmu.edu