Compared to sequential computers,
parallel machines offer substantial performance advantages
at a relatively low cost increase.
Moreover, tools for parallel programming are available
and are rapidly becoming more sophisticated:
Graphical user interfaces support program construction
by interconnecting processes diagrammatically [Lob92].
Compilers and profilers determine potential for parallelism
and help with the parallelization of existing code [BFG93,Lev93].
Debuggers provide graphical means to track and display
the state of any process or thread [Pan93].
However, despite their appealing performance-to-cost ratio and
the increasing availability of tools, parallel computers still
fail to have the expected impact on mainstream computing.
A close look at the state of the art in parallel computing
suggests at least two reasons:
- Parallel programming is inherently complex.
Compared to sequential programming, the programmer
must additionally deal with, for instance,
interference, race conditions, process creation
and termination, shared resources and consistency,
synchronization and deadlock.
Successful treatment of these issues requires
knowledge about, for instance, the location, interconnection,
and relative speeds of processors, and the location of and access
to data.
Dijkstra has pointed out the competing forces governing
the representation of an algorithm in form of
a sequential program.
``On the one hand I knew that programs could have a compelling
and deep logical beauty, on the other hand I was forced to admit
that most programs are presented in a way fit for mechanical
execution but, even if of any beauty at all, totally unfit
for human appreciation.'' [Dij76, page xiii]
This tension is magnified in concurrent programming.
In parallel programs,
efficiency in terms of explicit, finer-grained parallelism
seems to exclude robustness, maintainability, and verifiability.
- The paradigms and patterns of program execution
for various parallel architectures differ substantially.
This lack of commonality makes parallel programming
very architecture dependent.
Consequently, it is hard to move a program from one
architecture to another.
Even if the programming environments seem similar,
the underlying communication and synchronization
mechanisms are often very different.
Typically, a program must be substantially modified
to take full advantage of, or even to execute on,
a different architecture.
Moreover, the development of reliable, widely applicable
performance models is difficult. The change in performance
caused by porting a program may be very hard to predict.
In short, parallel programs typically are not portable.
The loss of portability in turn limits the expected
lifetime of parallel implementations and
their economic viability.
In short, parallel programming to date still is a
complex, difficult endeavour that results in
efficient, yet very specialized and short-lived programs.
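To make the first difficulty concrete, the following sketch (in Python, chosen here only for brevity; the thesis is not tied to any particular language) shows the classic race on a shared counter. The statement counter += 1 is a non-atomic read-modify-write, so without the lock two threads can interleave and lose updates; with the lock, the result is deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Increment the shared counter under a lock."""
    global counter
    for _ in range(increments):
        with lock:  # remove this lock and updates can be lost to interleaving
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; typically less without it
```

Reasoning about even this five-line loop already involves interference, atomicity, and synchronization, none of which arise in its sequential counterpart.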
Considerable research effort has been invested in addressing this
situation and in devising suitable models for concurrent computation.
The research community seems to agree that such
a model should feature the following desirable properties
[ST95]:
- To be tractable, the model should allow the user to view a system
at different levels of abstraction.
Details of, for instance, the decomposition of a
task into parallel threads, of the communication and
synchronization between threads, and of different
evaluation strategies for non-atomic expressions
should be revealed only on demand.
- The model should be complemented by a
software development methodology,
that is, it should allow programs to
be developed from specifications in a structured, disciplined
fashion. The complexity of parallel programming
calls for a strong programming methodology.
A development calculus that yields programs that are
correct by construction is especially attractive, because
it obviates the need for costly and difficult debugging,
it documents the development and brings out the design decisions.
Maintenance is facilitated and portability is improved.
- The model should be architecture-independent, that is,
it should abstract from architecture specific features.
Making an existing program suitable for efficient execution
on a different machine should not require major modification.
- The model should possess cost measures. These measures should
allow comparing the cost of design decisions.
It should be possible to determine the cost of a program
during the early stages of development, when
certain parts are still underspecified and abstract.
- Finally, the model should be efficiently implementable on a
variety of architectures.
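As an illustration of the cost-measure property, consider the superstep cost formula of the BSP model, a standard example from the literature (it is not discussed further in this chapter). A superstep costs roughly w + h*g + l, where w is the maximum local work, h the maximum number of messages sent or received by any processor, and g and l are machine parameters (per-message gap and barrier latency). The parameter values below are hypothetical and serve only to show how such a measure lets one compare two design decisions before writing any code:

```python
def superstep_cost(w, h, g, l):
    """BSP-style cost of one superstep: local work + communication + barrier."""
    return w + h * g + l

# Hypothetical machine parameters: g = 4 (gap), l = 100 (barrier latency).
# Design A does more local work but communicates little;
# design B does less local work but communicates heavily.
cost_a = superstep_cost(w=1000, h=10, g=4, l=100)   # 1000 + 40 + 100 = 1140
cost_b = superstep_cost(w=600, h=200, g=4, l=100)   # 600 + 800 + 100 = 1500

print(cost_a < cost_b)  # on this machine, design A is the cheaper choice
```

Note that the comparison can flip on a machine with different g and l, which is precisely why cost measures must be consulted together with the target architecture's parameters.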
Of course, these properties are to some degree in tension with one another,
and finding the best balance among them becomes important.
Juergen Dingel
Fri Aug 6 12:11:51 EDT 1999