Functional languages

Functional languages are referentially transparent and have no mutable state. These properties have been exploited in various ways to gain parallelism.
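
As an illustration, the two summations below are pure and share no state, so evaluating one of them on another processor cannot change the result. This is only a minimal sketch in modern Haskell, assuming GHC and the parallel package (Control.Parallel); neither is covered by the references in this list.

  import Control.Parallel (par, pseq)

  -- Two pure, independent computations: with no side effects, they may be
  -- evaluated in either order, or at the same time, with the same result.
  sumSquares :: Integer -> Integer
  sumSquares n = sum [ i * i | i <- [1 .. n] ]

  sumCubes :: Integer -> Integer
  sumCubes n = sum [ i * i * i | i <- [1 .. n] ]

  main :: IO ()
  main =
    let a = sumSquares 200000     -- may be sparked on another processor
        b = sumCubes 200000       -- evaluated by the current thread
    in  a `par` (b `pseq` print (a + b))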

Speculative computation

These languages try to gain parallelism by performing computations in parallel before their results are known to be required.

  1. Multilisp. Lisp with futures. A future is a promise to provide the value of a computation if it is needed; pointers to a future may be freely manipulated. The run-time system decides when futures should create new processes; a sketch of future-style parallelism appears after this list. An introduction to Multilisp [Hal85] and developments in Multilisp, particularly the difficulties of implementing both futures and explicit continuations (call/cc) [Hal90].
  2. Qlisp. Provides qlet and qlambda constructs for spawning parallel processes. The user supplies predicates that determine whether these should create new processes [GG88].
  3. Mul-T. Similar to Multilisp, but code is compiled rather than interpreted; it has improved task creation and management [KHM89].
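
The sketch below approximates Multilisp-style futures in Haskell rather than Lisp. Future, future, and touch are names invented here for illustration, built on GHC's forkIO and MVar rather than on any mechanism described in [Hal85]; a real implementation would also let the run-time system decide whether to spawn a task at all.

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
  import Control.Exception (evaluate)

  -- A future is a placeholder that will eventually hold the value of a
  -- computation started in parallel with its consumer.
  newtype Future a = Future (MVar a)

  -- Start the computation in a new thread and return a placeholder at once.
  future :: IO a -> IO (Future a)
  future act = do
    cell <- newEmptyMVar
    _ <- forkIO (act >>= putMVar cell)
    return (Future cell)

  -- Demand the value: blocks only if the producer has not yet finished.
  touch :: Future a -> IO a
  touch (Future cell) = readMVar cell

  main :: IO ()
  main = do
    f <- future (evaluate (sum [1 .. 1000000 :: Integer]))  -- work started early
    -- other work could go here while the future is being computed
    v <- touch f                                            -- synchronize on demand
    print v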

Graph reduction

A functional program can be represented as a graph of combinators [Tur79]. Reducing this graph has led to efficient schemes for the sequential evaluation of lazy functional programs. Proponents claim that graph reduction is inherently parallel, but there have been few efficient parallel implementations. A small sketch of sequential combinator reduction follows the list below.

  1. MIMD graph reduction [Pey89].
  2. Graphinators and SIMD graph reduction [HM88].
  3. Everything you wanted to know about lazy functional languages but were afraid to ask [Pey87].
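A minimal sketch of combinator reduction in the spirit of [Tur79]. It rewrites a tree of S, K, and I combinators rather than a shared graph, so it illustrates the reduction rules but not the sharing that makes graph reduction efficient; the data type and function names are invented here.

  -- Combinator terms: applications of the S, K, and I combinators.
  data Comb = S | K | I | App Comb Comb
    deriving Show

  -- One step of outermost (normal-order) reduction, if a redex exists.
  step :: Comb -> Maybe Comb
  step (App I x)                 = Just x                          -- I x     -> x
  step (App (App K x) _)         = Just x                          -- K x y   -> x
  step (App (App (App S f) g) x) = Just (App (App f x) (App g x))  -- S f g x -> f x (g x)
  step (App f x) =
    case step f of                  -- otherwise reduce the function part,
      Just f' -> Just (App f' x)    -- then the argument part
      Nothing -> App f <$> step x
  step _ = Nothing

  -- Reduce to normal form by repeatedly contracting the leftmost redex.
  reduce :: Comb -> Comb
  reduce t = maybe t reduce (step t)

  -- Example: S K K behaves like the identity, so S K K (K I) reduces to K I.
  main :: IO ()
  main = print (reduce (App (App (App S K) K) (App K I)))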

Dataflow languages

Dataflow languages are single-assignment languages; the idea is to execute them in a data-driven manner, usually on special-purpose hardware. A sketch of the data-driven execution model follows the list below.

  1. Motivation for the dataflow approach [Arv89].
  2. Id. A dataflow language based on speculative computation, I-structures, and M-structures [Nik90].
  3. Sisal. Discusses the language, the compiler, the runtime system, and performance numbers [FCO90].
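
A hedged sketch of the data-driven execution model using ordinary Haskell threads: single-assignment cells in the spirit of Id's I-structures, with each node firing once its operands have been written. Cell, newCell, writeCell, readCell, and node2 are names invented here, built on GHC's MVar rather than on dataflow hardware.

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
  import Control.Monad (replicateM, void)

  -- A single-assignment cell: written once, read any number of times;
  -- readers block until the value has been written.
  type Cell a = MVar a

  newCell :: IO (Cell a)
  newCell = newEmptyMVar

  writeCell :: Cell a -> a -> IO ()
  writeCell = putMVar            -- a second write would block: single assignment

  readCell :: Cell a -> IO a
  readCell = readMVar

  -- A two-input node fires as soon as both of its operands have arrived.
  node2 :: (a -> b -> c) -> Cell a -> Cell b -> Cell c -> IO ()
  node2 f x y out = void $ forkIO $ do
    a <- readCell x
    b <- readCell y
    writeCell out (f a b)

  -- Dataflow graph for (1 + 2) * (3 + 4): the multiply node fires only after
  -- both adder nodes have produced results, in whatever order they run.
  main :: IO ()
  main = do
    [a, b, c, d, s1, s2, p] <- replicateM 7 newCell
    node2 (+) a b s1
    node2 (+) c d s2
    node2 (*) s1 s2 p
    mapM_ (uncurry writeCell) (zip [a, b, c, d] [1, 2, 3, 4 :: Integer])
    readCell p >>= print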


Guy.Blelloch@cs.cmu.edu, July 1994