The fourth solution uses a pipeline. A pipeline is composed of a sequence of
filters, connected by streams of data. In this case there are four filters:
input, shift, alphabetize, and output. Each filter processes its data, sending
it to the downstream filter. Control is distributed: each filter can run
whenever it has data on which to compute. Data sharing between filters is
strictly limited to that transmitted on pipes [AllenGarlan92].
[Figure: Dataflow Architecture]
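As a concrete illustration, here is a minimal sketch of this pipeline in Python, using generators as filters connected by lazy streams. It assumes the circular-shift, keyword-in-context style task suggested by the shift and alphabetize filters; the function names (read_lines, circular_shift, alphabetize, write_output) are illustrative and not taken from the text above.

    # Pipe-and-filter sketch: each filter is a generator that consumes the
    # stream produced by its upstream neighbor and yields data downstream.
    import sys
    from typing import Iterable, Iterator, List

    def read_lines(source: Iterable[str]) -> Iterator[List[str]]:
        """Input filter: parse each raw line into a list of words."""
        for line in source:
            words = line.split()
            if words:
                yield words

    def circular_shift(lines: Iterable[List[str]]) -> Iterator[str]:
        """Shift filter: emit every circular shift of each line."""
        for words in lines:
            for i in range(len(words)):
                yield " ".join(words[i:] + words[:i])

    def alphabetize(shifts: Iterable[str]) -> Iterator[str]:
        """Alphabetize filter: sort the shifted lines (must buffer its input)."""
        yield from sorted(shifts, key=str.lower)

    def write_output(shifts: Iterable[str], out=sys.stdout) -> None:
        """Output filter: write each line to the downstream sink."""
        for s in shifts:
            print(s, file=out)

    if __name__ == "__main__":
        # Control is distributed: each filter runs when its input is available.
        write_output(alphabetize(circular_shift(read_lines(sys.stdin))))

In this sketch each filter touches only its own input and output streams, and a new filter (for example, one that removes noise words) could be inserted simply by adding another generator to the chain. Note also that alphabetize must buffer its entire input before emitting anything, which foreshadows the space cost discussed below.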
This solution has several nice properties. First, it supports the intuitive flow of processing. Second, it supports reuse, since each filter can function in isolation (provided upstream filters produce data in the form it expects). New functions are easily added to the system by inserting filters at the appropriate point in the processing sequence.
On the other hand, it has a number of drawbacks. First, it is virtually impossible to modify the design to support an interactive system. For example, in order to delete a line, there would have to be some persistent shared storage, violating a basic tenet of this approach. Second, the solution is inefficient in terms of its use of space, since each filter must copy all of the data to its output ports.