More recently, I have been thinking about libraries to incorporate software synthesis into various programs, including installations, games, and embedded systems. There is a real mismatch between libraries in procedural implementation languages (e.g., C) and the world of signal processing and unit generators. This has led to the development of UGG (see below) and perhaps, eventually, the incorporation of UGG into an O2-based interface.
ABSTRACT: Unit generators are primary building blocks of music audio software. Unit generators aim to be both efficient and flexible, but these goals are often in opposition. As designers trade off efficiency against flexibility, many designs emerge, leading to a multitude of incompatible implementations. Thus, there are many incompatible unit generator libraries, each representing substantial effort. The present work suggests that unit generators can be written in a functional style using a conventional language with operator overloading, and an easily modifiable “back end” can generate efficient code. A prototype of this method, the Unit Generator Generator (UGG) system, can be tailored quickly to target many unit generator designs. Computations can be shared across unit generators by defining simple functions, leading to an even more compact and expressive notation.
[Adobe Acrobat (PDF) Version]
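The central idea of the paper, writing unit generators as ordinary expressions and letting a small, replaceable back end emit the efficient inner loop, can be suggested with a toy sketch. The sketch below is not the UGG system; all class and function names are invented for illustration. Overloaded operators record a dataflow graph, and a trivial back end walks the graph to print a C-like sample loop.

# Illustrative sketch only; not the actual UGG implementation.
# Expression nodes overload '+' and '*' to record a dataflow graph;
# a tiny back end then walks the graph and prints a C-like inner loop.

class Expr:
    def __add__(self, other):  return Op('+', self, wrap(other))
    def __mul__(self, other):  return Op('*', self, wrap(other))
    __radd__ = __add__
    __rmul__ = __mul__

class Input(Expr):
    def __init__(self, name): self.name = name
    def gen(self): return self.name + "[i]"

class Const(Expr):
    def __init__(self, value): self.value = value
    def gen(self): return repr(self.value)

class Op(Expr):
    def __init__(self, op, a, b): self.op, self.a, self.b = op, a, b
    def gen(self): return "(%s %s %s)" % (self.a.gen(), self.op, self.b.gen())

def wrap(x):
    return x if isinstance(x, Expr) else Const(x)

def emit_ugen(name, out_expr, inputs):
    """Print a C-like sample loop for the expression graph."""
    args = ", ".join("float *" + i.name for i in inputs)
    print("void %s(float *out, %s, int n) {" % (name, args))
    print("    for (int i = 0; i < n; i++)")
    print("        out[i] = %s;" % out_expr.gen())
    print("}")

# A scaled two-input mixer written as an ordinary arithmetic expression:
x, y = Input("x"), Input("y")
emit_ugen("mix2", 0.5 * x + 0.5 * y, [x, y])

Retargeting such a design, for example to a different block size or calling convention, would mean changing only the small emit function, which is the kind of flexibility the abstract argues for.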
Dannenberg and McAvinney, “A Functional Approach to Real-Time Control,” in Proceedings of the International Computer Music Conference, Paris, France, October 19-23, 1984. San Francisco, CA: Computer Music Association, June 1985. pp. 5-16.
The first paper on Arctic. Much of this was used in the CMJ article that followed.
ABSTRACT: Traditional procedural programming languages make real-time programming difficult because the programmer must prevent the procedural nature of his programs from interfering with the real-time response that he wants to implement. To solve this problem, we propose a functional programming language that allows the programmer to “stand outside” the time domain, thus avoiding confusion between the sequential nature of program execution and the time-varying nature of program output. An abstract model for real-time control is presented which is based on prototypes and instances. A prototype is a higher-order function that maps a starting time and duration scale factor into a function of time, called an instance. Operations are provided to manipulate and combine prototypes to describe complex responses to system inputs. Using the model as a basis, a language for real-time control, named Arctic, has been designed. The objective of Arctic is to lower the conceptual barrier between the desired system response and the representation of that response.
[Adobe Acrobat (PDF) Version]
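The prototype/instance model described in the abstract above can be paraphrased roughly as follows. This is a reading aid only, written in Python rather than Arctic, and the combinators shown (shift, add) are invented stand-ins for Arctic's actual operations: a prototype maps a starting time and a duration scale factor to an instance, which is an ordinary function of time.

# A loose paraphrase of the prototype/instance idea, not Arctic itself.
# A "prototype" maps (start_time, stretch) -> instance, where an
# instance is an ordinary function of time.

def ramp(start, stretch):
    """Prototype: a unit ramp from 0 to 1 over one (stretched) time unit."""
    def instance(t):
        u = (t - start) / stretch          # local (unwarped) time
        return 0.0 if u < 0 else 1.0 if u > 1 else u
    return instance

def shift(proto, delay):
    """Combine prototypes: delay a prototype's start time."""
    return lambda start, stretch: proto(start + delay * stretch, stretch)

def add(p1, p2):
    """Combine prototypes: sum of two behaviors, point by point."""
    def combined(start, stretch):
        i1, i2 = p1(start, stretch), p2(start, stretch)
        return lambda t: i1(t) + i2(t)
    return combined

# Instantiate at time 0 with a 2x stretch and sample the result:
behavior = add(ramp, shift(ramp, 1.0))(0.0, 2.0)
print([round(behavior(t), 2) for t in (0.0, 1.0, 2.0, 3.0, 4.0)])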
Dannenberg, “Arctic: A Functional Language for Real-Time Control,” in Conference Record of the 1984 ACM Symposium on LISP and Functional Programming, (August 1984), pp. 96-103.
Early work aimed at a CS audience.
[Adobe Acrobat (PDF) Version]
ABSTRACT: Arctic is a language for the specification and implementation of real-time control systems. Unlike more conventional languages for real-time control, which emphasize concurrency, Arctic is a stateless language in which the relationships between system inputs, outputs and intermediate terms are expressed as operations on time-varying functions. Arctic allows discrete events or conditions to invoke and modify responses asynchronously, but because programs have no state, synchronization problems are greatly simplified. Furthermore, Arctic programs are non-sequential, and the timing of system responses is notated explicitly. This eliminates the need for the programmer to be concerned with the execution sequence, which accounts for much of the difficulty in real-time programming.
Dannenberg, “Arctic: Functional Programming For Real-Time Systems,” in Proceedings of the 19th Hawaii International Conference on System Sciences, (January 1986), pp. 216-226.
Another paper on Arctic.
Dannenberg and Rubine, “Arctic: A Functional Language for Real-Time Control,” IEEE Software, (January 1986), pp. 70-71.
A short presentation on Arctic, part of a feature on “multi-paradigm languages.”
[Acrobat (PDF) Version]
ABSTRACT: Arctic is a language for the specification and implementation of real-time control systems. Unlike more conventional languages for real-time control, which emphasize concurrency, Arctic is a stateless language in which the relationships between system inputs, outputs and intermediate terms are expressed as operations on time-varying functions. Arctic allows discrete events or conditions to invoke and modify responses asynchronously, but because programs have no state, synchronization problems are greatly simplified. Furthermore, Arctic programs are non-sequential, and the timing of system responses is notated explicitly. This eliminates the need for the programmer to be concerned with the execution sequence, which accounts for much of the difficulty in real-time programming.
Rubine and Dannenberg, “Arctic Programmer's Manual and Tutorial,” CMU Tech Report CMU-CS-87-110, 1987.
A real language definition, including a description of our non-real-time Arctic interpreter.
Dannenberg, McAvinney, and Rubine, “Arctic: A Functional Approach to Real-Time Systems,” Computer Music Journal, 10(4) (Winter 1986), pp. 67-78.
Probably the best article on Arctic.
[Acrobat (PDF) Version]
ABSTRACT: In the past, real-time control via digital computer has been achieved more through ad hoc techniques than through a formal theory. Languages for real-time control have emphasized concurrency, access to hardware input/output (I/O) devices, interrupts, and mechanisms for scheduling tasks, rather than taking a high-level problem-oriented approach in which implementation details are hidden. In this paper, we present an alternative approach to real-time control that enables the programmer to express the real-time response of a system in a declarative fashion rather than an imperative or procedural one.
Dannenberg, “The Canon Score Language,” Computer Music Journal, 13(1) (Spring 1989), pp. 47-56.
The only article on Canon.
ABSTRACT: Canon is both a notation for musical scores and a programming language. Canon offers a combination of declarative style and a powerful abstraction capability which allows a very high-level notation for sequences of musical events and structures. Transformations are operators that can adjust common parameters such as loudness or duration. Transformations can be nested and time-varying, and their use avoids the problem of having large numbers of explicit parameters. Behavioral abstraction, the concept of making behavior an arbitrary function of the environment, is supported by Canon and extends the usefulness of transformations. A non-real-time implementation of Canon is based on Lisp and produces scores that control MIDI synthesizers.
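A loose sketch of nested transformations and behavioral abstraction, in Python rather than Canon, with invented names: behaviors are functions of an environment, and transformations simply run a behavior in a modified environment, so they compose and nest without adding explicit parameters to every behavior.

# Rough illustration of nested transformations and behavioral
# abstraction; the names below are invented, not Canon's.

def note(pitch, dur=1.0):
    """A behavior: given an environment, yield one score event."""
    def behavior(env):
        return [{"time": env["time"],
                 "pitch": pitch + env["transpose"],
                 "dur": dur * env["stretch"],
                 "loud": env["loud"]}]
    return behavior

def seq(*behaviors):
    """Play behaviors one after another in environment time."""
    def behavior(env):
        events, t = [], env["time"]
        for b in behaviors:
            evs = b(dict(env, time=t))
            events += evs
            t = max(e["time"] + e["dur"] for e in evs)
        return events
    return behavior

def louder(amount, b):
    """Transformation: adjust loudness for everything inside b."""
    return lambda env: b(dict(env, loud=env["loud"] + amount))

def stretch(factor, b):
    """Transformation: scale durations (and hence spacing) inside b."""
    return lambda env: b(dict(env, stretch=env["stretch"] * factor))

env = {"time": 0.0, "stretch": 1.0, "transpose": 0, "loud": 64}
score = louder(10, stretch(2.0, seq(note(60), note(62), note(64))))
for event in score(env):
    print(event)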
Dannenberg and Fraley, “Fugue: Composition and Sound Synthesis with Lazy Evaluation and Behavioural Abstraction,” in 1989 International Computer Music Conference, Computer Music Association, (October 1989), pp. 76-79.
The first paper on Fugue. Not as complete as the IEEE papers.
ABSTRACT: Fugue is an interactive language for music composition and synthesis. The goal of Fugue is to simplify the task of generating and manipulating sound samples while offering greater power and flexibility than other software synthesis languages. In contrast to other computer music systems, sounds in Fugue are abstract, immutable objects, and a set of functions are provided to create and manipulate these sound objects. Fugue directly supports behavioral abstraction whereby scores can be transformed using high-level abstract operations. Fugue is embedded in a Lisp environment, which provides great flexibility in manipulating scores and in performing other related symbolic processing. The semantics of Fugue are derived from Arctic and Canon which have been used for composition, research, and education at Carnegie Mellon University for several years.
Dannenberg, Fraley, and Velikonja, “Fugue: A Functional Language for Sound Synthesis,” Computer, 24(7) (1991), pp. 36-41.
ABSTRACT: A description is given of Fugue, a language that lets composers express signal processing algorithms for sound synthesis, musical scores, and higher level musical procedures all in one language. Fugue provides functions to create and manipulate sounds as abstract, immutable objects. The interactive language supports behavioral abstraction, so composers can manage complex musical structures. Fugue's capabilities and an example of a score it generated are examined. The implementation of Fugue in a combination of C and XLisp, to run on Unix workstations, is discussed. An example of how Fugue's implementation of lazy evaluation works is given. Future extensions and applications of Fugue are indicated.
[Acrobat (PDF) Version]
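One way to picture the “sounds as abstract, immutable objects” idea from the abstract above is the toy sketch below (Python, with invented names; not Fugue's actual API): constructors and combining functions always return new sound values and never modify their arguments, so a sound can be reused freely in several places.

# Sketch of the "sounds as immutable values" idea (invented names,
# not Fugue's actual API): operations build new sounds and never
# modify their arguments.

import math

class Sound:
    def __init__(self, samples):
        self._samples = tuple(samples)      # immutable sample storage

    def samples(self):
        return self._samples

def sine(freq, dur, srate=8000.0):
    n = int(dur * srate)
    return Sound(math.sin(2 * math.pi * freq * i / srate) for i in range(n))

def seq(a, b):
    """New sound: a followed by b; neither argument is changed."""
    return Sound(a.samples() + b.samples())

def mix(a, b):
    """New sound: pointwise sum, padding the shorter sound with zeros."""
    sa, sb = a.samples(), b.samples()
    n = max(len(sa), len(sb))
    sa += (0.0,) * (n - len(sa))
    sb += (0.0,) * (n - len(sb))
    return Sound(x + y for x, y in zip(sa, sb))

a = sine(440, 0.01)
b = sine(660, 0.02)
c = mix(seq(a, b), a)       # a and b remain reusable, unmodified values
print(len(a.samples()), len(b.samples()), len(c.samples()))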
Dannenberg, Fraley, and Velikonja, “Fugue: A Functional Language for Sound Synthesis,” in Readings in Computer-Generated Music, Dennis Baggi, ed., IEEE Press, 1992.
A revised version of the IEEE Computer article. The book chapter is somewhat improved, but also longer.
Dannenberg and Mercer, “Real-Time Software Synthesis on Superscalar Architectures,” in Proceedings of the 1992 International Computer Music Conference, San Jose, CA, October 1992. International Computer Music Association, October 1992. pp. 174-177.
The first article on Nyquist, which was intended to become a real-time language, hence the misleading title. Please read the Computer Music Journal articles (see below) for more up-to-date and complete information.
[Adobe Acrobat (PDF) Version] [Postscript Version]
ABSTRACT: Advances in processor technology will make it possible to use general-purpose personal computers as real-time signal processors. This will enable highly-integrated “all-software” systems for music processing. To this end, the performance of a present generation superscalar processor running synthesis software is measured and analyzed. A real-time reimplementation of Fugue, now called Nyquist, takes advantage of the superscalar synthesis approach, integrating symbolic and signal processing. Performance of Nyquist is compared to Csound.
Dannenberg, “The Implementation of Nyquist, A Sound Synthesis Language,” in Proceedings of the 1993 International Computer Music Conference, Tokyo, Japan, September 1993. International Computer Music Association, 1993. pp. 168-171.
The 1992 ICMC article was only given 4 pages, so we left out the implementation details. They're (partially) covered here. Please read the Computer Music Journal articles (see below) for more up-to-date and complete information.
ABSTRACT: Nyquist is a functional language for sound synthesis with an efficient implementation. It is shown how various language features lead to a rather elaborate representation for signals, consisting of a sharable linked list of sample blocks terminated by a suspended computation. The representation supports infinite sounds, allows sound computations to be instantiated dynamically, and dynamically optimizes the sound computation.
[Adobe Acrobat (PDF) Version] [Postscript Version]
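The representation described in the abstract, a sharable linked list of sample blocks whose tail is a suspended computation, can be modeled in miniature as follows. This is a toy Python model with invented names, not the actual C implementation; it shows only how blocks are computed on demand, memoized, and shared by multiple readers, which is also what makes infinite sounds representable.

# Toy model of the representation described above: a linked list of
# sample blocks whose tail is a suspension (a thunk that computes the
# next block on demand).  Readers can share the same list, and each
# block is computed at most once.

BLOCK_LEN = 4

class Block:
    def __init__(self, samples, rest):
        self.samples = samples      # a list of BLOCK_LEN samples
        self._rest = rest           # thunk producing the next Block, or None

    def rest(self):
        """Force the suspension once; later callers reuse the result."""
        if callable(self._rest):
            self._rest = self._rest()
        return self._rest

def ramp_signal(start=0):
    """An 'infinite' ramp: each block is computed only when demanded."""
    samples = [float(start + i) for i in range(BLOCK_LEN)]
    return Block(samples, lambda: ramp_signal(start + BLOCK_LEN))

def take(signal, n):
    """Read n samples from a signal, forcing blocks as needed."""
    out = []
    while signal is not None and len(out) < n:
        out.extend(signal.samples[:n - len(out)])
        signal = signal.rest()
    return out

sig = ramp_signal()
print(take(sig, 6))     # forces two blocks
print(take(sig, 10))    # re-reads the shared, already-computed blocks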
Dannenberg, “Abstract Time Warping of Compound Events and Signals,” in Proceedings of the 1994 International Computer Music Conference, Aarhus and Aalborg, Denmark, September 1994. International Computer Music Association, 1994. pp. 251-254.
One of the features of Nyquist (which originated with Canon) is the ability to provide “time warp” functions. This paper gives the details. Please read the Computer Music Journal articles (see below) for more up-to-date and complete information.
ABSTRACT: Functions of time are often used to represent continuous parameters and the passage of musical time (tempo). A new approach generalizes previous work in three ways. First, common temporal operations of stretching and shifting are special cases of a new general time-warping operation. Second, these operations are “abstract.” Instead of operating directly on signals or events, they operate on abstract behaviors that interpret the operations at an appropriate structural level. Third, time warping can be applied to both discrete events and continuous signals.
Dannenberg, “Machine Tongues XIX: Nyquist, a Language for Composition and Sound Synthesis,” Computer Music Journal, 21(3) (Fall 1997), pp. 50-60.
This is the best overview of Nyquist. See also the companion articles in the same issue of CMJ.
ABSTRACT: Nyquist is an interactive language for music composition and sound synthesis. Features of Nyquist include: (1) a full interactive environment based on Lisp, (2) no distinction between the “score” and the “orchestra”, (3) support for behavioral abstraction, (4) the ability to work in terms of both actual and perceptual start and stop times, and (5) a time- and memory-efficient implementation.
Dannenberg, “Abstract Time Warping of Compound Events and Signals,” Computer Music Journal, 21(3) (Fall 1997), pp. 61-70.
This article is more complete than the ICMC version described above. See also the other articles on Nyquist in this same issue of CMJ.
ABSTRACT: Functions of time are often used to represent continuous parameters and the passage of musical time or tempo. The work described in this article generalizes previous work in three ways. First, common temporal operations of stretching and shifting are shown to be special cases of a new general time-warping operation. Second, we show a language in which these operations are “abstract.” Instead of operating directly on signals or events, time warps operate on abstract behaviors that interpret warping at an appropriate structural level. Third, we show how time warping can be applied to both discrete events and continuous signals. These new generalizations are implemented in Nyquist, and we describe the implementation. The general principles presented here should apply to many composition, control, and synthesis systems.
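A rough sketch of the idea (hypothetical Python, not Nyquist's implementation): a warp is a function from score time to real time, and the same warp can be interpreted at the event level, by warping onsets and durations, or at the signal level, by viewing a control defined in score time as a function of real time.

# Hypothetical sketch of abstract time warping, not Nyquist's code.
# A "warp" maps score time to real time.  The same warp is interpreted
# at the event level (warp onsets and durations) or at the signal
# level (a control in score time becomes a function of real time).

def tempo_warp(score_time):
    """Example warp: play the first 2 beats at half speed, then a tempo."""
    return 2.0 * score_time if score_time < 2.0 else score_time + 2.0

def inverse_tempo_warp(real_time):
    return real_time / 2.0 if real_time < 4.0 else real_time - 2.0

def warp_events(events, warp):
    """Discrete interpretation: warp each event's onset and duration."""
    return [(warp(t), warp(t + dur) - warp(t), pitch)
            for (t, dur, pitch) in events]

def warp_signal(control, inverse_warp):
    """Continuous interpretation: view a score-time control as a
    function of real time."""
    return lambda real_time: control(inverse_warp(real_time))

score = [(0.0, 1.0, 60), (1.0, 1.0, 62), (2.0, 1.0, 64)]
print(warp_events(score, tempo_warp))

vibrato_depth = lambda score_time: 0.1 * score_time    # ramps up over the score
depth_rt = warp_signal(vibrato_depth, inverse_tempo_warp)
print([round(depth_rt(t), 2) for t in (0.0, 2.0, 4.0, 5.0)])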
Dannenberg, “The Implementation of Nyquist, a Sound Synthesis Language,” Computer Music Journal, 21(3) (Fall 1997), pp. 71-82.
This paper is more complete than the ICMC version described above. See also the other articles on Nyquist in this same issue of CMJ.
ABSTRACT: Nyquist is an advanced functional language for sound synthesis and composition. One of the goals of Nyquist is to achieve efficiency comparable to more conventional “Music N” synthesis languages such as Csound. Efficiency can be measured in space and time, and both are important: digital audio takes enormous amounts of memory, and sound synthesis programs are computationally intensive. The efficiency requirement interacts with various language features, leading to a rather elaborate representation for signals. I will show how this representation supports Nyquist semantics in a space- and time-efficient manner. Among the features of the representation are incremental computation, dynamic storage allocation and reclamation, dynamic instantiation of new signals, representation of infinite sounds, and support for multi-channel, multi-sample-rate signals.
Dannenberg and Thompson, “Real-Time Software Synthesis on Superscalar Architectures,” Computer Music Journal, 21(3) (Fall 1997), pp. 83-94.
This paper is more complete than the ICMC version described above. See also the other articles on Nyquist in this same issue of CMJ.
ABSTRACT: Advances in processor technology are making it possible to use general-purpose personal computers as real-time signal processors. This enables highly integrated “all-software” systems for real-time music processing. Much has been speculated about the behavior of software synthesizers, but there has been relatively little actual experimentation and measurement to verify or refute the “folklore” that has appeared. In the hopes of better understanding this important future technology, we have performed extensive measurements on several types of processors. We report our findings here and discuss the implications for software-synthesis systems.
Dannenberg, “The Nyquist Composition Environment: Supporting Textual Programming with a Task-Oriented User Interface,” in Proceedings of the 2008 International Computer Music Conference. San Francisco: The International Computer Music Association, (August 2008).
ABSTRACT: Nyquist is a programming language for sound synthesis and music composition. Nyquist has evolved from a text-only programming language to include an integrated development environment (IDE) that adds graphical support for many tasks. Nyquist is also hosted by Audacity, a widely used audio editor that can invoke Nyquist functions written in the form of scripted plug-ins. This article shows by example how task-oriented interface design can augment a text-based language.
Dannenberg, “Expressing Temporal Behavior Declaratively,” CMU Computer Science, A 25th Anniversary Commemorative, Richard F. Rashid, ed., ACM Press, 1991, pp. 47-68.
An overview of this work for a CS audience. Unfortunately, the published chapter is full of machine-generated typos and Addison Wesley refused to reprint the book. Readable versions are available from CMU CSD and online through the following link: