CS 15-312: Foundations of Programming Languages
(Spring 2008)

At a glance ...

Mon 14 Jan.
Lecture 1

Welcome and Course Introduction
We outline the course, its goals, and talk about various administrative
issues.

Tue 15 Jan.
Recitation 1

Judgments, Rules, Derivations

Wed 16 Jan.
Lecture 2

Inductive Definitions, Hypothetical Judgments
We present a general method to prove properties of derivable judgments.
We also look at derivations lacking a justification for some of their
judgments and reify this idea as a new form of judgment, the
hypothetical judgment. We examine some elementary properties of these
judgments. Finally, we define transition systems as a special form of
judgment.

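For instance, the judgment "n even" is inductively defined by two rules:
zero is even, and the successor of the successor of an even number is
even. A minimal OCaml sketch (an illustration only, not part of the
course materials; the rule names ev-z and ev-ss are made up here) where
each constructor is a rule and each value is a derivation:

    type nat = Z | S of nat

    (* derivations of the judgment "n even": one constructor per rule *)
    type even_deriv =
      | EvZero                  (* rule ev-z:  Z even                      *)
      | EvSS of even_deriv      (* rule ev-ss: if n even then S (S n) even *)

    (* read off the subject of a derivation *)
    let rec subject (d : even_deriv) : nat =
      match d with
      | EvZero -> Z
      | EvSS d' -> S (S (subject d'))
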
Mon 21 Jan.
Lecture 3

Concrete and Abstract Syntax
We give a judgmental representation of strings that allows expressing
the concrete syntax of a language, and show that the productions of a
context-free grammar are nothing but rules in disguise. Derivations are
then a representation of the intrinsic structure of a sentence and, once
streamlined, yield abstract syntax trees, an efficient notation for
syntactic expressions.

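As a small illustration (OCaml, not from the course materials), each
production of a grammar for arithmetic expressions becomes a constructor
of an abstract syntax tree:

    (* e ::= n | e1 + e2 | e1 * e2 *)
    type exp =
      | Num of int
      | Plus of exp * exp
      | Times of exp * exp

    (* the concrete sentence "1 + 2 * 3" has this intrinsic structure *)
    let example = Plus (Num 1, Times (Num 2, Num 3))
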
Tue 22 Jan.
Recitation 2

Substitution, General Judgments

Wed 23 Jan.
Lecture 4

Binding and Scope
Binding constructs are pervasive in programming (and other) languages.
Because of this, it is convenient to define an infrastructure that
allows us to work with them efficiently. Abstract binding trees do
precisely that at the level of syntax, and reduce to plain abstract
syntax trees when no binders are present. General judgments are a
similar abstraction at the judgment level. We identify α-conversion and
substitution as the fundamental operations associated with binders.
Deductive systems that embed both hypothetical and general judgments
form an eminently flexible representation tool for a large class of
languages.

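A minimal sketch of the two operations on a λ-calculus syntax (OCaml,
illustration only; the counter-based fresh-name scheme is a simplistic
stand-in for a real one):

    type tm = Var of string | Lam of string * tm | App of tm * tm

    let rec free_in x = function
      | Var y -> x = y
      | Lam (y, t) -> x <> y && free_in x t
      | App (t1, t2) -> free_in x t1 || free_in x t2

    let counter = ref 0
    let fresh x = incr counter; x ^ "_" ^ string_of_int !counter

    (* subst s x t computes [s/x]t, α-converting to avoid capture *)
    let rec subst s x = function
      | Var y -> if x = y then s else Var y
      | App (t1, t2) -> App (subst s x t1, subst s x t2)
      | Lam (y, t) when y = x -> Lam (y, t)      (* x is shadowed *)
      | Lam (y, t) when free_in y s ->           (* y would capture: rename *)
          let y' = fresh y in
          Lam (y', subst s x (subst (Var y') y t))
      | Lam (y, t) -> Lam (y, subst s x t)
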
Mon 4 Feb.
Lecture 7

Functional Core Language
We define a new language with just (non-recursive) functions and observe
how the static and dynamic semantics play out. This involves the
introduction of closures to handle scoping issues in an
environment-based evaluation semantics.

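A sketch of such an evaluator (OCaml, illustration only): a closure
packages a function body with the environment current at its definition,
and that environment is the one restored when the closure is applied.

    type exp = Var of string | Lam of string * exp | App of exp * exp

    type value = Closure of string * exp * env
    and env = (string * value) list

    let rec eval (env : env) (e : exp) : value =
      match e with
      | Var x -> List.assoc x env
      | Lam (x, body) -> Closure (x, body, env)  (* capture the environment *)
      | App (e1, e2) ->
          let Closure (x, body, env') = eval env e1 in
          let v = eval env e2 in
          eval ((x, v) :: env') body  (* run body in the closure's environment *)
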
Tue 5 Feb.
Recitation 4

Twelf

Wed 6 Feb.
Lecture 8

Recursion, Iteration, Fixed Points
Obtaining an interesting language with functions and numbers requires
including some form of recursion. We show two approaches. The first,
primitive recursion, includes a recursor that allows us to define all
and only the functions obtained through a predetermined number of
iterations (for-loops), which necessarily yields total functions. The
second, general recursion, supports dynamically bounded iteration
(while-loops) and allows possibly partial functions.

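The contrast in a nutshell (OCaml sketch, illustration only): the
recursor performs exactly n iterations, so everything defined with it
terminates, while general recursion imposes no such bound.

    type nat = Z | S of nat

    (* primitive recursion: natrec iterates step a predetermined number
       of times, so every function defined this way is total *)
    let rec natrec base step = function
      | Z -> base
      | S n -> step n (natrec base step n)

    let plus m n = natrec n (fun _ r -> S r) m

    (* general recursion: the number of iterations is discovered
       dynamically, and termination is no longer guaranteed *)
    let rec collatz n =
      if n <= 1 then 0
      else if n mod 2 = 0 then 1 + collatz (n / 2)
      else 1 + collatz (3 * n + 1)
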
Mon 11 Feb.
Lecture 9

Products and Sums
We now examine languages that support fixed-length tupling constructs.
We begin with 2-tuples (pairs) and 0-tuples (unit), extend these to
generic n-tuples, and then define records as labeled tuples and objects
as self-referential records.
We then consider safe language constructs that allow the same data
structure to represent objects of possibly different types (variants).
The underlying mathematical concept is that of a sum. As for products,
we consider binary, nullary, n-ary and labeled sums. As concrete
examples, we define the types of Booleans and options.

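In OCaml notation (illustration only), the whole menagerie boils down to
the product and sum constructions and their nullary versions:

    type ('a, 'b) prod = Pair of 'a * 'b         (* binary product *)
    type one = Unit                              (* nullary product *)
    type ('a, 'b) sum = InL of 'a | InR of 'b    (* binary sum *)

    (* Booleans: the sum of two copies of unit *)
    type boolean = (one, one) sum        (* true = InL Unit, false = InR Unit *)

    (* options: the sum of unit and the carried type *)
    type 'a opt = (one, 'a) sum          (* none = InL Unit, some v = InR v *)
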
Tue 12 Feb.
Recitation 5

Workshop: Test Anxiety, by Jumana Abdi

Wed 13 Feb.
Lecture 10

Recursive Types, Fixed Points
Natural numbers, lists, and strings have something in common: they each
specify an infinite collection of objects, built in the same regular
way. This suggests a common underlying principle: the recursive type
construction, which allows us to define a type in terms of itself. Once
this principle is exposed, we have a mechanism to define our favorite
recursive types: not just the above, but also trees, objects, recursive
functions, etc. In this lecture, we examine how recursive types work and
how the machinery they rely upon is hidden in practical programming
languages.

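A sketch of the mechanism (OCaml, illustration only): the recursive type
nat = μt. unit + t written with an explicit fold/unfold pair, which
practical languages hide behind datatype declarations.

    type ('a, 'b) sum = InL of 'a | InR of 'b
    type nat = Fold of (unit, nat) sum   (* nat = μt. unit + t *)

    let zero = Fold (InL ())
    let succ n = Fold (InR n)
    let unfold (Fold u) = u              (* expose one unrolling *)

    let rec to_int n =
      match unfold n with
      | InL () -> 0
      | InR m -> 1 + to_int m
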
Mon 18 Feb.
Lecture 11

Dynamic Typing
All the languages we have seen so far are typed, and there are good
reasons for this. We look at two untyped languages and show how things
can get nasty quickly without types. The first one is actually not too
bad: the untyped λ-calculus has just functions and is Turing-complete,
but it is not fun to program in. The second is an untyped version of
Plotkin's PCF: we now need to check at run time that operations make
sense. This essentially builds typechecking into the execution
semantics, with a loss in performance because we need to check that
expressions are valid all the time, in particular each time we recurse.

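A sketch of that run-time checking (OCaml, illustration only): every
value carries a tag, and every primitive operation must inspect it
before doing any work.

    type dyn = Num of int | Fun of (dyn -> dyn)

    let as_num = function Num n -> n | Fun _ -> failwith "not a number"
    let as_fun = function Fun f -> f | Num _ -> failwith "not a function"

    (* each addition pays for two tag checks and one re-tagging *)
    let add v1 v2 = Num (as_num v1 + as_num v2)
    (* each application pays for a tag check *)
    let apply v1 v2 = (as_fun v1) v2
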
Tue 19 Feb.
Recitation 6

Datatypes
In this recitation, we examine how ML datatypes work. We take lists as
an example and show what hides behind ML's user-friendly syntax.

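In a nutshell (OCaml sketch, illustration only): a list datatype is a
recursive sum of products, with [] and :: as syntactic sugar for its two
constructors.

    type 'a mylist = Nil | Cons of 'a * 'a mylist

    (* [1; 2; 3] is sugar for roughly this *)
    let example = Cons (1, Cons (2, Cons (3, Nil)))
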
Wed 20 Feb.
Lecture 12

Type-Directed Optimization
We show how untyped PCF can be compiled into typed PCF extended with
errors and a type representing untyped expressions. This exercise
exposes the actual tagging and checking that goes on in an untyped
implementation and makes sources of inefficiency evident. This embedding
in a typed framework also provides an opportunity to use the type system
to carry out simple but effective optimizations aimed at mitigating the
overhead of tag creation and checking. In many cases, tagging and
checking can be pushed out to function or module boundaries. We then
show that the type of untyped expressions can be simulated directly
within PCF.

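The flavor of such an optimization (OCaml sketch, illustration only,
with the same tagged type as in the Dynamic Typing sketch): instead of
checking tags at every recursive call, a single check at the function
boundary suffices, after which the loop runs untagged.

    type dyn = Num of int | Fun of (dyn -> dyn)

    (* naive: a tag check and a re-tagging on every recursive call *)
    let rec sum_dyn (v : dyn) : dyn =
      match v with
      | Num 0 -> Num 0
      | Num n -> (match sum_dyn (Num (n - 1)) with
                  | Num m -> Num (n + m)
                  | _ -> failwith "type error")
      | _ -> failwith "type error"

    (* optimized: one check at the boundary, untagged recursion inside *)
    let sum_opt (v : dyn) : dyn =
      let rec go n = if n = 0 then 0 else n + go (n - 1) in
      match v with
      | Num n -> Num (go n)
      | _ -> failwith "type error"
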
Mon 10 Mar.
Lecture 16

Exceptions: Control and Data
A prime example of the use of a stack machine is the introduction of
exceptions, which, when raised, have the effect of unwinding the stack
until the nearest handler is reached. The same mechanism provides a
simple way to handle run-time errors.

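A tiny example (OCaml, illustration only): the raise discards all the
stack frames between it and the matching handler.

    exception Bad_input of string

    let safe_div x y =
      if y = 0 then raise (Bad_input "division by zero") else x / y

    let () =
      try Printf.printf "%d\n" (100 + safe_div 10 0)
      with Bad_input msg -> prerr_endline msg
      (* the pending addition (100 + _) is unwound, never performed *)
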
Tue 11 Mar.
Recitation 9

Discussion of the Midterm

Wed 12 Mar.
Lecture 17

Continuations
Having made the control stack explicit in the description of the
semantics of a language, it is a small step to reify it into a construct
of the language itself. This is the very primitive that languages
supporting continuations provide. We describe the formal basis of
continuations and show how they are used both to carry out a computation
and to recover from failure.

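A sketch in continuation-passing style (OCaml, illustration only): the
extra argument k reifies the control stack, and invoking a previously
captured continuation abandons the current one, here to escape from a
product as soon as a zero is found.

    let rec product ns (k : int -> int) (abort : unit -> int) : int =
      match ns with
      | [] -> k 1
      | 0 :: _ -> abort ()   (* jump past all pending multiplications *)
      | n :: rest -> product rest (fun r -> k (n * r)) abort

    let run ns = product ns (fun r -> r) (fun () -> 0)
    (* run [2; 3; 4] = 24;  run [2; 0; 4] = 0 with no multiplications *)
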
Mon 24 - Wed 26 Mar.

No class (Spring Break)

Mon 31 Mar.
Lecture 20

Subtyping and Variance
A subtyping relation at the level of the arguments of a type constructor
propagates to a subtyping relation at the level of the type constructor
itself. The direction of the resulting relation depends on the specific
constructor and on the specific argument. This is called variance:
covariance if the subtyping relation at the constructor level goes in
the same direction as at the argument, contravariance if it flows in the
opposite direction.

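OCaml's structural subtyping makes this observable (sketch, illustration
only): with dog <: animal, the list constructor is covariant while the
argument position of the arrow is contravariant.

    type animal = < name : string >
    type dog = < name : string; bark : string >   (* dog <: animal *)

    let names (xs : animal list) = List.map (fun a -> a#name) xs
    let rex : dog = object method name = "Rex" method bark = "woof" end

    (* covariance: a dog list may be used as an animal list *)
    let _ = names ([rex] :> animal list)

    (* contravariance: a function on animals may be used on dogs *)
    let greet : animal -> string = fun a -> "hi " ^ a#name
    let greet_dog = (greet :> dog -> string)
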
Tue 1 Apr.
Recitation 11

Varieties of Subtyping
Subtyping on base types, such as between integers and reals, is
generally achieved by means of coercions that mediate between the
different internal representations. A natural subtyping relation arises
within product and sum types by omitting (or adding) components.
Trickier still is the subtyping relation with which recursive types are
endowed.

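For instance (OCaml sketch, illustration only), a language supporting
int <: real would insert the representation change implicitly; in OCaml
the coercion is the explicit function float_of_int:

    let average (x : float) (y : float) = (x +. y) /. 2.0

    (* with coercive subtyping, average 3 4.5 would elaborate to this *)
    let _ = average (float_of_int 3) 4.5
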
Wed 2 Apr.
Lecture 21

Storage Effects
So far, repeated evaluations of the same expression have always produced
the same value: expressions are self-contained, and the result depends
only on the text of the expression. A common departure from this model
is found in languages featuring a memory and operations to access and
manipulate it. The basic building block is the reference cell, which
significantly alters the model examined so far, to the point of making
it possible to simulate recursion.

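A classic demonstration (OCaml sketch, illustration only) is Landin's
knot: a reference cell initialized with a placeholder and then
backpatched gives recursion without any recursive construct.

    let fact : int -> int =
      let knot = ref (fun _ -> 0) in               (* placeholder *)
      let f n = if n = 0 then 1 else n * !knot (n - 1) in
      knot := f;                                   (* tie the knot *)
      f

    let () = assert (fact 5 = 120)
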
Mon 7 Apr.
Lecture 22

Monadic Storage Effects
The standard treatment of storage effects, discussed in the last
lecture, is driven by the evaluation semantics, in the sense that it is
impossible to know whether an expression will be effectful by looking at
its type alone. Another approach is to mark the possibility of an effect
in the type of expressions. This is achieved using a typing device
called a monad.

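A minimal sketch of such a monad (OCaml, illustration only), where a
single int cell stands in for the store: the type 'a st announces the
effect, and only computations in the monad can touch the store.

    type 'a st = int -> 'a * int   (* a computation that may use the store *)

    let return x : 'a st = fun s -> (x, s)
    let ( >>= ) (m : 'a st) (f : 'a -> 'b st) : 'b st =
      fun s -> let (x, s') = m s in f x s'

    let get : int st = fun s -> (s, s)
    let set s' : unit st = fun _ -> ((), s')

    (* the effect is visible in the type int st *)
    let bump : int st = get >>= fun n -> set (n + 1) >>= fun () -> get
    let () = assert (fst (bump 0) = 1)
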
Tue 8 Apr.
Recitation 12

Eagerness and Laziness
Up to this point, the distinction between an eager and a lazy construct
has been a matter of how the evaluation rules were chosen to be defined.
Another approach is to lift this distinction to the level of types, so
that programmers can choose the interpretation based on the type of the
particular construct they decide to use. Yet another approach is to
design a type for lazy objects, called a suspension, which can be used
within any construct.

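A suspension type in a nutshell (OCaml sketch, illustration only):
delay packages a computation without running it, and force runs it on
demand.

    type 'a susp = unit -> 'a      (* a suspended computation *)

    let delay (f : unit -> 'a) : 'a susp = f
    let force (s : 'a susp) : 'a = s ()

    (* nothing is printed until the suspension is forced *)
    let s = delay (fun () -> print_endline "evaluating"; 41 + 1)
    let () = Printf.printf "%d\n" (force s)
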
Wed 9 Apr.
Lecture 23

Call-by-Need
An eager semantics evaluates an argument exactly once (even if it is
never used). A lazy semantics re-evaluates it each time it is
encountered. In each case, there is the risk of doing more work than
strictly necessary. Call-by-need optimizes this process by evaluating an
argument the first time it is encountered and remembering the obtained
value for future uses.

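A sketch of the memoization (OCaml, illustration only): the first force
runs the computation and overwrites the thunk with its value, so later
forces find the value immediately.

    type 'a state = Thunk of (unit -> 'a) | Value of 'a
    type 'a susp = { mutable state : 'a state }

    let delay f = { state = Thunk f }
    let force s =
      match s.state with
      | Value v -> v                           (* already computed *)
      | Thunk f -> let v = f () in
                   s.state <- Value v; v       (* remember the value *)

    let s = delay (fun () -> print_endline "computed once"; 6 * 7)
    let () = Printf.printf "%d %d\n" (force s) (force s)
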
Mon 14 Apr.
Lecture 24

Speculative Parallelism
The mechanism supporting extreme laziness in a call-by-need semantics is
readily adapted to enable extreme eagerness in a setting supporting
parallel execution: rather than waiting until the value of an argument
is needed, the idea is to evaluate it speculatively, in parallel with
other such evaluations, so that it is already there the first time it is
needed.

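A sketch using OCaml 5 domains (illustration only; the course does not
prescribe this mechanism): the argument starts evaluating immediately,
in parallel, and is merely awaited if it turns out to be needed.

    let speculate (f : unit -> 'a) : unit -> 'a =
      let d = Domain.spawn f in      (* start evaluating right away *)
      fun () -> Domain.join d        (* wait only when the value is used *)

    let () =
      let arg =
        speculate (fun () ->
          List.fold_left ( + ) 0 (List.init 1_000_000 (fun i -> i))) in
      (* other work can proceed here, in parallel with the speculation *)
      Printf.printf "%d\n" (arg ())
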
Tue 15 Apr.
Recitation 13

Work-Efficient Parallelism
Effect-free programming languages provide plenty of opportunities for
the parallel evaluation of subcomputations. Specifically, unless an
evaluation depends on the result of another, the two can be executed in
parallel. Dependencies and theoretical execution time can be measured by
equipping the evaluation semantics with an abstract notion of cost,
which is then used to calculate useful figures such as the number of
steps in a purely sequential setup (work) and the time in a maximally
parallel environment (depth).

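The two figures compose differently (OCaml sketch, illustration only):
sequential composition adds both work and depth, while parallel
composition adds work but takes the maximum of the depths.

    type cost = { work : int; depth : int }

    let step = { work = 1; depth = 1 }
    let seq a b = { work = a.work + b.work; depth = a.depth + b.depth }
    let par a b = { work = a.work + b.work; depth = max a.depth b.depth }

    (* a balanced parallel sum over n leaves: work O(n), depth O(log n) *)
    let rec psum n =
      if n <= 1 then step
      else seq step (par (psum (n / 2)) (psum (n / 2)))
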
Wed 16 Apr.
Lecture 25

Resource-Bound Parallelism
In practice, the number of parallel evaluations is limited by the number
of available processing units. The expected execution time depends on
the underlying architecture, but can be estimated accurately using
Brent's Theorem in common instances.

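In its common form, Brent's Theorem bounds the running time T_p of a
computation with work W and depth D on p processors by

    T_p  <=  W / p  +  D

so the two figures from the cost semantics suffice to estimate
performance on an actual machine.
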
Mon 21 Apr.
Lecture 26

Concurrency
Expressions so far have been self-contained: they were evaluated from an
initial state to a final value without interaction with the external
world. The possibility of interaction gives rise to the notion of a
process, with synchronization actions causing run-time events. A natural
next step consists in considering the interactions among several
processes (one of them possibly modeling the external world), hence
capturing distributed systems.

Tue 22 Apr.
Recitation 14

Process Calculi
Allowing actions to send or receive data promotes synchronization into a
communication mechanism. This becomes particularly powerful when
communication channels are themselves viewed as data and a mechanism is
provided to create them on the fly: this gives rise to the notion of a
private channel. The semantics of communicating processes is open-ended
in the sense that several alternative models coexist. Among them is the
decision on whether a sending process should wait until its message has
been received, or proceed with its computation.

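A flavor of this style using OCaml's CML-inspired Event library (sketch,
illustration only; it requires the threads library): here the send is
synchronous, so the sender blocks until its message is received.

    let () =
      let ch = Event.new_channel () in
      let sender = Thread.create (fun () ->
        Event.sync (Event.send ch "hello")) () in  (* blocks until received *)
      print_endline (Event.sync (Event.receive ch));
      Thread.join sender
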
Wed 23 Apr.
Review

Final Review

Mon 5 May.
9:00-12:00 (C008)

Final

Iliano Cervesato