CS 15-312: Foundations of Programming Languages
(Spring 2009)

Schedule of Classes

At a glance ...


Mon 12 Jan
Lecture 1
Welcome and Course Introduction
We outline the course, its goals, and talk about various administrative issues.

Judgments, Rules, Derivations
We introduce some fundamental mechanisms for studying programming languages: judgments describe potential facts in the domain of discourse, rules allow deriving new judgments from judgments that are known to hold, and derivations are full justifications of judgments.
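As a tiny concrete illustration of these notions (a Python sketch of my own, not the course's notation): the judgment "n even" can be defined by an axiom and one rule, and a derivation is the tree of rule instances justifying the judgment.

```python
# A judgment "n even" defined by two rules:
#   ------- (Ev-Z)            n even
#   0 even               ------------------ (Ev-SS)
#                        succ(succ(n)) even
# A derivation is a tree of rule instances; here, a nested tuple.

def derive_even(n):
    """Return a derivation of 'n even', or None if none exists."""
    if n == 0:
        return ("Ev-Z",)                 # axiom: no premises
    if n >= 2:
        sub = derive_even(n - 2)         # premise: (n-2) even
        if sub is not None:
            return ("Ev-SS", sub)
    return None                          # judgment not derivable

print(derive_even(4))  # ('Ev-SS', ('Ev-SS', ('Ev-Z',)))
print(derive_even(3))  # None
```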
Wed 14 Jan
Lecture 2
Inductive Definitions, Hypothetical Judgments
We present a general method to prove properties of derivable judgments. We also look at derivations lacking a justification for some of their judgments and reify this gap as a new form of judgment, the hypothetical judgment. We examine some elementary properties of these judgments. Finally, we define transition systems as a special form of judgment.
Thu 15 Jan
Recitation 1
Elements of LaTeX

Mon 19 Jan
Lecture 3
Concrete and Abstract Syntax
We give a judgmental representation of strings that allows expressing the concrete syntax of a language, and show that the productions of a context-free grammar are nothing but rules in disguise. Derivations are then a representation of the intrinsic structure of a sentence and, once streamlined, yield abstract syntax trees, an efficient notation for syntactic expressions.
  • Key Concepts: Grammars, Abstract Syntax Trees, Parsing
  • Readings:
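A minimal sketch of the grammar-to-AST connection (in Python; the grammar and names are this example's, not the course's): each production becomes a parsing rule, and a successful parse yields an abstract syntax tree stripped of punctuation.

```python
# Concrete syntax of additions over digits, per the grammar
#   E ::= D | D "+" E        D ::= "0" | ... | "9"
# Each production becomes a parsing rule; the derivation it builds,
# with the "+" tokens dropped, is the abstract syntax tree.

def parse(s, i=0):
    """Parse E starting at position i; return (ast, next position)."""
    left = ("num", int(s[i]))          # rule D: a single digit
    i += 1
    if i < len(s) and s[i] == "+":     # rule D "+" E (right-associative)
        right, i = parse(s, i + 1)
        return ("plus", left, right), i
    return left, i

ast, _ = parse("1+2+3")
print(ast)  # ('plus', ('num', 1), ('plus', ('num', 2), ('num', 3)))
```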
Wed 21 Jan
Lecture 4
Binding and Scope
Binding constructs are pervasive in programming (and other) languages. Because of this, it is convenient to define an infrastructure that allows us to work with them efficiently. Abstract binding trees do precisely that at the level of syntax, and reduce to abstract syntax trees when no binders are present. General judgments are a similar abstraction at the judgment level. We identify α-conversion and substitution as the fundamental operations associated with binders. Deductive systems that embed both hypothetical and general judgments form an eminently flexible representation tool for a large class of languages.
  • Key Concepts: Names and Binders, Primitive Operations on Names, General Judgments, Generalized Rules, Generalized Inductive Definitions
  • Readings:
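A sketch of capture-avoiding substitution on λ-terms (Python; the tuple representation of terms and the fresh-name scheme are assumptions of this example). The point is the α-conversion step: a binder that would capture a free variable of the substituted term is renamed first.

```python
import itertools
fresh = (f"x{i}" for i in itertools.count())   # supply of fresh names

def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}            # ("lam", x, body)

def subst(t, x, e):
    """Capture-avoiding substitution [e/x]t."""
    tag = t[0]
    if tag == "var":
        return e if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, e), subst(t[2], x, e))
    y, body = t[1], t[2]                       # lam case
    if y == x:
        return t                               # x is shadowed; stop
    if y in free_vars(e):                      # would capture: α-convert
        z = next(fresh)
        body = subst(body, y, ("var", z))
        y = z
    return ("lam", y, subst(body, x, e))

# [y/x](λy. x): the binder y must be renamed before substituting.
t = subst(("lam", "y", ("var", "x")), "x", ("var", "y"))
print(t)  # a λ with a fresh binder whose body is the variable y
```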
Thu 22 Jan
Recitation 2
Substitution, General Judgments
  • Key Concepts: α-conversion, Substitution, Structural properties
  • Readings:

Mon 26 Jan
Lecture 5
Static and Dynamic Semantics
We define a simple spreadsheet-like language but, unlike spreadsheets, introduce types to classify atomic objects. Typing rules are introduced next to classify expressions: they define the language's static semantics. Execution rules describe how to evaluate expressions and constitute its dynamic semantics. We show several approaches to defining the dynamic semantics of a language, and compare them.
  • Key Concepts: Types, Static Semantics, Dynamic Semantics, Type-Free Execution, Transition vs. Evaluation Semantics
  • Readings:
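To make the static/dynamic distinction concrete, here is a sketch of a tiny language of numbers and strings with a typechecker and a big-step evaluator side by side (a Python illustration under assumptions of my own, not the course's formal notation).

```python
# Expressions: number and string literals, plus, cat (concatenation), len.
# check = static semantics; evaluate = dynamic semantics.

def check(e):
    """Assign each expression a type, or fail."""
    tag = e[0]
    if tag == "num": return "num"
    if tag == "str": return "str"
    if tag == "plus":
        assert check(e[1]) == check(e[2]) == "num", "plus needs numbers"
        return "num"
    if tag == "cat":
        assert check(e[1]) == check(e[2]) == "str", "cat needs strings"
        return "str"
    if tag == "len":
        assert check(e[1]) == "str", "len needs a string"
        return "num"

def evaluate(e):
    """Big-step evaluation, assuming e already typechecks."""
    tag = e[0]
    if tag in ("num", "str"): return e[1]
    if tag == "plus": return evaluate(e[1]) + evaluate(e[2])
    if tag == "cat":  return evaluate(e[1]) + evaluate(e[2])
    if tag == "len":  return len(evaluate(e[1]))

e = ("plus", ("num", 1), ("len", ("cat", ("str", "ab"), ("str", "c"))))
print(check(e), evaluate(e))  # num 4
```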
Wed 28 Jan
Lecture 6
Type Safety
How do we know that the rules we have defined make sense? We prove it mathematically. The key results are type preservation (types provide a track from which execution can never get off) and progress (execution always knows what to do next). We trace the very possibility of proving these theorems back to the interaction between two kinds of rules, introduction and elimination forms, from which we extract a general design principle. We conduct these proofs in the transition semantics and discuss issues with the evaluation semantics. We conclude by examining dynamic errors.
  • Key Concepts: Preservation and Progress Theorems, Introduction and Elimination Forms, Errors
  • Readings:
Thu 29 Jan
Recitation 3
Twelf

Mon 2 Feb
Lecture 7
Functional Core Language
We define a new language with just (non-recursive) functions and observe how the static and dynamic semantics play out. This involves the introduction of closures to handle scoping issues in an environment-based evaluation semantics.
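A sketch of an environment-based evaluator with closures (Python; the representation choices are this example's): a function value packages its body together with the environment at its definition site, so lexical scoping survives even when the function is applied elsewhere.

```python
def evaluate(e, env):
    tag = e[0]
    if tag == "var": return env[e[1]]
    if tag == "num": return e[1]
    if tag == "lam":                           # ("lam", x, body)
        return ("closure", e[1], e[2], env)    # capture the environment
    if tag == "app":                           # ("app", fn, arg)
        _, x, body, defenv = evaluate(e[1], env)
        arg = evaluate(e[2], env)
        return evaluate(body, {**defenv, x: arg})  # run in defenv, not env

# (λx. λy. x) 1 2 — the inner closure remembers x = 1:
e = ("app", ("app", ("lam", "x", ("lam", "y", ("var", "x"))),
             ("num", 1)), ("num", 2))
print(evaluate(e, {}))  # 1
```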
Wed 4 Feb
Lecture 8
Recursion, Iteration, Fixed Points
Obtaining an interesting language with functions and numbers requires including some form of recursion. We show two approaches: the first, primitive recursion, includes a recursor that allows defining all and only the functions that can be obtained through a predetermined number of iterations (for-loops), and therefore yields only total functions; the second, general recursion, supports dynamically bounded iteration (while-loops) and allows possibly partial functions.
  • Key Concepts: Primitive Recursion, Gödel's System T, General Recursion, Plotkin's PCF
  • Readings:
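A sketch of System T's recursor transcribed as a Python function (the helper names are this example's): rec iterates its step function exactly n times starting from a base value, which is why every definable function is total.

```python
def rec(n, base, step):
    """natrec: computes step(n-1, ... step(0, base) ...)."""
    acc = base
    for i in range(n):          # a predetermined number of iterations
        acc = step(i, acc)
    return acc

# Addition and factorial by primitive recursion:
add  = lambda m, n: rec(m, n, lambda _, acc: acc + 1)
fact = lambda n: rec(n, 1, lambda i, acc: (i + 1) * acc)
print(add(3, 4), fact(5))  # 7 120
```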
Thu 5 Feb
Recitation 4
Products and sums (student presentation)
We now examine languages that support fixed-length tupling constructs. We begin with 2-tuples (pairs) and 0-tuples (unit), extend these to generic tuples, and then define records as labeled tuples and objects as self-referential records. We then consider safe language constructs that allow the same data structure to represent objects of possibly different types (variants). The underlying mathematical concept is that of a sum. As with products, we consider binary, nullary, n-ary and labeled sums. As concrete examples, we define the types of Booleans and options.
  • Key Concepts: Pairs, Unit Type, Tuples, Records, Objects; Binary Sums, Void Type, Labeled Sums
  • Readings:

Mon 9 Feb
Lecture 9
Recursive Types, Fixed Points
Natural numbers, lists and strings have something in common: each describes infinitely many values, all built in the same regular way. This suggests a common underlying principle: the recursive type construction, which allows defining a type in terms of itself. Once this principle is exposed, we have a mechanism to define our favorite recursive types: not just the above, but also trees, objects, recursive functions, etc. In this lecture, we examine how recursive types work and how the machinery they rely upon is hidden in practical programming languages.
  • Key Concepts: Recursive Types, Type Equations, Iso-Recursive Semantics, Objects (revisited), Recursive functions (revisited).
  • Readings:
Wed 11 Feb
Lecture 10
Pattern Matching
We examine a language that offers the notion of pattern as a (mostly) generic replacement for the destructors associated with individual constructs. In order to achieve a usable progress theorem, patterns must be exhaustive. It is also useful to flag redundant patterns, which often hide a programming error.
  • Key Concepts: Patterns, Matching, Exhaustiveness, Redundancy
  • Readings:
Thu 12 Feb
Recitation 5
Datatypes
In this recitation, we examine how ML datatypes work. We take lists as an example and show what hides behind ML's user-friendly syntax.
  • Key Concepts: ML datatypes.
  • Readings:

Mon 16 Feb
Lecture 11
Dynamic Typing
All the languages we have seen so far are typed, and there are good reasons for this. We look at two untyped languages and show how things can get nasty quickly without types. The first one is actually not too bad: the untyped λ-calculus has just functions and is Turing-complete, but it is not fun to program in. The second is an untyped version of Plotkin's PCF: we now need to check at run time that operations make sense. This essentially builds typechecking into the execution semantics, at a cost in performance because we must check that expressions are valid all the time, in particular each time we recurse.
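A sketch of this run-time checking (Python; the tagged-value representation is an assumption of this example): every value carries a tag, and every operation inspects tags before proceeding, exactly the work a static type system discharges once and for all.

```python
def evaluate(e, env):
    tag = e[0]
    if tag == "num": return ("num", e[1])
    if tag == "lam": return ("fun", e[1], e[2], env)
    if tag == "var": return env[e[1]]
    if tag == "plus":
        v1, v2 = evaluate(e[1], env), evaluate(e[2], env)
        if v1[0] != "num" or v2[0] != "num":   # check a type system avoids
            raise TypeError("plus applied to a non-number")
        return ("num", v1[1] + v2[1])
    if tag == "app":
        f = evaluate(e[1], env)
        if f[0] != "fun":                      # another run-time tag check
            raise TypeError("applying a non-function")
        _, x, body, defenv = f
        return evaluate(body, {**defenv, x: evaluate(e[2], env)})

print(evaluate(("plus", ("num", 1), ("num", 2)), {}))  # ('num', 3)
try:
    evaluate(("app", ("num", 1), ("num", 2)), {})
except TypeError as err:
    print(err)  # applying a non-function
```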
Wed 18 Feb
Lecture 12
Type-Directed Optimization
We show how untyped PCF can be compiled into typed PCF extended with errors and a type representing untyped expressions. This exercise exposes the actual tagging and checking that goes on in an untyped implementation and makes the sources of inefficiency evident. This embedding in a typed framework also provides an opportunity to use the type system to carry out simple but effective optimizations aimed at mitigating the overhead of tag creation and checking. In many cases, this can push tagging and checking out to function or module boundaries. We then show that the type of untyped expressions can be simulated directly within PCF.
  • Key Concepts: Hybrid Typing, Type-Directed Optimization
  • Readings:
Thu 19 Feb
Recitation 6
Definability
In this recitation, we examine in depth how the untyped λ-calculus allows us to define sophisticated programming constructs such as numbers, products, sums and a fixed-point operator. We also show how dynamic typing can be defined in terms of recursive types.
  • Key Concepts: Untyped λ-calculus, Church Encodings, Fixed-Point Operators
  • Readings:
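A few Church encodings transcribed into Python for concreteness (only numerals and pairs are sketched; Python's eager evaluation makes a direct fixed-point operator trickier, so it is omitted here).

```python
# Church numerals: n is the function that iterates s exactly n times.
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))
add  = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

# Church pairs: a pair is a function awaiting a selector.
pair = lambda a: lambda b: lambda sel: sel(a)(b)
fst  = lambda p: p(lambda a: lambda b: a)

def to_int(n):
    """Read a Church numeral back as a Python int."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
print(fst(pair(1)(2)))           # 1
```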

Mon 23 Feb
Lecture 13
Polymorphism and Generics
We now look at polymorphism, which allows writing universal functions that work on arguments of any type. We see that polymorphism, although it has a straightforward definition in terms of universal types, yields surprising expressive power. Finally, we examine restricted forms of polymorphism that simplify typechecking.
  • Key Concepts: Polymorphism, Universal Types, Polymorphic Definability, Impredicativity, Prenex Fragment
  • Readings:
Wed 25 Feb
Lecture 14
Data Abstraction
Powerful module languages can hide the implementation of a function while still providing it through a publicized interface. We trace this mechanism down to existential types, which are in a sense dual to the universal types that underlie polymorphism.
  • Key Concepts: Modularity, Data Abstraction, Existential Types.
  • Readings:
Thu 26 Feb
Recitation 7
No class (CS retreat)

Mon 2 Mar
Lecture 15
Midterm review
Wed 4 Mar
Midterm
Thu 5 Mar
Recitation 8
Propositions and Types
We examine the relationship between a language featuring solely functions (and some base type) and implicational logic. We then extend this parallel to encompass product types (seen as conjunctions) and sum types (mapped to disjunctions).
  • Key Concepts: Implicational logic, Products and Conjunction, Sums and Disjunction
  • Readings:
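Under this reading, proofs are programs: a proof of an implication is a function, and a proof of a conjunction is a pair. A small Python sketch (with tuples standing in for conjunction; the names are illustrative):

```python
# A ⊃ (B ⊃ A) is proved by the term λa. λb. a;
# (A ∧ B) ⊃ A is proved by the first projection.

proof_k   = lambda a: lambda b: a   # proves A ⊃ (B ⊃ A)
proof_fst = lambda p: p[0]          # proves (A ∧ B) ⊃ A

# "Running" a proof just applies it to evidence:
print(proof_k("evidence-for-A")("evidence-for-B"))  # evidence-for-A
print(proof_fst(("a", "b")))                        # a
```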

Mon 9 Mar
Lecture 16
Stack Machines
We revisit the semantics of a language and bring it closer to actual implementations. In particular, we model the pending operations in a program by pushing them onto a stack and retrieving them when their operands have been reduced to values. This stack semantics is particularly useful when we extend our language with constructs that alter the normal control flow of a language.
  • Key Concepts: Stack Machines, Soundness and Completeness
  • Readings:
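A sketch of a stack machine for a language of additions (Python; the frame names are this example's): a state either evaluates an expression or returns a value, and pending work is recorded as frames on an explicit stack.

```python
def run(e):
    stack, mode, cur = [], "eval", e
    while True:
        if mode == "eval":
            if cur[0] == "num":
                mode, cur = "ret", cur[1]       # a value: start returning
            else:                               # ("plus", e1, e2)
                stack.append(("plus1", cur[2])) # remember right operand
                cur = cur[1]                    # evaluate the left one
        else:                                   # returning value cur
            if not stack:
                return cur                      # empty stack: final answer
            frame = stack.pop()
            if frame[0] == "plus1":             # left done: evaluate right
                stack.append(("plus2", cur))
                mode, cur = "eval", frame[1]
            else:                               # ("plus2", v1): both done
                cur = frame[1] + cur

print(run(("plus", ("num", 1), ("plus", ("num", 2), ("num", 3)))))  # 6
```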
Wed 11 Mar
Lecture 17
Exceptions: Control and Data
A prime example of using a stack machine is the introduction of exceptions, which, when raised, have the effect of unwinding the stack until the next handler is reached. The same mechanism provides a simple way to handle run-time errors.
  • Key Concepts: Exceptions, Handler Stacks, Failure
  • Readings:
Thu 12 Mar
Recitation 9
Continuations
Having made the control stack explicit in the description of the semantics of a language, it is a small step to reify it into a construct of the language itself. This is the very primitive that languages supporting continuations provide. We describe the formal basis of continuations and show how they are used both to carry out a computation and to recover from failure.
  • Key Concepts: Success Continuations, Failure Continuations.
  • Readings:

Mon 16 Mar
Lecture 18
Subtyping and Subsumption
It is intuitively appealing to see some types as special cases of other types, for example an integer as a real number, or a 3-D point as a 2-D point with extra information. This is called subtyping: the ability to provide a value of the subtype whenever a value of the supertype is expected. The formal mechanism that governs it is the rule of subsumption. As a side-effect, expressions no longer have a unique type.
  • Key Concepts: Subtypes, Supertypes, Subsumption
  • Readings:
Wed 18 Mar
Lecture 19
Subtyping and Variance
A subtyping relation at the level of the arguments of a type constructor propagates to a subtyping relation at the level of the type constructor itself. The direction of the resulting relation depends on the specific constructor and on the specific argument. This is called variance: covariance if the subtyping relation at the constructor level goes in the same direction as at the argument, contravariance if it flows in the opposite direction.
  • Key Concepts: Covariant arguments, Contravariant arguments
  • Readings:
Thu 19 Mar
Recitation 10
Fluid Binding
Lexical scoping enables a programmer to match variable uses with declarations exactly (in a let for example). It is sometimes desirable to allow a variable to assume different values as the execution proceeds, breaking the correspondence between uses and declarations. A sound way to incorporate this feature in a language is through the concept of fluid binding, which separates the lexical scope of a symbol from the dynamic extent of its bindings to values.
  • Key Concepts: Scope, Extent, Fluid Bindings
  • Readings:

Mon 23 Mar
No class (Spring Break)
Wed 25 Mar
Thu 26 Mar

Mon 30 Mar
Lecture 20
Storage Effects
So far, repeated evaluations of the same expression always produce the same value: expressions are self-contained and the result depends only on the text of the expression. A common departure from this model is found in languages featuring a memory and operations to access and manipulate it. The basic building block is the reference cell, which significantly alters the model examined so far, to the point that recursion can be simulated.
  • Key Concepts: Memory, References, Imperative Programming, Backpatching
  • Readings:
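Backpatching can be sketched with a one-element list standing in for a reference cell (a Python illustration, not the course's formalism): allocate the cell, define a function whose body recurses through the cell, then patch the cell to point at the function itself.

```python
cell = [None]                    # the "reference cell", initially empty

def fact(n):
    # The body does not name itself; it recurses through the cell.
    return 1 if n == 0 else n * cell[0](n - 1)

cell[0] = fact                   # backpatch: tie the recursive knot
print(fact(5))  # 120
```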
Wed 1 Apr.
Lecture 21
Monadic Storage Effects
The standard treatment of storage effects, discussed in the last lecture, is driven by the evaluation semantics, in the sense that it is impossible to know whether an expression will be effectful by looking at its type alone. Another approach is to mark the possibility of an effect in the type of expressions. This is achieved using a typing device called a monad.
  • Key Concepts: Monad, Computations, Explicit Effect
  • Readings:
Thu 2 Apr.
Recitation 11
Monadic Storage Effects
In this recitation, we work through examples of the monadic treatment of storage effects introduced in Lecture 21.
  • Key Concepts: Monad, Computations, Explicit Effect
  • Readings:

Mon 6 Apr.
Lecture 22
Eagerness and Laziness
Up to this point, the distinction between an eager and a lazy construct has been a matter of how the evaluation rules were defined. Another approach is to lift this distinction to the level of types, so that programmers can choose the interpretation based on the type of the particular construct they decide to use. Yet another approach is to design a type of lazy objects, which can be used within any constructor; such a lazy object is called a suspension.
  • Key Concepts: Lazy Types, Lazy Computation, Suspensions
  • Readings:
Wed 8 Apr.
Lecture 23
Call-by-Need
An eager semantics evaluates an argument exactly once (even if it is never used). A lazy semantics re-evaluates it each time it is encountered. In each case, there is the risk of doing more work than strictly necessary. Call-by-need optimizes this process by evaluating an argument the first time it is encountered and remembering the obtained value for future uses.
  • Key Concepts: Call-by-Need, Memoization, On-Demand Computation
  • Readings:
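A sketch of a memoizing suspension implementing call-by-need (Python; the delay/force names are this example's): the body runs at most once, and later forces reuse the memoized value.

```python
def delay(thunk):
    """Build a suspension: a zero-argument function that memoizes."""
    memo = {}
    def force():
        if "v" not in memo:        # first force: do the work
            memo["v"] = thunk()
        return memo["v"]           # later forces: reuse the value
    return force

count = [0]
def expensive():
    count[0] += 1
    return 42

s = delay(expensive)
print(s(), s(), count[0])  # 42 42 1 — the body ran a single time
```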
Thu 9 Apr.
Recitation 12
Speculative Parallelism and Futures
The mechanism supporting extreme laziness in a call-by-need semantics is readily adapted to enable extreme eagerness in a setting supporting parallel executions: rather than waiting until the value of an argument is needed, the idea is to evaluate it speculatively, in parallel with other such evaluations, so that it is there the first time it is needed. When applied to suspensions, this idea is known as futures.
  • Key Concepts: Parallelism, Speculative Execution, Futures
  • Readings:

Mon 13 Apr.
Lecture 24
Work-Efficient Parallelism
Effect-free programming languages provide plenty of opportunities for the parallel evaluation of subcomputations. Specifically, unless an evaluation depends on the result of another, they can be executed in parallel. Dependencies and theoretical execution time can be measured by equipping the evaluation semantics with an abstract notion of cost, which is then used to calculate useful figures such as the number of steps in a purely sequential setup (work) and the time in a maximally parallel environment (depth).
  • Key Concepts: Parallel vs. Sequential Evaluation, Control Dependency, Cost Semantics, Work and Depth
  • Readings:
Wed 15 Apr.
Lecture 25
Process Calculi
Allowing actions to send or receive data promotes synchronization to a communication mechanism. This becomes particularly powerful when communication channels are themselves seen as data and a mechanism is provided to create them on the fly: this gives rise to the notion of private channel. The semantics of communicating processes is open-ended in the sense that several alternative models coexist. Among them is the decision whether a sending process should wait until its message has been received, or proceed with its computation.
  • Key Concepts: Private Channels, Scope Extrusion, Synchronous Communication, Asynchronous Communication
  • Readings:
Thu 16 Apr.
Recitation 13
Concurrency
Expressions so far have been self-contained: they were evaluated from an initial state to a final value without interaction with the external world. The possibility of interaction gives rise to the notion of a process, with synchronization actions causing run-time events. A natural next step consists in considering the interactions among several processes (one of them possibly modeling the external world), hence capturing distributed systems.
  • Key Concepts: Actions and Events, Structural Congruences, Concurrent Interaction, Replication
  • Readings:

Mon 20 Apr.
Lecture 26
Coroutines
We discuss coroutines as a specific form of concurrency. Coroutines pass control among each other in a programmatic way, possibly exchanging values as they do so.
  • Key Concepts: Coroutines
  • Readings:
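Python generators give a convenient sketch of coroutine-style control transfer (an illustration only; the course does not presuppose Python): each yield passes control, and a value, out of the producer, and control returns to it on the next request.

```python
def producer():
    for i in range(3):
        yield i * 10          # hand control (and a value) to the consumer

def consume(gen):
    total = 0
    for v in gen:             # control comes back here at each yield
        total += v
    return total

print(consume(producer()))  # 30
```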
Wed 22 Apr.
Lecture 27
Modularity and Linking
This lecture examines the basics of the separate compilation of modules or libraries from a program that uses them. It defines an abstract linker.
  • Key Concepts: Separate Compilation, Linking, Basic Modules, Parameterized Modules
  • Readings:
Thu 23 Apr.
Recitation 14
Final review

Mon 27 Apr
10am-1pm
(1030)
Final

Iliano Cervesato