CS 15-212: Principles of Programming
(Spring 2008)

At a glance ...

Mon 14 Jan. Lecture 1: Welcome and Course Introduction; Evaluation and Typing
We outline the course and its goals, and discuss various administrative
issues. We also introduce the language ML, which is used throughout the
course.

Tue 15 Jan. Recitation 1: SML, Style

Wed 16 Jan. Lecture 2: Declarations, Binding, Scope, and Functions
We introduce declarations which evaluate to environments. An environment
collects a set of bindings of variables to values which can be used in
subsequent declarations or expressions. We also discuss the rules of scope
which explain how references to identifiers are resolved. This is somewhat
tricky for recursive function declarations.
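A minimal illustration in SML (an invented example, not taken from the
lecture notes): each declaration adds a binding to the environment, a later
binding for the same name shadows the earlier one, and "fun" makes the
function's own name visible inside its body.

    val x = 3        (* binds x to 3 *)
    val y = x + 1    (* uses the binding of x; y is 4 *)
    val x = x * x    (* a new binding (9) that shadows x; nothing is mutated *)

    (* "fun" brings fact into scope in its own body, so the
       recursive call resolves to this very function. *)
    fun fact n = if n = 0 then 1 else n * fact (n - 1)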

Sun 20 Jan. Recitation 1.5: Using and Editing a Wiki

Mon 21 Jan. Lecture 3: Recursion and Induction
We review the methods of mathematical and complete induction and show how
they can be applied to prove the correctness of ML functions. The key is
an understanding of the operational semantics of ML.
Induction can be a difficult proof technique to apply, since we often need
to generalize the theorem we want to prove before the proof by induction
goes through. Sometimes this requires considerable ingenuity.
We also introduce clausal function definitions based on pattern matching.
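For instance (an illustrative sketch, not code from the lecture), a clausal
definition of exponentiation by pattern matching, whose correctness
(power (n, k) evaluates to n^k for every k >= 0) is proved by induction on k:

    fun power (n, 0) = 1
      | power (n, k) = n * power (n, k - 1)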

Tue 22 Jan. Recitation 2: Scoping in recursive functions; Complete induction

Wed 23 Jan. Lecture 4: Datatypes, Patterns, and Lists
One of the most important features of ML is that it allows the definition
of new types with so-called datatype declarations. This means that
programs can be written to manipulate data in a natural representation
rather than in complex encodings. This goes hand in hand with clausal
function definitions using pattern matching on the given datatypes.
We introduce lists as well as polymorphic datatypes and functions.
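A small sketch of both ideas (the names are invented): a datatype gives data
a direct representation, and a polymorphic function is defined by clausal
pattern matching on lists.

    datatype suit = Spades | Hearts | Diamonds | Clubs

    fun length' nil = 0
      | length' (_ :: xs) = 1 + length' xs
    (* length' : 'a list -> int works at any element type *)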

Sun 27 Jan. Recitation 2.5: Fibonacci numbers

Mon 28 Jan. Lecture 5: Structural Induction and Tail Recursion
We discuss the method of structural induction on recursively defined
types. This technique parallels standard mathematical induction, but has
a unique character of its own, and arises often in programming.
We also discuss tail recursion, a form of recursion that is somewhat like
the use of loops in imperative programming. This form of recursion is
often especially efficient and easy to analyze. Accumulator arguments play
an important role in tail recursion.
As examples we consider recursively defined lists and trees.
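A standard example (a sketch, not the lecture's own code): summing a list
with an accumulator. The helper is tail recursive, and the invariant
sum' (xs, a) = a + (sum of xs) is proved by structural induction on xs.

    fun sum' (nil, a) = a
      | sum' (x :: xs, a) = sum' (xs, a + x)

    fun sum xs = sum' (xs, 0)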

Tue 29 Jan. Recitation 3: Lists; Equality types

Wed 30 Jan. Lecture 6: Higher Order Functions and Staged Computation
We discuss higher-order functions: passing functions as arguments,
returning functions as values, and mapping functions over recursive data
structures. Key to understanding functions as first-class values is an
understanding of ML's lexical scoping rules.
We also discuss staged computation based on function currying.
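A brief sketch with invented names: map takes a function as an argument, and
addTax illustrates staging through currying, since the work that depends
only on the rate is done once, before any price is known.

    fun map f nil = nil
      | map f (x :: xs) = f x :: map f xs

    fun addTax rate = let val m = 1.0 + rate in fn price => price * m end

    val addPaTax = addTax 0.06   (* stage one, done once *)
    val total = addPaTax 10.0    (* stage two, applied many times *)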

Mon 4 Feb. Lecture 7: Data Structures

Tue 5 Feb. Recitation 4: Representation Invariants
We demonstrate a complicated representation invariant using Red/Black
Trees. The main lesson is to understand the subtle interactions of
invariants, data structures, and reliable code production. In order to
write code satisfying a strong invariant, it is useful to proceed in
stages. Each stage satisfies a simple invariant, and is provably
correct. Together the stages satisfy the strong invariant.
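The type itself is simple; the point is that the invariants live outside
it. A sketch of the declarations (the invariants are stated only in
comments, and the code must maintain them):

    datatype color = Red | Black
    datatype 'a rbtree =
        Empty
      | Node of color * 'a rbtree * 'a * 'a rbtree
    (* Invariants: no Red node has a Red child, and every path from the
       root to an Empty leaf crosses the same number of Black nodes. *)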

Wed 6 Feb. Lecture 8: Functors and Substructures
A functor is a parameterized module: a kind of function that takes zero or
more structures as arguments and returns a new structure as its
result. Functors greatly facilitate hierarchical organization in large
programs. In particular, as discussed in the next few lectures, they
enable a clean separation between the details of particular definitions
and higher-level structure, allowing the implementation of "generic"
algorithms that are easier to debug and maintain, and that maximize code
reuse.
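A minimal sketch of the idea, with invented signature and structure names:

    signature ORDERED =
    sig
      type t
      val compare : t * t -> order
    end

    functor SetFun (O : ORDERED) =
    struct
      type set = O.t list
      val empty : set = nil
      fun member (x, s) = List.exists (fn y => O.compare (x, y) = EQUAL) s
      fun insert (x, s) = if member (x, s) then s else x :: s
    end

    structure IntSet =
      SetFun (struct type t = int val compare = Int.compare end)

The generic code in SetFun is written once and reused at any type of
ordered keys.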

Sun 17 Feb. Recitation 5.5: Currying, folding, and mapping

Mon 18 Feb. Lecture 11: Exceptions
Exceptions play an important role in the system of static and dynamic
checks that make SML a safe language. Exceptions are the first type of
effect that we will encounter; they may cause an evaluation to be
interrupted or aborted. We have already seen simple uses of exceptions in
the course, primarily to signal that invariants are violated or
exceptional boundary cases are encountered. We now look a little more
closely at what exceptions are and how they can be used. In addition to
signaling error conditions, exceptions can sometimes also be used in
backtracking search procedures or other patterns of control where a
computation needs to be partially undone.
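A small sketch (an invented example): an exception signals a failed search,
and the caller turns the exceptional outcome back into an option.

    exception NotFound

    fun find p nil = raise NotFound
      | find p (x :: xs) = if p x then x else find p xs

    val result =
      SOME (find (fn n => n > 10) [3, 14, 1]) handle NotFound => NONE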

Tue 19 Feb. Recitation 6: Tail Recursion vs Continuations

Wed 20 Feb. Lecture 12: n-Queens
The same problem can be solved using several very different techniques.
We examine a classic puzzle, the n-queens problem, and compare solutions
that use exceptions, continuations, and functions that return options.
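A sketch of the option-returning style (with invented helper names): a
failed placement returns NONE, and the caller backtracks by trying the
next row.

    (* A queen is a (row, column) pair; safe checks rows and diagonals. *)
    fun safe (queens, (r, c)) =
      List.all (fn (r', c') =>
          r <> r' andalso Int.abs (r - r') <> Int.abs (c - c')) queens

    fun addQueen (col, n, queens) =
      if col = n then SOME queens
      else
        let fun tryRow row =
              if row = n then NONE
              else if safe (queens, (row, col)) then
                (case addQueen (col + 1, n, (row, col) :: queens) of
                     NONE => tryRow (row + 1)   (* backtrack *)
                   | solution => solution)
              else tryRow (row + 1)
        in tryRow 0 end

    fun queens n = addQueen (0, n, nil)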

Sun 24 Feb. Recitation 6.5: Midterm review

Mon 25 Feb. Midterm review

Tue 26 Feb. Recitation 7: Midterm review

Wed 27 Feb. Midterm

Sun 2 Mar. Recitation 7.5: TBA

Mon 3 Mar. Lecture 13: Mutation and State
The programming techniques used so far in the course have, for the most
part, been "purely functional". Some problems, however, are more naturally
addressed by keeping track of the "state" of an internal
machine. Typically this requires the use of mutable storage. ML supports
mutable cells, or references, that store values of a fixed type. The value
in a mutable cell can be initialized, read, and changed (mutated), and
these operations result in effects that change the store. Programming with
references is often carried out with the help of imperative
techniques. Imperative functions are used primarily for the way they
change storage, rather than for their return values.
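A minimal illustration (an invented example): a reference cell and an
imperative function used only for its effect on the store.

    val counter : int ref = ref 0

    fun tick () = counter := !counter + 1   (* returns (), changes the store *)

    val _ = tick ()
    val n = !counter   (* reads 1 *)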

Tue 4 Mar. Recitation 8: Ascription, where, and functors

Wed 5 Mar. Lecture 14: Ephemeral Data Structures
Previously, within the purely functional part of ML, we saw that all
values were persistent. At worst, a binding might shadow a previous
binding. As a result our queues and dictionaries were persistent data
structures. Adding an element to a queue did not change the old queue;
instead it created a new queue, possibly sharing values with the old
queue, but not modifying the old queue in any way.
Now that we are able to create cells and modify their contents we can
create ephemeral data structures. These are data structures that change
over time. The main advantage of such data structures is their ability to
maintain state as a shared resource among many routines. Another advantage
in some cases is the ability to write code that is more time-efficient
than purely functional code. The disadvantages are complexity and the risk of error:
our routines may accidentally and irreversibly change the contents of a
data structure; variables may be aliases for each other. As a result it is
much more difficult to prove the correctness of code involving ephemeral
data structures. As always, it is a good idea to keep mutation to a
minimum and to be careful about enforcing invariants.
We present two examples. First, we consider a standard implementation of
hash tables. We use arrays to implement generic hash tables as a functor
parameterized by an abstract hashable equality type. Second, we revisit
the queue data structure, now defining an ephemeral queue. The queue
signature clearly indicates that internal state is maintained. Our
implementation uses a pair of reference cells containing mutable lists,
and highlights some of the subtleties involved when reasoning about
references.
We end the lecture with a few words about ML's value restriction. The
value restriction is enforced by the ML compiler in order to avoid runtime
type errors: all expressions must have well-defined, lexically determined
static types.
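A sketch of an ephemeral queue in this style (a simplified, invented
interface rather than the lecture's exact code): two reference cells hold
the front of the queue and the reversed back, and both operations update
the queue in place.

    type 'a queue = {front : 'a list ref, back : 'a list ref}

    fun new () : 'a queue = {front = ref nil, back = ref nil}

    fun enqueue ({back, ...} : 'a queue, x) = back := x :: !back

    fun dequeue ({front, back} : 'a queue) =
      case !front of
          x :: xs => (front := xs; SOME x)
        | nil => (case rev (!back) of
                      nil => NONE
                    | x :: xs => (front := xs; back := nil; SOME x))

    (* The value restriction shows up immediately: "val q = new ()" is
       not a syntactic value, so q must be given a monomorphic type. *)
    val q : int queue = new ()
    val _ = enqueue (q, 1)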

Sun 9 Mar. Recitation 8.5: TBA

Mon 10 Mar. Lecture 15: Streams, Demand-Driven Computation
Functions in ML are evaluated eagerly, meaning that the arguments are
reduced before the function is applied. An alternative is for function
applications and constructors to be evaluated in a lazy manner, meaning
expressions are evaluated only when their values are needed in a further
computation. Lazy evaluation can be implemented by "suspending"
computations in function values. This style of evaluation is essential
when working with potentially infinite data structures, such as streams,
which arise naturally in many applications. Streams are lazy lists whose
values are determined by suspended computations that generate the next
element of the stream only when forced to do so.
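A bare-bones version of the idea (simplified relative to the course's
actual stream library): the tail of a stream is a suspension, a function
from unit that produces the rest only when called.

    datatype 'a stream = Cons of 'a * (unit -> 'a stream)

    fun from n = Cons (n, fn () => from (n + 1))   (* all naturals from n *)

    fun take (_, 0) = nil
      | take (Cons (x, t), k) = x :: take (t (), k - 1)

    val five = take (from 0, 5)   (* [0, 1, 2, 3, 4] *)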

Tue 11 Mar. Recitation 9: Arrays and mutable state

Wed 12 Mar. Lecture 16: Memoization
We continue with streams, and complete our implementation by introducing a
memoizing delay function. Memoization ensures that a suspended expression
is evaluated at most once. When a suspension is forced for the first time,
its value is stored in a reference cell and simply returned when the
suspension is forced again. The implementation that we present makes a
subtle and elegant use of a "self-modifying" code technique with circular
references.
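A sketch of such a delay function (close in spirit, if not in detail, to
the lecture's version):

    (* delay f returns a suspension that evaluates f () at most once. *)
    fun delay (f : unit -> 'a) : unit -> 'a =
      let
        val memo = ref f
        fun once () =
          let val v = f ()                  (* run the computation *)
          in memo := (fn () => v); v end    (* overwrite with the answer *)
        val _ = memo := once
      in fn () => !memo () end

    fun force s = s ()

Forcing the result the first time runs once (), which stores the value;
every later forcing just returns the stored value.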

Mon 24 Mar. - Wed 26 Mar. No class (Spring Break)

Sun 30 Mar. Recitation 10.5: TBA

Mon 31 Mar. Lecture 19: Regular Expressions and Lexical Analysis
Many applications require some form of tokenization or lexical analysis to
be carried out as a preprocessing step. Examples include compiling
programming languages, processing natural languages, or manipulating HTML
pages to extract structure. As an example, we study a lexical analyzer for
a simple language of arithmetic expressions.
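A sketch of such an analyzer (the token names are invented for this
example):

    datatype token = NUM of int | PLUS | TIMES | LPAREN | RPAREN

    (* Accumulate the digits of a number left to right. *)
    fun lexNum (c :: cs, acc) =
          if Char.isDigit c
          then lexNum (cs, 10 * acc + (Char.ord c - Char.ord #"0"))
          else (acc, c :: cs)
      | lexNum (nil, acc) = (acc, nil)

    fun tokenize nil = nil
      | tokenize (c :: cs) =
          if Char.isSpace c then tokenize cs
          else if Char.isDigit c then
            let val (n, rest) = lexNum (c :: cs, 0)
            in NUM n :: tokenize rest end
          else case c of
                   #"+" => PLUS :: tokenize cs
                 | #"*" => TIMES :: tokenize cs
                 | #"(" => LPAREN :: tokenize cs
                 | #")" => RPAREN :: tokenize cs
                 | _ => raise Fail "unexpected character"

    val toks = tokenize (explode "12 + 3 * (4 + 5)")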

Tue 1 Apr. Recitation 11: Languages

Wed 2 Apr. Lecture 20: Grammars
Context-free grammars arise naturally in a variety of applications. The
"Abstract Syntax Charts" in programming language manuals are one
instance. The underlying machine for a context-free language is a pushdown
automaton, which maintains a stack that allows the machine to "count".

Sun 6 Apr. Recitation 11.5: TBA

Mon 7 Apr. Lecture 21: Parsing
In this lecture we continue our discussion of context-free grammars, and
demonstrate their role in parsing.
Shift-reduce parsing uses a stack to delay application of rewrite rules,
enabling operator precedence to be enforced. Recursive descent parsing is
another style that uses recursion in a way that mirrors the grammar
productions. Although parser generator tools exist for restricted classes
of grammars, a direct implementation can allow greater flexibility and
better error handling. We present an example of a shift-reduce parser for
a grammar of arithmetic expressions.
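The lecture's example is a shift-reduce parser; as a compact illustration,
here is instead a recursive descent sketch, with one function per
nonterminal of the grammar above, reusing the token and expr datatypes from
the earlier sketches. Both operators are parsed as right associative for
simplicity.

    fun parseE ts =
          let val (lhs, ts1) = parseT ts
          in case ts1 of
                 PLUS :: ts2 =>
                   let val (rhs, ts3) = parseE ts2
                   in (Plus (lhs, rhs), ts3) end
               | _ => (lhs, ts1)
          end
    and parseT ts =
          let val (lhs, ts1) = parseF ts
          in case ts1 of
                 TIMES :: ts2 =>
                   let val (rhs, ts3) = parseT ts2
                   in (Times (lhs, rhs), ts3) end
               | _ => (lhs, ts1)
          end
    and parseF (NUM n :: ts) = (Num n, ts)
      | parseF (LPAREN :: ts) =
          (case parseE ts of
               (e, RPAREN :: ts') => (e, ts')
             | _ => raise Fail "expected )")
      | parseF _ = raise Fail "expected a factor"

    fun parse s =
      case parseE (tokenize (explode s)) of
          (e, nil) => e
        | _ => raise Fail "unconsumed input"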

Tue 8 Apr. Recitation 12: TBA

Wed 9 Apr. Lecture 22: Evaluation
We now put together lexical analysis and parsing with evaluation. The
result is an interpreter that evaluates arithmetic expressions directly,
rather than by constructing an explicit translation of the code into an
intermediate language, and then into machine language, as a compiler
does. Our first example uses the basic grammar of arithmetic expressions,
interpreting them in terms of operations over the rational numbers.
In this and the next lecture we extend this simple language to include
conditional expressions, variable bindings, function definitions, and
recursive functions.
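Completing the pipeline from the previous sketches (over the integers, for
simplicity, rather than the rationals):

    fun eval (Num n) = n
      | eval (Plus (e1, e2)) = eval e1 + eval e2
      | eval (Times (e1, e2)) = eval e1 * eval e2

    fun run s = eval (parse s)       (* lex, parse, then evaluate *)
    val v = run "12 + 3 * (4 + 5)"   (* 39 *)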

Wed 30 Apr. 8:30-11:30 (C008): Final

Iliano Cervesato