Lectures: Mo,We 14:30 - 15:50 (room C008)
Recitations: Tu 14:00 - 14:50 (room C010)
Class Webpage:
http://qatar.cmu.edu/cs/15312
Instructor: Iliano Cervesato
Office hours: by appointment
Office: LAS A128
Co-instructor: Thierry Sans
Office hours: by appointment
Office: LAS behind A128A
          | Hw1    | Hw2    | Hw3    | Hw4    | Midterm | Hw5    | Hw6    | Hw7    | Hw8    | Final
Posted    | 16 Jan | 23 Jan | 06 Feb | 13 Feb | 05 Mar  | 12 Mar | 19 Mar | 09 Apr | 16 Apr | 5 May (9:00) C008
Due       | 23 Jan | 06 Feb | 13 Feb | 27 Feb |         | 19 Mar | 09 Apr | 16 Apr | 23 Apr |
Corrected | 28 Jan | 11 Feb | 18 Feb | 03 Mar | 10 Mar  | 24 Mar | 14 Apr | 21 Apr | 30 Apr | 07 May
About this course
[ Description | Prerequisites | Readings | Software | Grading | Assessment ]
This course has the purpose of exposing students who have mastered advanced
programming techniques and concepts to some of the foundational principles
that underlie the very programming languages they have been using. These same
principles pervade many disciplines in and beyond Computer Science and can be
found any time one needs to give and work with a representation of some
domain. More specifically,
- You will see that a (good) programming language is not an ad-hoc collection
  of constructs, but a mathematical object whose external features
  (including expressiveness and usability) are the necessary manifestation
  of intrinsic properties. We will use judgments and derivations as a
  universal vehicle to talk and reason about language constructs.
- You will learn some of these general design principles, for example the
use of types as an organizing principle, safety proofs as a measure of
correctness, and the orthogonality of constructs, and study how they
apply to the most common programming mechanisms, such as functions,
records, variants and recursion, as well as to more specialized or
esoteric concepts, such as polymorphism, exceptions, inheritance and
concurrency. This will provide you with the tools to knowledgeably design
your own language if the occasion arises.
- You will see that these same principles can be used to derive efficient
  and correct implementation techniques for a language. In particular,
  we will be able to establish correctness mathematically.
This course will be coordinated with the edition currently offered on the
main campus, taught by Professor Robert Harper. The material presented
and the homeworks will be roughly the same.
You must have completed CS 15-212
(Principles of Programming).
The course will closely follow Harper's book. Note
that it is a work in progress and is being continuously updated.
Further References
The course has a programming component, mainly in the form of 4 programming
assignments. Students are allowed to use any programming language they want
to develop their solutions to these assignments. The only requirements are
that the solution work as per the text of the assignment, be understandable to
the instructors, and that the student be able to explain it. That said, some
programming languages will make the task simpler than others. In particular,
using Standard ML (SML)
and similar languages, or Twelf and similar languages, is likely to
get you a working solution in a much shorter time than, say, Java or C.
SML
A reference build of Standard
ML of New Jersey (SML/NJ), version 110.65, and Concurrent ML (CML) have
been made available on the Unix clusters. To run it, you need to log into
your Unix account. In Windows, you do this by starting PuTTY and specifying
unix.qatar.cmu.edu as the machine name. When the PuTTY window
comes up, type sml, do your work, and then hit CTRL-D when you
are done.
You can edit your files directly under Unix (the easiest way is to run the
X-Win32 utility from Windows and then run the Emacs editor from the PuTTY
window by typing emacs - see also this tutorial). If
you want to do all this from your own laptop, you first need to install
X-Win32 from here.
PuTTY is pre-installed in Windows.
If you want, you can install a personal copy of SML/NJ on your laptop. To do
this, download this
file and follow these
instructions. Personal copies are for your convenience: all ML programs
will be evaluated on the reference environment on
unix.qatar.cmu.edu. You need to make sure that your homework
assignments work there before submitting them. To do so, you need to transfer
your files onto unix.qatar.cmu.edu and test them there. You
can do so by using the PSFTP utility which comes with PuTTY (or any of the
many more user-friendly FTP front-ends).
Documentation
Useful documentation can be found on the SML/NJ web site. The following files will be
particularly useful:
Twelf
A reference build of the Twelf specification environment has also
been made available on the Unix clusters and is accessed similarly to SML/NJ.
The easiest way to use it is within the Emacs editor. Alternatively, you can
install a personal copy on your laptop. Downloads, documentation and examples
can be found on the Twelf wiki (it
supersedes the Twelf web page).
Trying out Twelf or any other language is likely to get you bonus points.
Tasks and Percentages
- 8 homework assignments: 50%
- 4 written assignments
(# 1,
3,
5,
7)
- 4 programming assignments
(# 2,
4,
6,
8)
- Handed out on Tuesdays
- Due on Tuesday 14 (resp. 7) days later at 7:59am Doha time (6:59am
after March 11). To submit, log onto
unix.qatar.cmu.edu and copy assignment n into
directory
/afs/qatar.cmu.edu/course/15/312/handin/<username>/hwn/
- No joint assignments
- Midterm exam: 20%, in class on March 5, open book
- Final exam: 30%, 3 hours, open book
Evaluation Criteria
Your assignments and exams are evaluated on the basis of:
- Correctness: your arguments should make sense, your proofs should
be valid, and your program should work in the reference
environment
- Specification: say what you want to do before doing it. In the
  case of programs, use structured comments describing types, the meaning
  of the returned value, invariants, and side-effects (see the sketch
  after this list)
- Elegance: written material should be of the same quality as what a
professional would write. No typos, no bad grammar, clarity is paramount.
See these notes about
ML programming style
- Bonus points: up to 10% for particularly elegant
solutions
- Negative points: up to 100% if caught
cheating
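For instance, a structured comment might look roughly as follows. This is
only a sketch: the function and its invariant are made up for illustration,
and the ML style notes linked above are the authoritative reference.

    (* sumTo : int -> int
       REQUIRES: n >= 0
       ENSURES:  sumTo n evaluates to 0 + 1 + ... + n
       Effects:  none *)
    fun sumTo 0 = 0
      | sumTo n = n + sumTo (n - 1)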
Because this course is coordinated with the edition offered in
Pittsburgh, the grades of individual homeworks and exams, as well as the
final grade, will be aligned with the performance of that class.
Late Policy
Every student has up to 3 late days that may be used for any assignment
throughout the semester, but no homework may be more than two days late (this
is so that we can discuss assignments in lecture the Wednesday after they are
due). No fractional late days: if you submit 1 minute late, you have used up
a full late day.
Academic Integrity
You are expected to comply with the University Policy
on Academic Integrity and Plagiarism.
Collaboration is regulated by the whiteboard policy: you can bounce
ideas about an assignment, but when it comes to typing it down for submission,
you are on your own - no notes, snapshots, etc.; you can at most reconstruct
the reasoning from memory.
Course Objectives
This course seeks to develop students who:
- demonstrate a high level of proficiency in the fundamentals of programming
languages, namely
- are able to critically understand and analyze programming languages
and their constructs
- are able to learn and apply programming languages quickly
- are able to analyze, compare, and choose the appropriate paradigm for
a wide variety of computational tasks
- are able to approach or think about problems
mathematically, are familiar with the mathematics that relate directly
to the field of programming languages, and are able to master new
mathematical concepts that arise in the context of their work
- master fundamental, advanced, and recent concepts in the field of
programming languages
- think clearly about tangible problems and create innovative solutions
relying on proven techniques such as abstraction, decomposition,
iteration and recursion, inductive and deductive
thinking, and know the limits of computation
- communicate orally and in writing in effective and appropriate ways within
the discipline of programming languages, namely
- are able to understand and articulate technical ideas
- are able to follow and form cogent arguments
Learning Outcomes
Upon successful completion of this course, students will:
- know the basics of the theoretical foundations of programming languages
  and be able to evaluate languages, easily learn additional languages,
  and even design new languages. Namely, students will
- be able to abstract away from the concrete syntax of a particular
  language and assess constructs independently of the syntax they
  are written in
- be able to discuss the semantics of a construct and describe it
semi-formally and formally
- appreciate the distinction between static and dynamic semantics
- be familiar with the standard assessment tools for programming
languages, in particular type safety theorems, and be able to
carry out a proof
- understand the main concepts in programming languages, namely:
- the difference between an interpreted and a compiled language
- the degree of abstraction at which a language sits
- the standard control flow mechanisms, including sequential execution,
branching, loops, recursion and function invocation
- types as an organizing principle and an abstraction mechanism for
data
- the most common mechanisms for code reuse including functions, modules
and libraries
- have a clear understanding of the mechanisms underlying both imperative
and non-imperative languages. Specifically, they will
- understand the standard and emerging constructs found in imperative
programming languages such as conditionals, loops, functions,
polymorphism, and exceptions
- understand the various principles underlying the object-oriented
paradigm, including encapsulated objects, classes, and
inheritance
- have familiarity with a functional language and functional programming
concepts, in particular recursion, higher-order functions,
continuations, and functional modules
- have had exposure to some of the paradigms for distributed and
concurrent programming, with emphasis on the concepts of threads
of computation, state change, synchronous and asynchronous
communication
- understand basic logic and proof techniques necessary to create and
understand a formal proof. Specifically, they will be able to
- apply formal methods of symbolic propositional and predicate logic
- describe the basic structure of and give examples of the following
proof techniques: direct proof, proof by contrapositive, proof by
contradiction, mathematical induction
- discuss which type of proof is best for a given problem
- relate the ideas of mathematical induction to recursion and
recursively defined structures
- identify the differences between mathematical induction and structural
induction and give examples of the appropriate use of each
- identify and correct flawed logic used in language design
- be able to communicate clearly and effectively ideas, concepts and
intentions within the field of programming languages, namely
- be able to describe technical constructs (concepts) clearly, so as to
be readily understood by their peers
- be able to give an individual presentation on a technical subject to an
  audience of peers within the discipline of programming languages
- form a cogent, logical argument asserting and reiterating all technical
concepts that lie within the bounds of the taught curriculum or their
research within that curriculum.
At a glance ...
Mon 14 Jan.
Lecture 1
|
Welcome and Course Introduction
We outline the course and its goals, and discuss various administrative
issues.
|
Tue 15 Jan.
Recitation 1
|
Judgments, Rules, Derivations
|
Wed 16 Jan.
Lecture 2
|
Inductive Definitions, Hypothetical Judgments
We present a general method to prove properties of derivable judgments.
We also look at derivations lacking a justification for some of their
judgments and reify this idea as the new form of hypothetical judgments.
We examine some elementary properties of these judgments. Finally, we
define transition systems as a special form of judgment.
- Key Concepts:
Rule Induction,
Derivable and Admissible Judgments,
Hypothetical Judgments,
Structural Properties,
Derivability and Admissibility,
Transition Systems
- Readings:
- Handout 1: Inductive Proofs
- Handout 2: Cartoon view of Proofs
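As a concrete example (the standard introductory one, not specific to the
handouts), the judgment "n nat" singles out the natural numbers with two
rules, and rule induction over them is ordinary mathematical induction:

    \[
      \frac{}{\mathsf{zero}\ \mathsf{nat}}
      \qquad
      \frac{n\ \mathsf{nat}}{\mathsf{succ}(n)\ \mathsf{nat}}
    \]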
|
Mon 21 Jan.
Lecture 3
|
Concrete and Abstract Syntax
We give a judgmental representation of strings, that allow
expressing the concrete syntax of a language and show that the
productions in a context-free grammar are nothing but rules in
disguise. Derivations are then a representation of the intrinsic
structure of a sentence and, once streamlined, yield abstract
syntax trees, an efficient notation for syntactic
expressions.
- Key Concepts:
Grammars,
Abstract Syntax Trees,
Parsing
- Readings:
- Handout: Concrete Syntax
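For instance, the abstract syntax trees of a small grammar of arithmetic
expressions can be captured by an ML datatype. This is an illustrative
sketch, not the notation used in the handout:

    datatype exp = Num of int            (* numeric literal *)
                 | Plus of exp * exp     (* e1 + e2 *)
                 | Times of exp * exp    (* e1 * e2 *)

    (* the concrete string "1 + 2 * 3" parses to this tree *)
    val example = Plus (Num 1, Times (Num 2, Num 3))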
|
Tue 22 Jan.
Recitation 2
|
Substitution, General Judgments
- Key Concepts:
α-conversion,
Substitution,
Structural properties
- Readings:
- Handout: Substitutions
|
Wed 23 Jan.
Lecture 4
|
Binding and Scope
Binding constructs are pervasive in programming (and other) languages.
Because of this, it is convenient to define an infrastructure that allows
us to work with them efficiently. Abstract binding trees do precisely that
at the level of syntax, and are just abstract syntax trees when no binders
are present. General judgments are a similar abstraction at the judgment
level. We identify α-conversion and substitution as fundamental
operations associated with binders. Deductive systems that embed both
hypothetical and general judgments form an eminently flexible
representation tool for a large class of languages.
- Key Concepts:
Names and Binders,
Primitive Operations on Names,
General Judgments,
Generalized Rules,
Generalized Inductive Definitions
- Readings:
- Handout: Binding and Scope
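As a rough illustration (using a naive named representation rather than the
abstract binding trees of the handout), the syntax of the untyped λ-calculus
can be written as:

    datatype term = Var of string           (* variable *)
                  | Lam of string * term    (* Lam (x, e) binds x in e *)
                  | App of term * term      (* application *)

    (* α-conversion: Lam ("x", Var "x") and Lam ("y", Var "y")
       denote the same term *)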
|
Mon 28 Jan.
Lecture 5
|
Static and Dynamic Semantics
We define a simple spreadsheet-like language but, unlike spreadsheets,
introduce types to classify atomic objects. Typing rules are introduced
next to classify expressions: they define the language's static semantics.
Execution rules describe how to evaluate expressions and constitute its
dynamic semantics. We show several approaches to defining the dynamic
semantics of a language, and compare them.
- Key Concepts:
Types,
Static Semantics,
Dynamic Semantics,
Type-Free Execution,
Transition vs. Evaluation Semantics
- Readings:
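For a flavor of the two kinds of rules, here are a typing rule and two
transition rules for addition in a small expression language (the notation
is illustrative and may differ from the lecture's):

    \[
      \frac{e_1 : \mathsf{num} \qquad e_2 : \mathsf{num}}
           {\mathsf{plus}(e_1, e_2) : \mathsf{num}}
      \qquad
      \frac{e_1 \mapsto e_1'}
           {\mathsf{plus}(e_1, e_2) \mapsto \mathsf{plus}(e_1', e_2)}
      \qquad
      \frac{}{\mathsf{plus}(\overline{n_1}, \overline{n_2}) \mapsto \overline{n_1 + n_2}}
    \]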
|
Tue 29 Jan.
Recitation 3
|
Elements of LaTeX
|
Wed 30 Jan.
Lecture 6
|
Type Safety
How do we know that the rules we have defined make sense? We prove it
mathematically. The key results are type preservation (types provide a
track from which execution can never get off) and progress (execution
always knows what to do next). We trace back the very possibility of
proving these theorems to the interaction between two types of rules,
introduction and elimination forms, from which we extract a general design
principle. We conduct these proofs in the transition semantics and
discuss issues with the evaluation semantics. We conclude by examining
dynamic errors.
- Key Concepts:
Preservation and Progress Theorems,
Introduction and Elimination Forms,
Errors
- Readings:
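The two theorems have the following standard shape (stated here up to
notational details):

    \[
      \textbf{Preservation: } \text{if } e : \tau \text{ and } e \mapsto e'
      \text{, then } e' : \tau.
    \]
    \[
      \textbf{Progress: } \text{if } e : \tau \text{, then either } e
      \text{ is a value or } e \mapsto e' \text{ for some } e'.
    \]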
|
Mon 4 Feb.
Lecture 7
|
Functional Core Language
We define a new language with just (non-recursive) functions and observe
how the static and dynamic semantics play out. This involves the
introduction of closures to handle scoping issues in an environment-based
evaluation semantics.
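The issue closures solve can already be seen in ML: a function value must
remember the environment in which it was created. A small illustrative
example:

    fun adder n = fn x => x + n   (* the returned function captures n *)
    val add3  = adder 3
    val seven = add3 4            (* evaluates to 7: the closure remembers n = 3 *)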
|
Tue 5 Feb.
Recitation 4
|
Twelf
|
Wed 6 Feb.
Lecture 8
|
Recursion, Iteration, Fixed Points
Obtaining an interesting language with functions and numbers requires
including some form of recursion. We show two approaches: the first,
primitive recursion, includes a recursor that allows to define all and
only the functions that can be obtained through a predetermined number of
iterations (for-loops) which yields necessarily total functions; the
second, general recursion, supports dynamically bounded iterations
(while-loops) and allows possibly partial functions.
- Key Concepts:
Primitive Recursion,
Gödel's System T,
General Recursion,
Plotkin's PCF
- Readings:
- Handout: Recursive Functions
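The two styles can be mimicked in ML. This is only a sketch: the recursor
recnat is an illustrative name, not part of any library. Primitive recursion
fixes the number of iterations in advance; general recursion does not.

    (* recnat (z, s) n  computes  s (n-1, s (n-2, ..., s (0, z) ...)) *)
    fun recnat (z, s) 0 = z
      | recnat (z, s) n = s (n - 1, recnat (z, s) (n - 1))

    (* addition by primitive recursion: always terminates *)
    fun plus m n = recnat (n, fn (_, r) => r + 1) m

    (* general recursion: nothing forces termination *)
    fun loop (n : int) : int = loop n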
|
Mon 11 Feb.
Lecture 9
|
Products and sums
We now examine languages that support fixed-length tupling constructs. We
begin with 2-tuples (pairs) and 0-tuples (unit), extend them to generic
tuples, and then define records as labeled tuples and objects as
self-referential records.
We then consider safe language constructs that allow the same data
structure to represent objects of possibly different types (variants).
The underlying mathematical concept is that of a sum. As for products, we
consider binary, nullary, n-ary and labeled sums. As concrete examples,
we define the type of Booleans and options.
- Key Concepts:
Pairs,
Unit Type,
Tuples,
Records,
Objects;
Binary Sums,
Void Type,
Labeled Sums
- Readings:
- Handout: Sum Types
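These constructs all have direct counterparts in ML (an illustration; the
type sum and the function caseSum are names made up here, not library
definitions):

    val pair : int * string = (3, "three")          (* binary product *)
    val u    : unit = ()                            (* nullary product *)

    datatype ('a, 'b) sum = InL of 'a | InR of 'b   (* binary sum *)
    fun caseSum f g (InL x) = f x                   (* the eliminator of sums *)
      | caseSum f g (InR y) = g y

    datatype bool' = True | False                   (* Booleans as a labeled sum *)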
|
Tue 12 Feb.
Recitation 5
|
Workshop: Test Anxiety, by Jumana Abdi
|
Wed 13 Feb.
Lecture 10
|
Recursive Types, Fixed Points
Natural numbers, lists, and strings have something in common: they each
specify infinitely many objects, and each is built in the same regular way.
This suggests that there is a common underlying principle that they share.
This is the recursive type construction, which allows defining a type
in terms of itself. Once this principle is exposed, we have a mechanism to
define our favorite recursive types: not just the above, but also trees,
objects, recursive functions, etc. In this lecture, we examine how
recursive types work and how the machinery they rely upon is hidden in
practical programming languages.
- Key Concepts:
Recursive Types,
Type Equations,
Iso-Recursive Semantics,
Objects (revisited),
Recursive functions (revisited).
- Readings:
- Handout: Recursive Types
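For instance, the type equation  t ≅ unit + (int × t)  is realized in ML by
a datatype declaration, which quietly inserts the fold/unfold of the
iso-recursive semantics for us (a sketch):

    datatype intlist = Nil | Cons of int * intlist  (* solves t ≅ unit + (int * t) *)
    val l = Cons (1, Cons (2, Nil))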
|
Mon 18 Feb.
Lecture 11
|
Dynamic Typing
All the languages we have seen so far are typed, and there are good
reasons for this. We look at two untyped languages and show how things
can get nasty quickly without types. The first one is actually not too
bad: the untyped λ-calculus has just functions, is
Turing-complete, but is not fun to program in. The second is an untyped
version of Plotkin's PCF: we now need to check at run time that operations
make sense. This essentially builds typechecking into the execution
semantics, with a loss in performance because we need to check that
expressions are valid all the time - in particular each time we recurse.
- Key Concepts:
Untyped λ-calculus,
Tagged Evaluation.
- Readings:
- Handout: Untyped Languages
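The run-time checks of an untyped language can be pictured in ML by tagging
every value and checking the tag at each use (an illustrative sketch):

    datatype value = Num of int
                   | Fun of (value -> value)

    (* every application must first check that the operator is a function *)
    fun apply (Fun f) v = f v
      | apply _       _ = raise Fail "not a function"   (* tag check fails *)

    fun add (Num m) (Num n) = Num (m + n)
      | add _       _       = raise Fail "not a number"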
|
Tue 19 Feb.
Recitation 6
|
Datatypes
In this lecture, we examine how ML datatypes work. We take lists as an
example and show what hides behind ML's user-friendly syntax.
- Key Concepts:
ML datatypes.
- Readings:
|
Wed 20 Feb.
Lecture 12
|
Type-Directed Optimization
We show how untyped PCF can be compiled into typed PCF extended
with errors and a type representing untyped expressions. This exercise
exposes the actual tagging and checking that goes on in an untyped
implementation and makes sources of inefficiency evident. This embedding
in a typed framework also provides an opportunity to use the type system
to carry out simple but effective optimizations aimed at mitigating the
overhead of tag creation and checking. In many cases, this can push
tagging and checking out to function or module boundaries. We then show
that the type of untyped expressions can be simulated directly within PCF.
- Key Concepts:
Hybrid Typing,
Type-Directed Optimization
- Readings:
|
Mon 25 Feb.
Lecture 13
|
Polymorphism and Generics
We now look at polymorphism, which allows writing universal functions that
work on arguments of any type. We see that polymorphism, although it has
a straightforward definition in terms of universal types, yields
surprising expressive power. Finally, we examine restricted forms of
polymorphism that simplify typechecking.
- Key Concepts:
Polymorphism,
Universal Types,
Polymorphic Definability,
Impredicativity,
Prenex Fragment
- Readings:
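ML's prenex polymorphism already shows the idea: one definition receives a
universal type and works at every instance (illustrative):

    fun id x = x                    (* val id : 'a -> 'a *)
    fun twice f x = f (f x)         (* val twice : ('a -> 'a) -> 'a -> 'a *)

    val five  = twice (fn n => n + 2) 1         (* instance at int *)
    val hi    = twice (fn s => s ^ "!") "hi"    (* instance at string *)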
|
Tue 26 Feb.
Recitation 7
|
Polymorphism and Generics
|
Wed 27 Feb.
Lecture 14
|
Data Abstraction
Powerful module languages can hide the implementation of a function, yet
providing it through a publicized interface. We trace this mechanism down
to existential types, which are in a sense dual to the universal types
that underly polymorphism.
- Key Concepts:
Modularity,
Data Abstraction,
Existential Types.
- Readings:
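In ML this mechanism surfaces as opaque signature ascription: clients see
only the interface, never the representation. The names below are made up
for illustration:

    signature COUNTER = sig
      type t                        (* representation kept abstract *)
      val zero : t
      val inc  : t -> t
      val get  : t -> int
    end

    structure Counter :> COUNTER = struct
      type t = int                  (* hidden implementation choice *)
      val zero = 0
      fun inc c = c + 1
      fun get c = c
    end

    val two = Counter.get (Counter.inc (Counter.inc Counter.zero))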
|
Mon 3 Mar.
Lecture 15
|
Stack Machines
We revisit the semantics of a language and bring it closer to actual
implementations. In particular, we model the pending operations in a
program by pushing them onto a stack and retrieving them when their
operands have been reduced to values. This stack semantics is
particularly useful when we extend our language with constructs that alter
the normal control flow of a language.
- Key Concepts:
Stack Machines,
Soundness and Completeness
- Readings:
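A miniature version of the idea, for expressions with only numbers and
addition (an illustrative sketch in ML, not the formal machine of the
lecture):

    datatype exp   = Num of int | Plus of exp * exp
    datatype frame = PlusL of exp     (* evaluating the left argument; right one pending *)
                   | PlusR of int     (* left value computed; evaluating the right argument *)

    (* eval : frame list * exp -> int     return : frame list * int -> int *)
    fun eval (k, Num n)         = return (k, n)
      | eval (k, Plus (e1, e2)) = eval (PlusL e2 :: k, e1)
    and return ([], v)               = v
      | return (PlusL e2 :: k, v1)   = eval (PlusR v1 :: k, e2)
      | return (PlusR v1 :: k, v2)   = return (k, v1 + v2)

    val seven = eval ([], Plus (Num 3, Plus (Num 2, Num 2)))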
|
Tue 4 Mar.
Recitation 8
|
Midterm review
|
Wed 5 Mar.
Midterm
|
Midterm
|
Mon 10 Mar.
Lecture 16
|
Exceptions: Control and Data
A prime example of using a stack machine is the intoduction of exceptions,
which, when raised, have the effect of unwinding the stack until the next
handler is reached. The same mechanism provides a simple way to handle
run-time errors.
- Key Concepts:
Exceptions,
Handler Stacks,
Failure
- Readings:
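The behavior is familiar from ML, where raise unwinds the control stack to
the nearest matching handle (illustrative):

    exception NotFound

    fun lookup x []               = raise NotFound
      | lookup x ((k, v) :: rest) = if x = k then v else lookup x rest

    (* the handler catches the exception propagated out of lookup *)
    fun lookupDefault x env default = lookup x env handle NotFound => default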
|
Tue 11 Mar.
Recitation 9
|
Discussion of the Midterm
|
Wed 12 Mar.
Lecture 17
|
Continuations
Having made the control stack explicit in the description of the semantics
of a language, it is a small step to reify it into a construct of this
language. This is this very primitive that languages supporting
continuations provide. We describe the formal basis of continuations and
show how they are used both to carry out a computation and to recover from
failure.
- Key Concepts:
Success Continuations,
Failure Continuations.
- Readings:
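Even without a first-class control operator, the style can be programmed by
hand by passing a success continuation and a failure continuation explicitly
(an illustrative sketch in ML):

    (* find the first element satisfying p;
       sc is applied to it on success, fc is called on failure *)
    fun find p []        sc fc = fc ()
      | find p (x :: xs) sc fc = if p x then sc x else find p xs sc fc

    val answer = find (fn n => n > 2) [1, 2, 3, 4]
                      (fn x  => "found " ^ Int.toString x)
                      (fn () => "nothing")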
|
Mon 17 Mar.
Lecture 18
|
Curry-Howard Isomorphism
One of the most surprising and far-reaching observations in the theory of
programming languages is that nearly all forms of types work in the same
way as logical operators. This is known as the Curry-Howard isomorphism.
With types interpreted as formulas, expressions get interpreted as proofs.
This opens the doors to endless possibilities: proofs (at least in some
logics) have computational content and can be executed; programs can be
generated automatically as proofs of formulas corresponding to
appropriately constrained types; our knowledge of logic can inform our
understanding of programming languages, and vice versa; new logics are
worth exploring in the same way as new programming constructs are.
- Key Concepts:
Implicational Logic,
Formulas as Types,
Proofs as Programs,
Constructive Logic
- Readings:
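A tiny instance of the correspondence: the ML term below has type
('a -> 'b) -> ('b -> 'c) -> ('a -> 'c), and reading -> as implication it is
a proof that implication is transitive (illustrative):

    fun trans (f : 'a -> 'b) (g : 'b -> 'c) : 'a -> 'c = fn x => g (f x)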
|
Tue 18 Mar.
Recitation 10
|
Propositions and Types
We examine the relationship between a language featuring solely functions
(and some base type) and implicational logic. We then extend this
parallel to encompass products (seen as conjunctions) and sum types (mapped
to disjunctions).
- Key Concepts:
Implicational logic,
Products and Conjunction,
Sums and Disjunction
- Readings:
|
Wed 19 Mar.
Lecture 19
|
Subtyping and Subsumption
It is intuitively appealing to see some types as special cases of other
types, for example an integer as a real number or a 3-D point as a 2-D
point with extra information. This is called subtyping: the ability to
provide a value of the subtype whenever a value of the supertype is
expected; the formal mechanism that governs it is the rule of
subsumption. As a side-effect, expressions no longer have a unique
type.
- Key Concepts:
Subtypes,
Supertypes,
Subsumption
- Readings:
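The subsumption rule itself is small; all the subtlety lives in the
subtyping judgment τ <: τ' (standard formulation):

    \[
      \frac{\Gamma \vdash e : \tau \qquad \tau <: \tau'}{\Gamma \vdash e : \tau'}
    \]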
|
Mon 24 Mar.
|
No class (Spring Break)
|
Tue 25 Mar.
|
Wed 26 Mar.
|
Mon 31 Mar.
Lecture 20
|
Subtyping and Variance
A subtyping relation at the level of the arguments of a type constructor
propagates to a subtyping relation at the level of the type constructor
itself. The way the resulting relation goes depends on the specific
constructor and on the specific argument. This is called variance:
covariance if the subtyping relation at the constructor level goes in the
same direction as the argument, contravariance if it flow in the opposite
direction.
- Key Concepts:
Covariant arguments,
Contravariant arguments
- Readings:
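The classic example is the function type, which is contravariant in its
argument and covariant in its result (the standard rule):

    \[
      \frac{\tau_1' <: \tau_1 \qquad \tau_2 <: \tau_2'}
           {\tau_1 \rightarrow \tau_2 \;<:\; \tau_1' \rightarrow \tau_2'}
    \]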
|
Tue 1 Apr.
Recitation 11
|
Varieties of Subtyping
Subtyping on base types, such as between integers and reals, is generally
achieved by means of coercions that mediate between the different internal
representations. A natural subtyping relation arises within product and
sum types by omitting (or adding) components. Trickier is the
subtyping relation endowed on recursive types.
- Key Concepts:
Base Types and coercion,
Width Subtyping for Records and Sums,
Subtyping Recursive types
- Readings:
|
Wed 2 Apr.
Lecture 21
|
Storage Effects
So far, repeated evaluations of the same expression always produce the
same value: expressions are self-contained and the result depends only on
the text of the expression. A common departure from this model is found
in languages featuring a memory and operations to access and manipulate
it. The basic building block is the reference cell, which significantly
alters the model examined so far, to the point of making it possible to
simulate recursion.
- Key Concepts:
Memory,
References,
Imperative Programming,
Backpatching
- Readings:
- Pierce: Ch. 13
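Backpatching is the trick of tying a recursive knot through the store; in ML
it looks roughly like this (an illustrative sketch):

    (* start with a placeholder, then overwrite it with a function
       that reads the cell back *)
    val fact = ref (fn (n : int) => n)
    val ()   = fact := (fn n => if n = 0 then 1 else n * (!fact) (n - 1))
    val six  = (!fact) 3      (* evaluates to 6 *)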
|
Mon 7 Apr.
Lecture 22
|
Monadic Storage Effects
The standard treatment of storage effects, discussed in the last
lecture, is driven by the evaluation semantics, in the sense that it is
impossible to know whether an expression will be effectful by looking at
its type alone. Another approach is to mark the possibility of an effect
in the type of expressions. This is achieved using a typing device called
a monad.
- Key Concepts:
Monad,
Computations,
Explicit Effect
- Readings:
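A monad for a single integer-valued memory cell can be sketched directly in
ML; the point is that the type 'a comp announces a possible effect, while
plain values do not. The names below are made up for illustration:

    type state   = int
    type 'a comp = state -> 'a * state        (* a computation that may use the state *)

    fun return (x : 'a) : 'a comp = fn s => (x, s)         (* effect-free inclusion *)
    fun bind (m : 'a comp) (f : 'a -> 'b comp) : 'b comp   (* sequencing of effects *)
      = fn s => let val (x, s') = m s in f x s' end

    val get : state comp = fn s => (s, s)
    fun put (s' : state) : unit comp = fn _ => ((), s')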
|
Tue 8 Apr.
Recitation 12
|
Eagerness and Laziness
Up to this point, the distinction between an eager and a lazy construct
has been a matter of how the evaluation rules were chosen to be defined.
Another approach is to lift this distinction to the level of types, so
that the programmer can choose the interpretation based on the type of the
particular construct he/she decides to use. Yet another approach is to
design a type for lazy objects, which can be used within any constructor.
This latter approach is called a suspension.
- Key Concepts:
Lazy Types,
Lazy Computation,
Suspensions
- Readings:
|
Wed 9 Apr.
Lecture 23
|
Call-by-Need
An eager semantics evaluates an argument exactly once (even if it is never
used). A lazy semantics re-evaluates it each time it is
encountered. In each case, there is the risk of doing more work than
strictly necessary. Call-by-need optimizes this process by evaluating an
argument the first time it is encountered but remembering the obtained
value for future uses.
- Key Concepts:
Call-by-Need,
Memoization,
On-Demand Computation
- Readings:
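Call-by-need can be programmed with a reference cell that caches the result
of a thunk (a standard memoizing-suspension sketch; the names are
illustrative):

    datatype 'a cell = Thunk of unit -> 'a | Value of 'a
    type 'a susp = 'a cell ref

    fun delay (f : unit -> 'a) : 'a susp = ref (Thunk f)

    fun force (s : 'a susp) : 'a =
      case !s of
        Value v => v                                         (* already computed: reuse *)
      | Thunk f => let val v = f () in s := Value v; v end   (* first use: compute and cache *)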
|
Mon 14 Apr.
Lecture 24
|
Speculative Parallelism
The mechanism supporting extreme laziness in a call-by-need semantics is
readily adapted to enable extreme eagerness in a setting supporting
parallel executions: rather than waiting until the value of an argument is
needed, the idea is to evaluate it speculatively, in parallel with other
such evaluations, so that it is there the first time it is needed.
- Key Concepts:
Parallelism,
Speculative Execution,
Futures
- Readings:
|
Tue 15 Apr.
Recitation 13
|
Work-Efficient Parallelism
Effect-free programming languages provide plenty of opportunities for
the parallel evaluation of subcomputations. Specifically, unless an
evaluation depends on the result of another, they can be executed in
parallel. Dependencies and theoretical execution time can be measured by
equipping the evaluation semantics with an abstract notion of cost, which
is then used to calculate useful figures such as the number of steps in a
purely sequential setup (work) and the time in a maximally parallel
environment (depth).
- Key Concepts:
Parallel vs. Sequential Evaluation,
Control Dependency,
Cost Semantics,
Work and Depth
- Readings:
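For a binary operation whose arguments can be evaluated independently, the
cost semantics typically combines costs as follows (standard equations, with
illustrative notation):

    \[
      W(e_1 \oplus e_2) = W(e_1) + W(e_2) + 1
      \qquad
      D(e_1 \oplus e_2) = \max(D(e_1), D(e_2)) + 1
    \]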
|
Wed 16 Apr.
Lecture 25
|
Resource-Bound Parallelism
In practice, the number of parallel evaluations is limited by the number
of available processing units. The expected execution time depends on the
underlying architecture, but can be estimated accurately using Brent's
Theorem in common instances.
- Key Concepts:
Processing Resources,
Symmetric Multiprocessor,
Fetch-and-add Instruction,
Brent's Theorem,
Data Parallelism
- Readings:
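One common form of the bound given by Brent's Theorem relates the running
time T_p on p processors to the work W and depth D of the computation:

    \[
      T_p \;\le\; \frac{W}{p} + D
    \]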
|
Mon 21 Apr.
Lecture 26
|
Concurrency
Expressions so far have been self-contained: they were evaluated from an
initial state to a final value without interaction with the external
world. The possibility of interaction gives rise to the notion of a
process, with synchronization actions causing run-time events. A natural
next step consists in considering the interactions among several processes
(one of them possibly modeling the external world), hence capturing
distributed systems.
- Key Concepts:
Actions and Events,
Structural Congruences,
Concurrent Interaction,
Replication
- Readings:
|
Tue 22 Apr.
Recitation 14
|
Process Calculi
Allowing actions to send or receive data promotes synchronization to a
communication mechanism. This becomes particularly powerful when
communication channels are themselves seen as data and a mechanism is
provided to create them on the fly: this gives rise to the notion of
private channel. The semantics of communicating processes is open-ended
in the sense that several alternative models coexists. Among them is the
decision on whether a sending process should wait till its message has
been received, or proceed with its computation.
- Key Concepts:
Private Channels,
Scope Extrusion,
Synchronous Communication,
Asynchronous Communication
- Readings:
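Concurrent ML, mentioned above among the installed software, provides
exactly these ingredients: threads, typed channels, and synchronous
send/receive. A minimal sketch, assuming the usual CML interface with
RunCML.doit as the entry point:

    (* one thread sends an int over a fresh channel; the main thread receives it *)
    fun main () =
      let
        val ch : int CML.chan = CML.channel ()            (* a new, private channel *)
        val _  = CML.spawn (fn () => CML.send (ch, 42))   (* producer thread *)
        val v  = CML.recv ch                              (* synchronous receive *)
      in
        TextIO.print (Int.toString v ^ "\n")
      end

    val _ = RunCML.doit (main, NONE)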
|
Wed 23 Apr.
Review
|
Final review
|
Mon 05 May 9:00-12:00 (C008)
Final
|
Final
|