Mon 19 Sep Lab 3 |
Time of Your Life
This lab practices testing code and timing its execution to estimate its complexity.
|
Tue 20 Sep Lecture 7 |
Binary search
When searching for a value in a sorted array, examining the middle
element allows us to discard half of the array in the worst case.
The resulting algorithm, binary search, has logarithmic complexity,
a marked improvement over the linear complexity of linear search.
Achieving a correct imperative implementation can be tricky, however,
and we once more use contracts as a mechanism to reach this goal.
|
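As an illustration of this lecture's material, here is a minimal binary search sketch in C (the lecture itself works in C0 with explicit contracts; the function name and the integer element type are assumptions made for this sketch):

    /* Returns an index i with A[i] == x, or -1 if x does not occur.
       Requires A to be sorted in ascending order and n >= 0. */
    int binsearch(int x, int *A, int n) {
      int lo = 0;
      int hi = n;                       /* invariant: x can only be in A[lo, hi) */
      while (lo < hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (A[mid] == x) return mid;
        else if (A[mid] < x) lo = mid + 1;  /* discard the left half */
        else hi = mid;                      /* discard the right half */
      }
      return -1;                        /* x is not in the array */
    }

Each iteration halves the interval [lo, hi), which is where the logarithmic bound comes from.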
Thu 22 Sep Lecture 8 |
Quicksort
We use the key idea underlying binary search to implement two sorting
algorithms with better complexity than selection sort. We examine
one of them, quicksort, in detail, again using contracts to achieve
a correct implementation, this time a recursive one. We observe
that the asymptotic complexity of quicksort depends on the value of
a quantity the algorithm uses (the pivot) and discuss ways to reduce
the chances of making a bad choice for it. We conclude by examining
another sorting algorithm, mergesort, which is immune to this issue.
|
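As an illustration, a quicksort sketch in C (the lecture develops this in C0 with contracts; the in-place partitioning scheme and names below are assumptions of this sketch, which always picks the leftmost element as the pivot, precisely the choice the lecture discusses improving):

    void swap(int *A, int i, int j) {
      int tmp = A[i]; A[i] = A[j]; A[j] = tmp;
    }

    /* Partitions A[lo, hi) around the pivot initially stored at A[lo];
       returns the pivot's final index. */
    int partition(int *A, int lo, int hi) {
      int pivot = A[lo];
      int left = lo + 1;          /* A[lo+1, left) holds elements <= pivot */
      int right = hi;             /* A[right, hi)  holds elements >  pivot */
      while (left < right) {
        if (A[left] <= pivot) left++;
        else { right--; swap(A, left, right); }
      }
      swap(A, lo, left - 1);      /* place the pivot between the two regions */
      return left - 1;
    }

    void quicksort(int *A, int lo, int hi) {
      if (hi - lo <= 1) return;   /* 0 or 1 elements: already sorted */
      int p = partition(A, lo, hi);
      quicksort(A, lo, p);        /* elements no larger than the pivot */
      quicksort(A, p + 1, hi);    /* elements larger than the pivot */
    }

If the pivot repeatedly ends up near one end of its range (for instance on already-sorted input with this leftmost-pivot choice), the recursion depth becomes linear and the overall cost quadratic.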
Fri 23 Sep Recitation 4 |
A Strange Sort of Proof
This recitation reviews proving the correctness of functions.
|
Mon 10 Oct Lab 6 |
List(en) Up!
This lab practices working with linked lists.
|
Tue 11 Oct Lecture 12 |
Unbounded Arrays
When implementing a data structure for a homogeneous collection,
using an array has the advantage that each element can be accessed
in constant time, but the drawback that we must fix the number of
elements a priori. Linked lists can have arbitrary length, but
access takes linear time. Can we have the best of both worlds?
Unbounded arrays rely on an array to store the data, but double it
when we run out of space for new elements. The effect is that
adding an element can be either very cheap or very expensive
depending on how full the array is. However, over a series of
insertions, each insertion appears to take constant time on average.
Showing that this is the case requires a technique called amortized
analysis, which we explore at length in this lecture.
|
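As an illustration, a sketch of an unbounded array in C (the lecture works in C0; the struct layout, initial capacity, and function names below are assumptions of this sketch):

    #include <stdlib.h>
    #include <assert.h>

    typedef struct uba {
      int *data;
      int size;     /* number of elements currently stored */
      int limit;    /* allocated capacity, always > 0 */
    } uba;

    uba *uba_new(void) {
      uba *A = malloc(sizeof(uba));
      assert(A != NULL);
      A->size = 0;
      A->limit = 4;                               /* small initial capacity */
      A->data = malloc(A->limit * sizeof(int));
      assert(A->data != NULL);
      return A;
    }

    void uba_add(uba *A, int x) {
      if (A->size == A->limit) {                  /* expensive case: double */
        int *bigger = malloc(2 * A->limit * sizeof(int));
        assert(bigger != NULL);
        for (int i = 0; i < A->size; i++) bigger[i] = A->data[i];
        free(A->data);
        A->data = bigger;
        A->limit = 2 * A->limit;
      }
      A->data[A->size] = x;                       /* cheap, common case */
      A->size++;
    }

The occasional linear-time copy is paid for by the many constant-time insertions that preceded it, which is exactly what the amortized analysis makes precise.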
Thu 13 Oct Lecture 13 |
Hash Tables
Associative arrays are data structures that allow efficiently
retrieving a value on the basis of a key: arrays are the special
case where valid indices into the array are the only possible keys.
One popular way to implement associative arrays is to use hash
tables, which compute an array index out of each key and use that
index to find the associated value. However, multiple keys can map
to the same index, something called a collision. We discuss several
approaches to dealing with collisions, focusing on one called separate
chaining. The cost of an access depends on the contents of the hash
table. While a worst-case analysis is useful, it is not typically
representative of normal usage. We compute the average-case
complexity of an access relative to a few simple parameters of the
hash table.
|
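As an illustration, a sketch of separate chaining in C (the lecture works in C0 with a more general interface; the fixed table size, the int keys and values, and the naive hash function below are simplifying assumptions of this sketch):

    #include <stdlib.h>
    #include <assert.h>
    #include <stdbool.h>

    #define TABLE_SIZE 1024

    typedef struct chain_node {
      int key;
      int value;
      struct chain_node *next;
    } chain_node;

    typedef struct { chain_node *table[TABLE_SIZE]; } hashtable;

    int hash(int key) {                  /* deliberately simple hash function */
      int h = key % TABLE_SIZE;
      return h < 0 ? h + TABLE_SIZE : h;
    }

    void ht_insert(hashtable *H, int key, int value) {
      int i = hash(key);
      for (chain_node *p = H->table[i]; p != NULL; p = p->next) {
        if (p->key == key) { p->value = value; return; }  /* overwrite */
      }
      chain_node *node = malloc(sizeof(chain_node));      /* new entry */
      assert(node != NULL);
      node->key = key;
      node->value = value;
      node->next = H->table[i];          /* colliding keys share this chain */
      H->table[i] = node;
    }

    bool ht_lookup(hashtable *H, int key, int *value) {
      for (chain_node *p = H->table[hash(key)]; p != NULL; p = p->next) {
        if (p->key == key) { *value = p->value; return true; }
      }
      return false;                      /* key not present */
    }

The cost of a lookup is proportional to the length of the chain it traverses, which on average is governed by the ratio of entries to table size (the load factor).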
Fri 14 Oct Recitation 7 |
Array Disarray
This recitation practices coding to achieve amortized cost.
|
Mon 14 Nov Lab 11 |
Once you C1 you C them all
This lab practices translating C0 code to C and managing memory so
as to avoid leaks.
|
Tue 15 Nov Lecture 21 |
C's Memory Model
C provides a very flexible view of memory, which allows writing
potentially dangerous code unless one is fully aware of the
consequences of one's decisions. This lecture is about building this
awareness. We see that, while C overlays an organization on the
monolithic block of memory the operating system assigns to a
running program, it also provides primitives that allow violating
this organization. We focus on two such primitives, pointer
arithmetic and address-of. While some of their uses are legitimate,
others are not. C's approach to many non-legitimate operations is to
declare them undefined, which means that what happens when a program
engages in them is decided by the specific implementation of the C
compiler in use.
|
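A small illustration (not code from the lecture) of legitimate and undefined uses of pointer arithmetic and address-of:

    #include <stdio.h>

    int main(void) {
      int A[4] = {10, 20, 30, 40};
      int *p = A;                 /* the array decays to a pointer to A[0] */

      printf("%d\n", *(p + 2));   /* legitimate: same as A[2], prints 30 */

      int x = 7;
      int *q = &x;                /* address-of: a pointer to the local x */
      printf("%d\n", *q);         /* prints 7 */

      /* Undefined behavior: p + 5 points outside the array, so the C
         standard says nothing about what reading it does; the outcome is
         whatever the compiler and platform happen to produce. */
      /* printf("%d\n", *(p + 5)); */

      return 0;
    }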
Thu 17 Nov Lecture 22 |
Types in C
In this lecture, we examine how C implements basic types, and what
we as programmers need to be aware of as we use them. We begin with
strings, which in C are just arrays of characters with
the null character playing a special role. A variety of
number types are available in C, but some of their characteristics
are not defined by the language, most notably their size and what
happens in case of overflow. As a consequence, different compilers
make different choices, which complicates writing portable code.
|
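A small illustration (not code from the lecture) of C strings and of numeric characteristics that the language leaves to the implementation:

    #include <stdio.h>
    #include <string.h>
    #include <limits.h>

    int main(void) {
      /* A C string is an array of char terminated by the null character. */
      char greeting[] = "hi!";    /* 4 bytes: 'h', 'i', '!', '\0' */
      printf("length %zu, size %zu\n", strlen(greeting), sizeof(greeting));

      /* The size of int is implementation-defined; limits.h reports the
         range the compiler in use actually provides. */
      printf("int uses %zu bytes, INT_MAX = %d\n", sizeof(int), INT_MAX);

      /* Signed overflow is undefined behavior: a given compiler may wrap
         around, trap, or assume it never happens while optimizing. */
      /* int boom = INT_MAX + 1; */

      return 0;
    }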
Fri 18 Nov Recitation 11 |
C-ing is Believing
This recitation practices advanced C constructs.
|
Mon 28 Nov Lab 13 |
Tue 29 Nov Lecture 24 |
Thu 1 Dec Lecture 25 |
Fri 2 Dec Recitation 12 |
Mon 5 Dec Lab 14 |
Tue 6 Dec Lecture 26 |
Thu 8 Dec Lecture 27 |
Fri 9 Dec Recitation 13 |
Thu 15 Dec (5:30-8:30) [rooms TBA] final |
2016 Iliano Cervesato |