Date | Topic | Supplementary Reading and Notes
Weeks 1-2 | Basic Concepts and Terminology: maximum likelihood vs. Bayesian inference, predictive distributions, sufficient statistics, Cramér-Rao bounds, normal and non-normal approximations, exponential families | Tanner, Chapters 1-2 (worked example below)
Weeks 3-4 | Approaches to Statistical Inference: informative vs. discriminative methods, linear regression and decision boundaries, incomplete and augmented data, the need for greedy algorithms and sampling, bias/variance tradeoffs | Chapter 3 (bias/variance decomposition below)
Weeks 5-6 | The EM Algorithm: EM for exponential families, ECM and other GEMs, acceleration techniques, deficiency, bootstrapping, Monte Carlo E-step, EM and iterative scaling | Chapter 4 (EM sketch below)
Weeks 7-8 | Data Augmentation and Markov Chain Monte Carlo Algorithms: convergence theory, sampling and importance resampling, Gibbs sampling, the Metropolis algorithm, coupling and exact sampling methods, conductance and convergence rates | Chapters 5-6 (Metropolis sketch below)
Weeks 9-11 | Techniques for Supervised and Unsupervised Learning: CART and minimum-impurity partitions, additive models and boosting, maximum entropy and discriminative methods, hard and soft clustering, approximate inference algorithms for clustering, dimension reduction techniques | See readings page (impurity-split sketch below)
Weeks 12-13 | Additional Topics from Information Theory and Statistics: the method of types, information geometry and alternating projection, universal prediction and compression, large deviations | See readings page (type-counting bounds below)
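
The short notes below illustrate one representative topic from each unit. They are sketches under stated assumptions, with notation and helper names chosen for the examples rather than taken from the course materials.

Weeks 1-2 (maximum likelihood vs. Bayesian inference). For n Bernoulli trials with k successes, and a Beta(alpha, beta) prior chosen only for this example, the two inferential styles give

```latex
\hat\theta_{\mathrm{ML}} = \frac{k}{n}, \qquad
\theta \mid x_{1:n} \sim \mathrm{Beta}(\alpha + k,\ \beta + n - k), \qquad
\Pr(x_{n+1} = 1 \mid x_{1:n}) = \frac{\alpha + k}{\alpha + \beta + n}.
```

The count k = sum of the x_i is the sufficient statistic, the rightmost expression is the posterior predictive probability, and the Bernoulli model is the simplest exponential family, so this single example touches most of the Weeks 1-2 vocabulary.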
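
Weeks 3-4 (bias/variance tradeoffs). Assuming the standard model y = f(x) + eps with mean-zero noise of variance sigma^2, the squared-error risk of any estimator f-hat decomposes as

```latex
\mathbb{E}\big[(y - \hat f(x))^2\big]
  = \sigma^2
  + \big(\mathbb{E}[\hat f(x)] - f(x)\big)^2
  + \operatorname{Var}\big(\hat f(x)\big),
```

that is, irreducible noise plus squared bias plus variance; richer model classes shrink the middle term while inflating the last one.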
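
Weeks 5-6 (the EM algorithm). A minimal sketch of EM for a two-component one-dimensional Gaussian mixture, assuming NumPy; the function name and initialization are ad hoc choices for this illustration, not the course's formulation:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """Illustrative EM sketch for a two-component 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    n = x.size
    # Ad hoc initialization: equal weights, two random data points as means.
    w = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted MLEs, i.e. exponential-family sufficient
        # statistics averaged under the E-step distribution.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Toy data: mixture of N(-2, 1) and N(3, 0.25).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))
```

The E-step responsibilities are exactly the soft cluster assignments that reappear under Weeks 9-11.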
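
Weeks 7-8 (the Metropolis algorithm). A random-walk Metropolis sketch, again assuming NumPy; the step size and target density are placeholders for the example:

```python
import numpy as np

def metropolis(log_target, x0, n_samples=10000, step=1.0, seed=0):
    """Random-walk Metropolis with a symmetric Gaussian proposal (sketch)."""
    rng = np.random.default_rng(seed)
    x, logp = x0, log_target(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()
        logp_prop = log_target(prop)
        # Accept with probability min(1, target(prop) / target(x));
        # the symmetric proposal density cancels in the ratio.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        samples[i] = x
    return samples

# Example: sample a standard normal from its unnormalized log-density.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
print(draws.mean(), draws.std())  # roughly 0 and 1
```

Because only the ratio of target densities appears, the normalizing constant is never needed, which is what makes the method practical for posterior simulation.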
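
Weeks 9-11 (CART and minimum-impurity partitions). Each CART split is a one-dimensional search for the threshold minimizing weighted impurity; this sketch uses the Gini criterion on binary labels, with all names invented for the illustration:

```python
import numpy as np

def gini(y):
    """Gini impurity of a 0/1 label vector."""
    if y.size == 0:
        return 0.0
    p = y.mean()
    return 2 * p * (1 - p)

def best_split(x, y):
    """Threshold on one feature minimizing weighted Gini impurity (sketch)."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_imp = None, gini(y)
    for i in range(1, y.size):
        if x[i] == x[i - 1]:
            continue  # cannot split between equal feature values
        left, right = y[:i], y[i:]
        imp = (left.size * gini(left) + right.size * gini(right)) / y.size
        if imp < best_imp:
            best_t, best_imp = (x[i - 1] + x[i]) / 2, imp
    return best_t, best_imp

x = np.array([0.5, 1.2, 1.9, 3.1, 3.8, 4.4])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))  # threshold 2.5, impurity 0.0
```

Growing a tree is this search applied recursively over features and over the cells of the current partition.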
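
Weeks 12-13 (the method of types). The unit's core counting bounds, stated here in the standard form found in Cover and Thomas (the course's notation may differ): for length-n sequences over a finite alphabet, with T(P) the type class of a type P and Q^n the product measure,

```latex
|\mathcal{P}_n| \le (n+1)^{|\mathcal{X}|}, \qquad
\frac{2^{nH(P)}}{(n+1)^{|\mathcal{X}|}} \le |T(P)| \le 2^{nH(P)}, \qquad
Q^n\big(T(P)\big) \le 2^{-n\,D(P\|Q)}.
```

Since the number of types grows only polynomially in n while type classes grow exponentially, these bounds drive the large-deviations results listed for these weeks.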