CMU 15-859(B), Spring 2012
MACHINE LEARNING THEORY
MW 1:30-2:50, GHC 4303
Course description:
This course will focus on theoretical aspects of machine learning. We
will examine questions such as: What kinds of guarantees can we prove
about learning algorithms? Can we design algorithms for interesting
learning tasks with strong guarantees on accuracy and amounts of data
needed? What can we say about the inherent ease or difficulty of
learning problems? Can we devise models that are amenable to
theoretical analysis and that make sense empirically? Addressing these
questions will bring in connections to probability and statistics,
online algorithms, game theory, complexity theory, information theory,
cryptography, and empirical machine learning research.
Grading will be based on 6 homework assignments, class
participation, a small class project, and a take-home final
(worth about 2 homeworks). From time to time, students will
also be asked to help with grading assignments.
[2010 version of the course]
Prerequisites: A Theory/Algorithms background or a Machine
Learning background.
Text (recommended):
An Introduction to Computational Learning Theory by Michael Kearns
and Umesh Vazirani, plus papers and notes for topics not in the book.
Office hours: Email / stop by anytime!
Handouts
Lecture Notes & tentative plan
- 01/18: Introduction. PAC model and Occam's
razor.
- 01/23: The Mistake-Bound model. Combining
expert advice. Connections to info theory.
- 01/25: The Winnow algorithm.
- 01/30: The Perceptron Algorithm, margins,
& intro to kernels. [slides] (A minimal Perceptron sketch appears after this schedule.)
- 02/01: Uniform convergence, tail
inequalities (Chernoff/Hoeffding), VC-dimension I.
[more notes]
- 02/06: VC-dimension II (proofs of main theorems).
- 02/08: Boosting I: Weak to strong learning,
Schapire's original method.
- 02/13: Boosting II: Adaboost + connection
to the Weighted Majority analysis + L_1 margin bounds.
- 02/15: Rademacher bounds and McDiarmid's inequality.
- 02/20: MB=>PAC, Support Vector Machines,
L_2 margin uniform-convergence bounds.
- 02/22: Margins, kernels, and general
similarity functions (L_1 and L_2 connection).
- 02/27: Active learning.
- 02/29: Semi-supervised learning.
- 03/05: Learning and property testing.
- 03/19: Learning with noise and the Statistical Query model I.
- 03/21: Statistical Query model II:
characterizing weak SQ-learnability.
- 03/26: Fourier-based learning and learning
with membership queries: the KM algorithm.
- 03/28: Fourier spectrum of decision trees
and DNF. Also hardness of learning parities with kernels.
- 04/02: Learning Finite State
Environments.
- 04/04: MDPs and reinforcement learning.
- 04/09: Maxent and maximum-likelihood exponential
models; connection to Winnow.
- 04/11: Offline->online optimization.
- 04/16: Bandit algorithms and sleeping experts.
- 04/18: Game theory I (zero-sum and
general-sum games). [paper on internal vs external regret]
- 04/23: Game theory II (achieving low
internal/swap regret). Congestion/exact-potential games.
- 04/25: Transfer learning.
- 04/30: Project presentations.
- 05/02: Project presentations.
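
As a concrete companion to the 01/30 lecture, here is a minimal sketch of
the classic Perceptron algorithm in Python. This is the standard textbook
update rule, not code from the course notes; numpy and the toy data below
are assumptions made purely for illustration.

    import numpy as np

    def perceptron(X, y, max_passes=100):
        # Classic Perceptron: whenever an example is misclassified,
        # add y*x to the weight vector. If the data are linearly
        # separable with margin gamma (for a unit-length separator)
        # and every ||x|| <= R, the total number of mistakes is at
        # most (R/gamma)^2 -- the classic mistake bound.
        w = np.zeros(X.shape[1])
        mistakes = 0
        for _ in range(max_passes):
            clean_pass = True
            for x, label in zip(X, y):
                if label * np.dot(w, x) <= 0:  # mistake (or on the boundary)
                    w = w + label * x          # the update rule
                    mistakes += 1
                    clean_pass = False
            if clean_pass:                     # no mistakes: data separated
                break
        return w, mistakes

    # Toy usage (illustrative data only):
    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w, m = perceptron(X, y)
    print("weights:", w, "mistakes:", m)
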
Additional Readings & More Information
- Nick Littlestone, Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm. This is the Winnow paper from 1987, which also discusses a number of aspects of the online mistake-bound model.
- The Adaboost paper: Yoav Freund and Rob Schapire,
A decision-theoretic generalization of on-line learning and an application to boosting, Journal of Computer and System Sciences, 55(1):119-139, 1997. (A minimal AdaBoost sketch appears at the end of this section.)
- Robert Williamson, John Shawe-Taylor, Bernhard Scholkopf, and Alex Smola,
Sample Based Generalization Bounds. Gives tighter generalization bounds
where instead of using "the maximum number of ways of labeling a set of 2m
points" you can use "the number of ways of labeling your actual sample".
- Maria-Florina Balcan, Avrim Blum, and Nathan Srebro, Improved Guarantees for Learning via Similarity Functions. Gives a formulation and analysis for learning with general similarity functions. Also shows that for any class of large SQ dimension, there cannot be a kernel under which all (or even a non-negligible fraction) of the functions in the class have large margin.
- Avrim Blum, Merrick Furst, Jeffrey Jackson, Michael Kearns, Yishay
Mansour, and Steven Rudich, Weakly Learning DNF and Characterizing
Statistical Query Learning Using Fourier Analysis. Defines and analyzes
the SQ dimension. Also gives weak learning of DNF via Fourier analysis.
- See also a previous version of
the course.
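
As a companion to the Freund-Schapire reference above, here is a minimal
AdaBoost sketch in Python. The exhaustive decision-stump weak learner
(best_stump) is a hypothetical stand-in added to keep the example
self-contained, not anything prescribed by the paper; labels are assumed
to lie in {-1, +1}.

    import numpy as np

    def best_stump(X, y, D):
        # Hypothetical weak learner: exhaustively pick the threshold
        # stump (feature, threshold, sign) with the lowest weighted
        # error under the distribution D.
        best, best_err = None, np.inf
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = np.sum(D[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, thr, sign)
        return best, best_err

    def stump_predict(stump, X):
        j, thr, sign = stump
        return sign * np.where(X[:, j] >= thr, 1, -1)

    def adaboost(X, y, T=20):
        # Each round t picks a weak hypothesis h_t with weighted error
        # eps_t, sets alpha_t = 0.5*ln((1-eps_t)/eps_t), multiplies
        # D(i) by exp(-alpha_t * y_i * h_t(x_i)), and renormalizes.
        # If every eps_t <= 1/2 - gamma, the training error of the
        # weighted vote falls below exp(-2 * gamma^2 * T).
        n = len(y)
        D = np.full(n, 1.0 / n)      # start with the uniform distribution
        ensemble = []
        for _ in range(T):
            stump, eps = best_stump(X, y, D)
            if eps >= 0.5:           # no weak edge left; stop early
                break
            eps = max(eps, 1e-12)    # guard against division by zero
            alpha = 0.5 * np.log((1 - eps) / eps)
            pred = stump_predict(stump, X)
            D = D * np.exp(-alpha * y * pred)
            D = D / D.sum()          # renormalize to a distribution
            ensemble.append((alpha, stump))
        return ensemble

    def predict(ensemble, X):
        # Sign of the alpha-weighted vote of the chosen stumps.
        return np.sign(sum(a * stump_predict(s, X) for a, s in ensemble))

    # Toy usage (illustrative data only):
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([-1, -1, 1, 1])
    print(predict(adaboost(X, y, T=5), X))
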