CMU 15-859(B), Spring 2009
MACHINE LEARNING THEORY
MW 3:00-4:20, Wean 4623
Course description:
This course will focus on theoretical aspects of machine learning. We
will examine questions such as: What kinds of guarantees can one prove
about learning algorithms? What are good algorithms for achieving
certain types of goals? Can we devise models that are amenable to
mathematical analysis and also make sense empirically? What can we say
about the inherent ease or difficulty of learning problems? Addressing
these questions will require pulling in notions and ideas from
statistics, complexity theory, information theory, cryptography, game
theory, and empirical machine learning research.
Grading will be based on 6 homework assignments, class participation, a small class project, and a take-home final (weighted as roughly two homework assignments). Students will also occasionally be asked to help with the grading of assignments.
Text:
An Introduction to Computational Learning Theory by Michael Kearns
and Umesh Vazirani, plus papers and notes for topics not in the book.
Office hours: MW 4:30-5:15
Lecture Notes & tentative plan
- 01/12: Introduction. PAC model and Occam's razor.
- 01/14: The Mistake-Bound model. Combining expert advice. Connections to info theory and game theory.
- 01/21: The Winnow algorithm.
- 01/26: The Perceptron Algorithm, Margins, and Kernel functions.
- 01/28: Uniform convergence, tail inequalities (Chernoff/Hoeffding), VC-dimension I.
- 02/02: VC-dimension II.
- 02/04: Rademacher bounds and McDiarmid's inequality.
- 02/09: Boosting I: weak vs strong learning, basic issues.
- 02/11: Boosting II: Adaboost + connection to WM analysis + L_1 margin bounds.
- 02/16: Support Vector Machines, properties of kernels, MB=>PAC, L_2 margin bounds.
- 02/18: Margins, kernels, and general similarity functions (L_1 and L_2 connection).
- 02/23: Cryptographic hardness results.
- 02/25: Maxent and maximum-likelihood exponential models. Connection to Winnow.
- 03/02: Statistical Query model I.
- 03/04: Statistical Query model II.
- 03/16: Fourier-based algorithms.
- 03/18: Membership Query algorithms.
- 03/23: Membership Query algorithms II.
- 03/25: Learning finite-state environments.
- 03/30: Learning finite-state environments II.
- 04/01: MDPs and reinforcement learning.
- 04/06: Offline->online optimization.
- 04/08: Bandit problems.
- 04/13: Active learning [Steve Hanneke]
- 04/15: [class cancelled]
- 04/20: Online learning and game theory.
- 04/22: Semi-supervised learning.
- 04/27: Project presentations
- 04/29: Project presentations
Additional Readings & More Information
Books and tutorials:
- O. Bousquet, S. Boucheron, and G. Lugosi, Introduction to Statistical Learning Theory.
- PASCAL video lectures.
- N. Cristianini and J. Shawe-Taylor, Kernel Methods for Pattern Analysis, 2004.
- N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines (and other kernel-based learning methods), 2000.
- M. Anthony and P. Bartlett, Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
- V. Vapnik. Statistical Learning Theory. Wiley, 1998.
- L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
- My FOCS'03 tutorial on Machine Learning Theory.
Online Learning:
- Nick Littlestone, Learning Quickly when Irrelevant Attributes Abound: A New Linear-threshold Algorithm. Machine Learning 2:285-318, 1988. (The version pointed to here is the tech report UCSC-CRL-87-28.) This is the paper that first defined the Mistake-bound model, and also introduced the Winnow algorithm (a minimal sketch of Winnow appears after this list). A great paper.
- Littlestone and Warmuth, The Weighted Majority Algorithm. Information and Computation 108(2):212-261, 1994. Introduces the weighted majority algorithm, along with a number of variants (a minimal sketch appears after this list). Also a great paper.
- Nicolo Cesa-Bianchi, Yoav Freund, David Haussler, David Helmbold, Robert Schapire, and Manfred Warmuth, How to use expert advice. Journal of the ACM 44(3):427-485, May 1997.
Yoav Freund and Robert Schapire, Adaptive game playing using multiplicative weights. Games and Economic Behavior 29:79-103, 1999.
Continuing the line of research of the [LW] paper, these give tighter analyses of multiplicative-weighting expert algorithms and a game-theoretic perspective, and address a number of other issues.
- Adam Kalai and Santosh Vempala, Efficient algorithms for online decision problems. COLT '03.
Martin Zinkevich, Online convex programming and generalized infinitesimal gradient ascent. ICML '03.
These papers give efficient algorithms for a broad class of settings that one can view as having exponentially many "experts", but in which the experts are represented in an implicit, compact way.
- Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert Schapire, The Nonstochastic Multiarmed Bandit Problem. SIAM J. Comput. 32(1):48-77, 2002.
Brendan McMahan and Avrim Blum, Online Geometric Optimization in the Bandit Setting Against an Adaptive Adversary. COLT '04.
Abie Flaxman, Adam Tauman Kalai, and Brendan McMahan, Online Convex Optimization in the Bandit Setting: Gradient Descent Without a Gradient. SODA '05.
These papers extend the above results to the bandit setting, in which only the loss or gain of the action actually played can be observed at each time step.
- Survey articles:
Avrim Blum, On-Line Algorithms in Machine Learning. From "Online Algorithms: the state of the art", Fiat and Woeginger eds., LNCS #1442, 1998.
Avrim Blum and Yishay Mansour, Learning, Regret Minimization, and Equilibria, Chapter 4 in "Algorithmic Game Theory," Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani, eds. (2007).
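To make the Winnow reference above concrete, here is a minimal sketch of Winnow1 in Python for learning a monotone disjunction over {0,1}^n. The function names and the toy example stream are illustrative, not from the Littlestone paper. The algorithm predicts positive when the weighted sum of the active variables reaches the threshold n, doubles the weights of active variables on a false negative, and zeroes them out on a false positive; the standard analysis gives O(r log n) mistakes when the target is an OR of r of the n variables.

    # Winnow1 sketch: learn a monotone disjunction over {0,1}^n.
    # Illustrative code, not taken from the Littlestone paper.

    def winnow_predict(weights, x, threshold):
        """Predict 1 iff the weighted sum of active variables meets the threshold."""
        return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

    def winnow_update(weights, x, y, y_hat):
        """Promote active weights on a false negative; eliminate them on a false positive."""
        if y_hat == y:
            return weights
        if y == 1:   # mistake on a positive example: double active weights
            return [2 * w if xi else w for w, xi in zip(weights, x)]
        else:        # mistake on a negative example: zero out active weights
            return [0.0 if xi else w for w, xi in zip(weights, x)]

    # Online loop on a toy stream consistent with the target x1 OR x3 (n = 6):
    n = 6
    weights, threshold = [1.0] * n, float(n)
    stream = [([1, 0, 1, 0, 0, 0], 1), ([0, 0, 0, 1, 1, 0], 0), ([0, 0, 1, 0, 0, 1], 1)]
    for x, y in stream:
        y_hat = winnow_predict(weights, x, threshold)
        weights = winnow_update(weights, x, y, y_hat)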
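Similarly, a minimal sketch of the deterministic Weighted Majority algorithm from the [LW] paper, again with illustrative names: predictions and outcomes are 0/1, and each expert that errs has its weight multiplied by the penalty factor beta. With beta = 1/2, the standard analysis bounds the master's mistakes by 2.41(m + lg n), where m is the number of mistakes of the best of the n experts.

    # Deterministic Weighted Majority sketch (Littlestone & Warmuth).
    # Illustrative code; expert_preds[t] holds the n experts' 0/1
    # predictions at time t, and outcomes[t] is the true 0/1 label.

    def weighted_majority(expert_preds, outcomes, beta=0.5):
        n = len(expert_preds[0])
        weights = [1.0] * n
        mistakes = 0
        for preds, y in zip(expert_preds, outcomes):
            # Predict with the weighted vote of the experts.
            w1 = sum(w for w, p in zip(weights, preds) if p == 1)
            w0 = sum(weights) - w1
            y_hat = 1 if w1 >= w0 else 0
            if y_hat != y:
                mistakes += 1
            # Multiply the weight of every expert that erred by beta.
            weights = [w * beta if p != y else w
                       for w, p in zip(weights, preds)]
        return mistakes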
PAC sample complexity:
- David Haussler, chapter on the PAC learning model and decision-theoretic generalizations, with applications to neural nets. From Mathematical Perspectives on Neural Networks, Lawrence Erlbaum Associates, 1995; contains reprinted material from "Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications", Information and Computation 100:78-150, September 1992. This is a really nice survey of the PAC model and various sample-complexity results.
- David Williamson, John Shawe-Taylor, Bernhard Schölkopf, and Alex Smola, Sample Based Generalization Bounds. Gives tighter generalization bounds where, instead of using "the maximum number of ways of labeling a set of 2m points", you can use "the number of ways of labeling your actual sample"; the classical bound being tightened is sketched after this list.
Fourier analysis, weak learning, SQ learning:
- Avrim Blum, Merrick Furst, Jeffrey Jackson, Michael Kearns, Yishay Mansour, and Steven Rudich, Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis. STOC '94, pp. 253-262.
- Y. Mansour, Learning Boolean Functions via the Fourier Transform. Survey article in "Theoretical Advances in Neural Computation and Learning", pp. 391-424, 1994. (A sketch of the basic coefficient-estimation subroutine appears after this list.)
- A. Blum, C. Burch, and J. Langford, On Learning Monotone Boolean Functions. Proceedings of the 39th Annual Symposium on Foundations of Computer Science (FOCS '98).
- V. Feldman, P. Gopalan, S. Khot, and A. Ponnuswami, New Results for Learning Noisy Parities and Halfspaces. FOCS 2006.
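The basic subroutine behind these Fourier-based results is easy to state. Here is a minimal sketch (in Python, with illustrative names) of estimating a single Fourier coefficient f_hat(S) = E[f(x) chi_S(x)] of a Boolean function f: {-1,1}^n -> {-1,1} from uniform random examples; by a Chernoff/Hoeffding bound, the empirical average is accurate to within epsilon with high probability given polynomially many samples.

    # Estimate one Fourier coefficient of f: {-1,1}^n -> {-1,1}
    # under the uniform distribution. Illustrative sketch only.
    import random

    def chi(S, x):
        """Parity character chi_S(x) = product of x[i] for i in S."""
        prod = 1
        for i in S:
            prod *= x[i]
        return prod

    def estimate_coefficient(f, S, n, num_samples=10000):
        """Empirical average of f(x) * chi_S(x) over uniform random x."""
        total = 0
        for _ in range(num_samples):
            x = [random.choice((-1, 1)) for _ in range(n)]
            total += f(x) * chi(S, x)
        return total / num_samples

    # Example: f(x) = x[0]*x[2] (a parity) has f_hat({0, 2}) = 1.
    print(estimate_coefficient(lambda x: x[0] * x[2], {0, 2}, n=5))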