15-681 Course Outline (Fall 1996)
- Introduction:
An illustrative learning task and a few approaches to it.
What is known from algorithms, theory, and experiment.
- Concept Learning: learning as search, version spaces,
choosing examples, inductive bias (Find-S sketch below).
- The PAC model: Consistency and PAC learning.
Example algorithms and proofs, general relations, decision lists,
Occam's razor (sample-complexity bound below).
- Decision Tree Learning: information gain, post-pruning,
extracting rules (information-gain sketch below).
- Evaluating Hypotheses: sampling errors, confidence
intervals, probabilistic inequalities, cross validation (confidence-interval formula below).
- On-Line Learning: The mistake-bound model, Winnow and
Weighted-Majority algorithms, applications to game playing (Winnow sketch below).
- Neural Network Learning: Perceptrons, gradient descent,
multilayer nets, backpropagation (perceptron sketch below).
- Overfitting and Bias II: uniform convergence, VC-dimension (sample-size bound below).
- Bayesian Approaches: Bayes' theorem, maximum likelihood,
Minimum description length principle, Naive Bayes (sketch below).
- Instance-based techniques: Nearest neighbor methods (k-NN sketch below).
- Statistical Query model: learning with noise, Fourier
analysis.
- Genetic Algorithms: Evolutionary computation, genetic programming (sketch below).
- Explanation-Based Learning:
Using prior knowledge to reduce sample complexity.
- Combined inductive/analytical learning:
Symbolic methods,
Neural network methods,
Bayesian networks.
- Hidden Markov Models: Expectation Maximization (EM sketch below).
- Learning Agents:
Reinforcement learning (Q-learning sketch below),
Agent architectures for learning (e.g., Soar),
Learning finite state environments and feature invention.
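
The sketches that follow illustrate several of the algorithms and bounds named
in the outline above; each is a minimal example under stated assumptions rather
than a definitive implementation, and all toy data are hypothetical.

For the concept-learning unit, a sketch of Find-S, which searches the space of
conjunctive hypotheses from most specific to more general, ignoring negative
examples:

    # Find-S: maintain the most specific conjunctive hypothesis consistent
    # with the positive examples seen so far.  h is None until the first
    # positive example (the maximally specific start); "?" means "any value".

    def find_s(examples):
        """examples: list of (attribute_tuple, label) pairs, label True/False."""
        h = None
        for x, label in examples:
            if not label:
                continue              # Find-S ignores negative examples
            if h is None:
                h = list(x)           # first positive example: copy it verbatim
            else:
                # generalize each constraint just enough to cover x
                h = [hi if hi == xi else "?" for hi, xi in zip(h, x)]
        return h

    # Hypothetical weather-style training data.
    data = [
        (("Sunny", "Warm", "Normal", "Strong"), True),
        (("Sunny", "Warm", "High",   "Strong"), True),
        (("Rainy", "Cold", "High",   "Strong"), False),
    ]
    print(find_s(data))               # -> ['Sunny', 'Warm', '?', 'Strong']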
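
For the PAC-model unit (and the Occam's-razor argument), the standard
sample-complexity bound for a consistent learner over a finite hypothesis
space H: with probability at least 1 - \delta, every hypothesis consistent
with the training data has true error below \epsilon once the number of
examples m satisfies

    m \ge \frac{1}{\epsilon} \left( \ln|H| + \ln\frac{1}{\delta} \right).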
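
For the decision-tree unit, a sketch of the information-gain split criterion
(entropy before the split minus the size-weighted entropy of the branches);
the (features-dict, label) data format is an assumption for illustration:

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels, in bits."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(examples, attribute):
        """Expected entropy reduction from splitting examples on attribute.
        examples: list of (features_dict, label) pairs."""
        before = entropy([label for _, label in examples])
        # partition by the attribute's value, weight each branch by its size
        branches = {}
        for features, label in examples:
            branches.setdefault(features[attribute], []).append(label)
        after = sum(len(b) / len(examples) * entropy(b) for b in branches.values())
        return before - after

    # Hypothetical toy data: how informative is "outlook" about the label?
    data = [({"outlook": "sunny"}, "yes"), ({"outlook": "sunny"}, "yes"),
            ({"outlook": "rain"}, "no"),   ({"outlook": "rain"}, "yes")]
    print(information_gain(data, "outlook"))   # about 0.31 bits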
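
For the hypothesis-evaluation unit, the usual approximate N%-confidence
interval for the true error of a hypothesis h, given its observed error on a
test sample S of n independently drawn examples:

    error_D(h) \;\approx\; error_S(h) \pm z_N
        \sqrt{\frac{error_S(h)\,(1 - error_S(h))}{n}},

where z_N is the standard-normal coefficient for the chosen confidence level
(roughly 1.96 for 95%).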
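
For the on-line (mistake-bound) unit, a sketch of Winnow with promotion /
demotion factor 2 and threshold n over Boolean attributes; the target
disjunction used in the demo is hypothetical:

    import itertools

    def winnow(examples, n, alpha=2.0):
        """Winnow for x in {0,1}^n, labels in {0,1}: multiply the weights of
        active attributes by alpha on false negatives, divide them by alpha
        on false positives; the threshold stays fixed at n."""
        w = [1.0] * n
        theta = float(n)
        mistakes = 0
        for x, y in examples:
            pred = 1 if sum(wi for wi, xi in zip(w, x) if xi) >= theta else 0
            if pred != y:
                mistakes += 1
                factor = alpha if y == 1 else 1.0 / alpha
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
        return w, mistakes

    # Hypothetical target: the disjunction x1 OR x3 over 5 Boolean attributes.
    stream = [(x, 1 if (x[0] or x[2]) else 0)
              for x in itertools.product((0, 1), repeat=5)]
    weights, m = winnow(stream * 3, n=5)   # a few passes over all 32 inputs
    print(weights, m)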
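
For the neural-network unit, a sketch of the perceptron training rule
(weights move by eta * (t - o) * x whenever the output o disagrees with the
target t); the Boolean-OR training set is a hypothetical, linearly separable
example:

    def train_perceptron(examples, n_inputs, eta=0.1, epochs=20):
        """examples: list of (input_tuple, target) with targets in {-1, +1}."""
        w = [0.0] * (n_inputs + 1)           # w[0] is the bias weight
        for _ in range(epochs):
            for x, t in examples:
                xb = (1.0,) + tuple(x)       # constant input 1 for the bias
                o = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1
                if o != t:                   # update only on mistakes
                    w = [wi + eta * (t - o) * xi for wi, xi in zip(w, xb)]
        return w

    data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # Boolean OR
    print(train_perceptron(data, n_inputs=2))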
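
For the uniform-convergence / VC-dimension unit, one standard sufficient
sample size for PAC-learning a hypothesis space H in terms of its
VC-dimension (the bound of Blumer et al.):

    m \ge \frac{1}{\epsilon} \left( 4\log_2\frac{2}{\delta}
          + 8\,VC(H)\,\log_2\frac{13}{\epsilon} \right).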
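
For the Bayesian unit, a Naive Bayes sketch for categorical attributes with
Laplace smoothing; the data format and toy examples are assumptions:

    import math
    from collections import Counter, defaultdict

    def train_naive_bayes(examples):
        """examples: list of (feature_tuple, label).  Returns a classifier that
        picks the label maximizing log P(label) + sum_i log P(x_i | label)."""
        labels = [y for _, y in examples]
        prior = Counter(labels)
        counts = defaultdict(Counter)        # (position, label) -> value counts
        values = defaultdict(set)            # position -> observed values
        for x, y in examples:
            for i, v in enumerate(x):
                counts[(i, y)][v] += 1
                values[i].add(v)
        def classify(x):
            best, best_score = None, -math.inf
            for y, ny in prior.items():
                score = math.log(ny / len(examples))
                for i, v in enumerate(x):
                    num = counts[(i, y)][v] + 1              # Laplace smoothing
                    den = ny + len(values[i])
                    score += math.log(num / den)
                if score > best_score:
                    best, best_score = y, score
            return best
        return classify

    # Hypothetical toy data with two categorical attributes.
    data = [(("sunny", "hot"), "no"),  (("sunny", "mild"), "no"),
            (("rain", "mild"), "yes"), (("rain", "cool"), "yes")]
    classify = train_naive_bayes(data)
    print(classify(("rain", "hot")))   # -> 'yes'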
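
For the instance-based unit, a k-nearest-neighbor sketch using squared
Euclidean distance and a majority vote among the k closest stored examples;
the 2-D points are hypothetical:

    from collections import Counter

    def knn_classify(train, query, k=3):
        """train: list of (point_tuple, label); query: point_tuple."""
        dist = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
        neighbors = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
        return Counter(label for _, label in neighbors).most_common(1)[0][0]

    train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
             ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
    print(knn_classify(train, (1, 1)))   # -> 'a'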
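
For the genetic-algorithms unit, a basic generational GA over bit strings
with fitness-proportional selection, single-point crossover, and bit-flip
mutation; the "count the 1-bits" fitness function is a stand-in chosen for
illustration:

    import random

    def genetic_algorithm(fitness, length, pop_size=30, generations=60,
                          p_mutate=0.02, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            scores = [fitness(ind) for ind in pop]
            nxt = []
            while len(nxt) < pop_size:
                # fitness-proportional selection of two parents
                a, b = rng.choices(pop, weights=scores, k=2)
                cut = rng.randrange(1, length)               # single-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - bit if rng.random() < p_mutate else bit
                         for bit in child]                   # bit-flip mutation
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    # Fitness: number of 1-bits, offset by 1 so selection weights stay positive.
    best = genetic_algorithm(lambda ind: sum(ind) + 1, length=20)
    print(best, sum(best))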
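
For the Expectation-Maximization topic, the full Baum-Welch procedure for
HMMs is too long to sketch here, so the E-step / M-step pattern is shown on
a simpler two-coin mixture instead; the mixture model (equal mixing weights)
and the flip counts are assumptions for illustration:

    def em_two_coins(flip_counts, iters=50, theta=(0.6, 0.4)):
        """Each trial is (heads, tails) produced by one of two hidden biased
        coins; estimate the two head-probabilities by EM."""
        theta_a, theta_b = theta
        for _ in range(iters):
            # E-step: posterior responsibility of coin A for each trial
            heads_a = tails_a = heads_b = tails_b = 0.0
            for h, t in flip_counts:
                like_a = theta_a ** h * (1 - theta_a) ** t
                like_b = theta_b ** h * (1 - theta_b) ** t
                resp_a = like_a / (like_a + like_b)   # equal mixing weights assumed
                heads_a += resp_a * h
                tails_a += resp_a * t
                heads_b += (1 - resp_a) * h
                tails_b += (1 - resp_a) * t
            # M-step: re-estimate each coin's bias from its expected counts
            theta_a = heads_a / (heads_a + tails_a)
            theta_b = heads_b / (heads_b + tails_b)
        return theta_a, theta_b

    # Hypothetical 10-flip trials from roughly a 0.8 coin and a 0.3 coin.
    trials = [(8, 2), (9, 1), (7, 3), (3, 7), (2, 8), (4, 6)]
    print(em_two_coins(trials))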
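
For the reinforcement-learning topic, a tabular Q-learning sketch; the
deterministic chain environment (move left or right, reward 1 at the
rightmost state) is hypothetical:

    import random

    def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                   epsilon=0.1, seed=0):
        """States 0..n-1, actions {0: left, 1: right}; episodes start at
        state 0 and end when the rightmost state is reached."""
        rng = random.Random(seed)
        q = [[0.0, 0.0] for _ in range(n_states)]
        for _ in range(episodes):
            s = 0
            while s != n_states - 1:
                # epsilon-greedy choice, breaking exact ties at random
                if rng.random() < epsilon or q[s][0] == q[s][1]:
                    a = rng.randrange(2)
                else:
                    a = 0 if q[s][0] > q[s][1] else 1
                s2 = max(0, s - 1) if a == 0 else s + 1
                r = 1.0 if s2 == n_states - 1 else 0.0
                # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
                q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
                s = s2
        return q

    for s, row in enumerate(q_learning()):
        print(s, [round(v, 2) for v in row])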