15-681 MACHINE LEARNING. Fall, 1994
Instructors: Tom Mitchell, Avrim Blum
Teaching Assistant: Jefferey Shufelt
When: Tues, Thurs 10:30-11:50
Where: PH A19
Note: Begins Aug 30, 1994.
Link to online handouts and other info.
GENERAL DESCRIPTION
Machine Learning is concerned with computer programs that automatically
improve their performance through experience. Machine Learning methods
have been applied to problems such as learning to drive an autonomous
vehicle, learning to recognize human speech, and learning strategies for
game playing. This course covers the primary approaches to machine
learning from a variety of fields, including inductive inference of
decision trees, neural network learning, statistical learning methods,
genetic algorithms, and explanation-based learning. The course will also
cover theoretical concepts such as inductive bias, the PAC learning
framework, reductions among learning problems, and Occam's Razor.
Programming assignments include experimenting with various learning
problems and algorithms. This course is a combined upper-level
undergraduate and introductory-level graduate course.
TENTATIVE OUTLINE
INTRODUCTION
An illustrative learning task and a few approaches to it
What is known from algorithms, theory, experiment, biology, and psychology
CONCEPT LEARNING
Version spaces
Inductive Bias
Active queries
Mistake bound/PAC model - basic results (see the sample-size sketch following this section)
Overview of issues regarding data sources, success criteria
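To make the PAC result named above concrete, here is a small illustrative calculation (not part of the course materials) of the standard sample-complexity bound for a consistent learner over a finite hypothesis space H: m >= (1/epsilon)(ln|H| + ln(1/delta)) examples suffice to guarantee, with probability at least 1 - delta, that every hypothesis consistent with the training data has true error at most epsilon. The numbers plugged in below are placeholder choices.

    import math

    def pac_sample_size(hypothesis_space_size, epsilon, delta):
        # m >= (1/epsilon) * (ln|H| + ln(1/delta))
        return math.ceil((math.log(hypothesis_space_size)
                          + math.log(1.0 / delta)) / epsilon)

    # Example: |H| = 2^10 hypotheses, 10% error tolerance, 95% confidence.
    print(pac_sample_size(2 ** 10, epsilon=0.1, delta=0.05))   # roughly 100 examples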
DECISION TREE LEARNING
Minimum Description Length Principle (see the entropy sketch following this section)
Occam's Razor
Learning with active queries
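As a concrete companion to the decision-tree and MDL topics above, here is an illustrative sketch (mine, not course code) of entropy and information gain: entropy is the expected number of bits needed to encode the class labels, and ID3-style tree induction greedily splits on the attribute that most reduces it. The toy "outlook" data below is a placeholder.

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy (in bits) of a list of class labels.
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(examples, labels, attribute):
        # Reduction in label entropy from splitting the examples on one attribute.
        n = len(labels)
        gain = entropy(labels)
        for value in set(ex[attribute] for ex in examples):
            subset = [lab for ex, lab in zip(examples, labels) if ex[attribute] == value]
            gain -= (len(subset) / n) * entropy(subset)
        return gain

    # Toy data: whether to play tennis given the outlook (placeholder values).
    examples = [{"outlook": "sunny"}, {"outlook": "sunny"},
                {"outlook": "overcast"}, {"outlook": "rain"}]
    labels = ["no", "no", "yes", "yes"]
    print(information_gain(examples, labels, "outlook"))   # 1.0 bit for this toy split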
NEURAL NETWORK LEARNING
Perceptrons and gradient descent (see the sketch following this section)
Backpropagation
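The following is a minimal illustrative sketch (not from the course handouts) of gradient descent for a single linear unit, the building block behind the perceptron and backpropagation topics above. The learning rate, epoch count, and OR-function data are arbitrary placeholder choices.

    import numpy as np

    def train_linear_unit(X, y, lr=0.1, epochs=100):
        # Incremental gradient descent on 0.5 * (y - w.x - b)^2 for a linear unit.
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - (np.dot(w, xi) + b)
                w += lr * err * xi      # gradient step for the weights
                b += lr * err           # gradient step for the bias
        return w, b

    # Tiny synthetic example: learn the OR function with targets in {0, 1}.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 1], dtype=float)
    w, b = train_linear_unit(X, y)
    print((np.dot(X, w) + b) > 0.5)   # should print [False  True  True  True]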
SAMPLE COMPLEXITY AND OVERFITTING
Errors in estimating means
Cross Validation and Jackknifing
VC dimension
Irrelevant features: Multiplicative rules for weight tuning
BAYESIAN APPROACHES
The basics (see the Bayes' rule sketch following this section)
Expectation Maximization
Hidden Markov Models
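As an illustration of the Bayesian basics listed above, here is a small sketch (not course code) of a naive Bayes classifier: Bayes' rule combined with the assumption that features are conditionally independent given the class. The tiny "sky"/"play" data set is a placeholder.

    from collections import Counter, defaultdict

    def train_naive_bayes(examples, labels):
        # Estimate P(label) and P(feature=value | label) by simple counting.
        priors = Counter(labels)
        cond = defaultdict(Counter)   # (label, feature) -> Counter of values
        for ex, lab in zip(examples, labels):
            for feat, val in ex.items():
                cond[(lab, feat)][val] += 1
        return priors, cond

    def classify(priors, cond, query):
        # Pick the label maximizing P(label) * prod over features of P(value | label).
        n = sum(priors.values())
        best, best_score = None, -1.0
        for lab, count in priors.items():
            score = count / n
            for feat, val in query.items():
                score *= cond[(lab, feat)][val] / count
            if score > best_score:
                best, best_score = lab, score
        return best

    examples = [{"sky": "sunny"}, {"sky": "sunny"}, {"sky": "rainy"}]
    labels = ["play", "play", "stay"]
    priors, cond = train_naive_bayes(examples, labels)
    print(classify(priors, cond, {"sky": "sunny"}))   # should print "play"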
INSTANCE-BASED TECHNIQUES
Lazy vs. eager generalization
k nearest neighbor (see the sketch following this section)
case-based reasoning
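The sketch below (illustrative only, not from the course) shows k-nearest-neighbor classification, the canonical lazy learner named above: no model is built at training time, and all work is deferred to query time, when the majority label among the k closest stored examples is returned. The two-cluster data set is a placeholder.

    import numpy as np
    from collections import Counter

    def knn_classify(X_train, y_train, query, k=3):
        # Majority label among the k training points closest to the query.
        dists = np.linalg.norm(X_train - query, axis=1)   # Euclidean distances
        nearest = np.argsort(dists)[:k]                   # indices of the k closest points
        return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

    # Tiny placeholder dataset: two clusters labeled "a" and "b".
    X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y_train = ["a", "a", "b", "b"]
    print(knn_classify(X_train, y_train, np.array([0.2, 0.1])))   # should print "a"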
GENETIC ALGORITHMS
Different search methods for induction
EXPLANATION-BASED LEARNING
Using prior knowledge to reduce sample complexity
COMBINED INDUCTIVE/ANALYTICAL LEARNING
Symbolic methods
Neural network methods
LEARNING AGENTS
Reinforcement learning (see the Q-learning sketch at the end of this section)
Agent architectures for learning (e.g., Soar)
Exploration
Competitive analysis
Learning finite state environments and feature invention
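Finally, an illustrative sketch (not part of the course materials) of tabular Q-learning on a tiny deterministic five-state chain, to make the reinforcement-learning topic above concrete. The environment, discount factor, and episode count are placeholder choices, and the update uses the deterministic-world form with learning rate 1.

    import random

    N_STATES, GOAL, GAMMA = 5, 4, 0.9
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}   # actions: left, right

    def step(state, action):
        # Deterministic chain: moving right into the goal state earns reward 1.
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward

    for _ in range(200):                      # episodes
        s = 0
        while s != GOAL:
            a = random.choice((-1, +1))       # explore with a purely random policy
            s2, r = step(s, a)
            # Q update for a deterministic world (learning rate 1):
            Q[(s, a)] = r + GAMMA * (0.0 if s2 == GOAL
                                     else max(Q[(s2, b)] for b in (-1, +1)))
            s = s2

    print([round(max(Q[(s, a)] for a in (-1, +1)), 2) for s in range(N_STATES - 1)])
    # learned state values should approach [0.73, 0.81, 0.9, 1.0] for states 0..3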