Log of Past CMU Reinforcement Learning Talks
1992-93
Carnegie Mellon University
School of Computer Science
REINFORCEMENT LEARNING SEMINAR SERIES 1992-93
Organizer: Justin.Boyan@cs.cmu.edu
Sep 25 Sebastian Thrun
Explanation-based Neural Networks for Robot Control
Oct 2 Long-Ji Lin
Practice Thesis Defense
Oct 9 Scott Fahlman
The Cascade algorithms for fast continuous function learning
Oct 23 Lonnie Chrisman
Causal Differencing: Explicit-Bias Q-Learning (EBQ)
Oct 30 Michael Littman
A classification of reinforcement learning environments
Nov 6 Rich Caruana
Multi-task learning: thoughts and results
Nov 10 Gerry Tesauro (IBM)
Practical Experiences in TD Learning
Nov 13 Jan Zytkow
Creative response by combining simple experiences
Dec 15 Long-Ji Lin
Thesis Defense
Jan 25 Michael Littman (& Dave Ackley, video)
Evolutionary Reinforcement Learning &
Distributed Lamarckian Evolution
Feb 8 Erik Ydstie
Inverse Adaptive Control Using Connectionist Networks
Feb 24 Andrew Moore
Faculty candidate talk -- Memory-Based Learning for Control
Mar 15 Avrim Blum
Efficient path planning in unfamiliar geometric terrain
Mar 25 Mark Ring (Texas)
Hierarchical Learning
Apr 9 Sebastian Thrun
Problems with Function Approximation for Q-Learning
Apr 12 Joseph O'Sullivan
Reinforcement Learning with Vision for the Xavier robot
Apr 19 Lonnie Chrisman
Representing and Reasoning about Modeling Limitations
Apr 26 Sven Koenig
Complexity Analysis of Reinforcement Learning
May 3 Gregory Karakoulas (National Research Council, Canada)
Reinforcement Learning in Continuous State and Action Spaces
May 17 Geoff Gordon
Continuous Q-functions are (sort of) PAC learnable
May 27 Justin Boyan
A Distributed RL Scheme for Packet Routing
Jun 3 Ari Juels (Berkeley)
Rethinking the Genetic Algorithm
Jun 17 Lonnie Chrisman and Michael Littman
ML93 Preview: RL in Environments with Hidden State
1993-94
1994-95
1995-96
1996-97
1997-98