Time: Tuesday 3:30-4:30pm
Place: Wean Hall 5409
Organizer: Dr. Tuomas Sandholm (assistant: Phyllis Pomerantz plp@cs.cmu.edu)
Date | Speaker | Affiliation | Topic
10/16/01 | Sven Koenig | Georgia Tech | Greedy On-Line Planning. Note: Jean Harpley (jean@cs.cmu.edu) will coordinate Sven's schedule.
10/23/01 | Rakesh Vohra | Northwestern U., MEDS |
10/30/01 | Latanya Sweeney | CMU (Heinz & SCS) |
11/6/01 | Craig Boutilier | U. Toronto, Computer Science |
11/13/01 | Tom Dietterich | Oregon State University, Computer Science |
11/26/01, NSH 1507, 10:00 am | Ben Kuipers | UT Austin, Dept. of Computer Sciences | Learning the Cognitive Map and its Foundations. Note the exceptional time and place of the talk. Ben's schedule will be coordinated by Illah Nourbakhsh (illah@ri.cmu.edu).
12/11/01, Wean 4601, 10:30-11:30 am | Lynne Parker | Oak Ridge National Laboratory | Towards Cooperative Robot Teams in Complex Site Preparation Tasks. Note: Monica Hopes (meh@ri.cmu.edu) will coordinate the schedule.
2/19/02 | Dov Samet | Tel Aviv University |
3/12/02 | Tristan Cazenave | University of Paris 8, Computer Science | Search algorithms for computer Go & Go playing session
3/19/02 | Diane Litman | University of Pittsburgh | Learning, Adaptation, and Personalization in Spoken Dialogue Systems. Note: Ariadna Font Llitjos (aria+@cs.cmu.edu) will coordinate the schedule.
4/23/02 | Tom Mitchell | CMU School of Computer Science |
4/30/02 | Vincent Conitzer & Cuihong Li | CMU, Computer Science Department & CMU RI/GSIA | Complexity of Manipulating Elections with Few Candidates (30-minute practice run of a talk to be given in the oral presentation track at AAAI-02; paper by Conitzer & Sandholm). Also: Algorithm for Combinatorial Coalition Formation and Payoff Division in an Electronic Marketplace (30-minute practice run of a talk to be given at AAMAS-02; paper by Li & Sycara).
5/7/02 | Manuela Veloso | CMU Computer Science Department | Multi-Robot Team Coordination and Learning in Adversarial Environments
10/16/01 Sven Koenig Georgia Institute of Technology, College of Computing
Autonomous agents must be able to make good decisions in complex situations that involve a substantial degree of uncertainty, yet find solutions in a timely manner despite a large number of potential contingencies. Examples include mobile robots and decision-support systems for crisis situations.
In this talk, I will describe and analyze greedy on-line planning methods for such agents. These methods are based on two principles. One principle is to restrict the search to a small neighborhood of the current state of the agent, resulting in tractable planning methods that do not need to be in control of the agents at all times and are thus easy to integrate into complete agent architectures. The other principle is to perform incremental heuristic searches, resulting in planning methods that can solve several similar planning tasks faster than can be done by repeatedly planning from scratch. I will talk about algorithms, their analysis (including complexity results), and their integration into complete agent architectures, using robot mapping and localization tasks as examples. If time permits, I will also apply some of the insights to reinforcement learning and discuss why many reinforcement learning methods do not do a good job at selecting actions during learning.
This is joint work with Craig Tovey, Maxim Likhachev, and David Furcy.
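As a rough illustration of the first principle, restricting search to a small neighborhood of the current state, here is a minimal LRTA*-style sketch in Python. The interfaces (successors, cost, h0) and the tiny grid example are assumptions for illustration, not Koenig's actual algorithms or domains.

    # Minimal LRTA*-style sketch of greedy on-line planning: search only the
    # neighborhood of the current state, act, and update the heuristic locally.
    # The interfaces (successors, cost, h0) are assumptions for illustration.

    def greedy_online_plan(start, goal_test, successors, cost, h0, max_steps=10000):
        h = {}                                   # learned heuristic values
        def h_val(s):
            return h.get(s, h0(s))
        s = start
        path = [s]
        for _ in range(max_steps):
            if goal_test(s):
                return path
            # Look only at the immediate neighbors of the current state.
            succs = successors(s)
            if not succs:
                return None
            best = min(succs, key=lambda t: cost(s, t) + h_val(t))
            # Local learning step: raise h(s) to the best one-step lookahead value.
            h[s] = max(h_val(s), cost(s, best) + h_val(best))
            s = best
            path.append(s)
        return None

    # Example on a tiny grid: move from (0, 0) toward (3, 3).
    if __name__ == "__main__":
        goal = (3, 3)
        def successors(s):
            x, y = s
            return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]
        plan = greedy_online_plan((0, 0), lambda s: s == goal, successors,
                                  lambda s, t: 1,
                                  lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1]))
        print(plan)

Because the learned heuristic values persist, repeated runs of similar tasks improve over planning from scratch, loosely in the spirit of the incremental heuristic searches mentioned above.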
Note: Jean Harpley (jean@cs.cmu.edu) will coordinate Sven's schedule.
Sven Koenig graduated from Carnegie Mellon University in 1997 and is now an assistant professor at the College of Computing of Georgia Tech. His research centers around techniques for decision making that enable situated agents to act intelligently in their environments and exhibit goal-directed behavior in real-time, even if they have only incomplete knowledge of their environment, limited or noisy perception, imperfect abilities to manipulate it, or insufficient reasoning speed. More information can be found at http://www.cc.gatech.edu/fac/Sven.Koenig/
10/23/01 Rakesh Vohra Northwestern University, Managerial Economics and Decision Science (MEDS)
The Vickrey sealed bid auction occupies a central place in auction theory because of its efficiency and incentive properties. Implementing the auction requires the auctioneer to solve n+1 optimization problems, where n is the number of bidders. In this talk I survey various environments (some old and some new) where the payments bidders make under the Vickrey auction correspond to dual variables in certain linear programs. Thus, in these environments, at most two optimization problems must be solved to determine the Vickrey outcome. Furthermore, primal-dual algorithms for some of these linear programs suggest ascending auctions that implement the Vickrey outcome.
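For concreteness, here is the standard textbook form of the Vickrey (VCG) outcome, which makes the n+1 optimization problems visible; the notation is generic and the talk's own environments may differ:

\[
a^* \in \arg\max_{a} \sum_{j=1}^{n} v_j(a), \qquad
p_i = \max_{a} \sum_{j \neq i} v_j(a) - \sum_{j \neq i} v_j(a^*), \quad i = 1, \dots, n.
\]

Computing the efficient allocation a* is one optimization problem, and each payment p_i requires a further welfare maximization with bidder i removed, which accounts for the n+1 problems; the duality results surveyed in the talk are what reduce this to at most two.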
This talk assumes no prior knowledge of auction theory. It is based on joint work with Sushil Bikhchandani, Sven de Vries, and James Schummer.
Rakesh Vohra is the John L. and Helen Kellogg Professor of Managerial Economics and Decision Science (MEDS) at the Kellogg School of Management, Northwestern University.
Home page: http://www.kellogg.nwu.edu/faculty/vohra/htm/vohra.htm
Paper at http://www.kellogg.nwu.edu/faculty/schummer/ftp/research/vicklp.pdf
10/30/01 Latanya Sweeney CMU (Heinz School of Public Policy and School of Computer Science)
Society is experiencing exponential growth in the number and variety of data collected on individuals. This happens at a time when more and more historically public information is also electronically available. When these data are linked together, they provide an electronic shadow of a person or organization that is as identifying and personal as a fingerprint, even when the information contains no explicit identifiers such as name and phone number. Other distinctive data, such as birth date and ZIP code, often combine uniquely and can be linked to publicly available information to re-identify individuals. The result provides widespread access to strategic and sensitive information about the lifestyles, health, and behaviors of people. In this talk, I will examine strategies for learning sensitive and strategic information about individuals from disparate pieces of information, and will examine data surveillance as both a terrorism weapon and a counterterrorism weapon.
Data privacy is an emerging area of computer science that studies computational solutions for sharing person-specific data such that the data remain practically useful while also providing guarantees of anonymity. This talk examines how the development of data privacy solutions not only affects societal policies and practices, but also enhances the knowledge of computer science. The talk ends with a discussion of the implications that having so much person-specific data collected and shared holds for the future of terms like liberty, freedom, and privacy.
Latanya Sweeney recently graduated with a Ph.D. in computer science from MIT (being the first black woman to do so). She is now an Assistant Professor of Computer Science and of Public Policy here in CALD and the Heinz School. At CMU, she has also started the Laboratory for International Data Privacy, which works on real-world data sharing problems with stakeholders. Her work on data privacy has received numerous awards from various disciplines, including the Patient Advocacy Award from the American Psychiatric Association and First Prize from the American Medical Informatics Association. She has been invited to speak around the world and before U.S. Senate committees. More information can be found at http://sos.heinz.cmu.edu/dataprivacy/ and http://sweeney.heinz.cmu.edu/.
11/6/01 Craig Boutilier University of Toronto, Dept. of Computer Science
Combinatorial auctions provide a valuable mechanism for the allocation of goods in settings where buyer valuations exhibit complex structure with respect to substitutability and complementarity. Most algorithms are designed to work with explicit bids for concrete bundles of goods. However, logical bidding languages allow the expression of complex utility functions in a natural and concise way.
In this talk, I introduce a new, generalized language where bids are given by propositional formulae whose subformulae can be annotated with prices. This language allows bidder utilities to be formulated more naturally and (in some cases, exponentially) more concisely than existing languages. I will also describe the computational advantages of using this language in winner determination, from the perspectives of both integer programming and stochastic local search.
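To make the idea of price-annotated formulae concrete, here is a small illustrative sketch in Python of one plausible toy semantics: a bid is a propositional formula over goods, subformulae may carry prices, and a bundle's value is the sum of the prices of the satisfied annotated subformulae. This semantics is an assumption chosen for illustration and need not match the language presented in the talk.

    # Illustrative toy semantics for price-annotated logical bids (an assumption,
    # not necessarily the exact semantics of the language described in the talk).

    class Bid:
        """A propositional formula over goods; nodes may carry a price annotation."""
        def __init__(self, op, children=(), good=None, price=0):
            self.op = op            # 'good', 'and', or 'or'
            self.children = children
            self.good = good
            self.price = price      # price credited if this subformula is satisfied

        def satisfied(self, bundle):
            if self.op == 'good':
                return self.good in bundle
            if self.op == 'and':
                return all(c.satisfied(bundle) for c in self.children)
            if self.op == 'or':
                return any(c.satisfied(bundle) for c in self.children)
            raise ValueError(self.op)

        def value(self, bundle):
            """Sum the prices of all satisfied annotated subformulae."""
            v = self.price if self.satisfied(bundle) else 0
            return v + sum(c.value(bundle) for c in self.children)

    # Example: worth 5 for good a, 5 for good b, and a bonus of 10 for both together.
    g = lambda x, p: Bid('good', good=x, price=p)
    bid = Bid('and', children=(g('a', 5), g('b', 5)), price=10)
    print(bid.value({'a'}))        # 5
    print(bid.value({'a', 'b'}))   # 20

The last example shows the complementarity such a language can express concisely: the bundle {a, b} is worth more than the sum of its parts.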
This talk describes joint work with Holger Hoos.
http://www.cs.toronto.edu/~cebly
11/13/01 Tom Dietterich Oregon State University, Computer Science
This talk will address two aspects of value function approximation for reinforcement learning (RL). First, most online RL algorithms work by incrementally solving the Bellman equation. In large RL problems, function approximators must be employed to approximate the value function. These approximations typically prevent the Bellman equation from being satisfied. The first question addressed in this talk is whether the Bellman equation is still useful for large-scale RL.
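For reference, the Bellman optimality equation that these online methods incrementally solve, in its standard generic form (not necessarily the speaker's notation):

\[
V^*(s) = \max_{a} \Big[ R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \Big].
\]

A function approximator generally cannot satisfy this equation exactly at every state, which is precisely the gap the first question concerns.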
Second, most online function approximators have a fixed number of basis functions (and hence a fixed complexity or VC dimension). In supervised learning, we know that it is important to adapt the complexity of the function approximator to the complexity of the function being approximated. The second question addressed in this talk is how to automatically tune function approximator complexity for RL.
We will present three formulations of the value function approximation problem using techniques inspired by support vector machines. These three formulations are all batch-incremental algorithms that can tune the complexity of the function approximator (as measured by the number or strength of the support vectors) to the complexity of the value function. One formulation is based on supervised regression, a second is based on the Bellman equation, and the third is based on Leemon Baird's advantage learning. Experimentally, we show that all three of these formulations work well, but that the Bellman formulation gives slightly better performance and is more reliable. Hence, we conclude that the Bellman equation is still useful even with value function approximation.
Tom Dietterich is Professor of Computer Science at Oregon State University, where he has worked on many aspects of machine learning since completing his PhD at Stanford in 1984. His contributions include the method of error-correcting output codes for converting multi-class classification problems into binary classification problems; the formulation and solution of multiple-instance learning with application to drug activity prediction; the development of the MAXQ formalism for hierarchical reinforcement learning; an approach to applying reinforcement learning to discover search control heuristics for job-shop scheduling; and the development of statistical tests and other methodological tools for machine learning research. Working with his graduate students, he is currently studying cost-sensitive supervised learning, spatio-temporal learning, and reinforcement learning. He has also served as Executive Editor of Machine Learning (1991-1998), Founding Action Editor of the Journal of Machine Learning Research (2000-present), Program Chair and General Chair of the Neural Information Processing Systems Conference (NIPS-2000 and NIPS-2001), and Program Co-Chair of the National Conference on Artificial Intelligence (AAAI-1990). He was elected a Fellow of the AAAI in 1994.
11/26/01 Ben Kuipers UT Austin, Computer Science
10:00am, NSH 1507
William James [1890] wrote, "The baby, assailed by eyes, ears, nose,
skin, and entrails at once, feels it all as one great blooming,
buzzing confusion." Similarly, we imagine a robot born into an
unknown environment with an unknown set of sensors and effectors. How
can it first learn the properties of its sensorimotor system, and then
learn a useful cognitive map of its world?
Our Spatial Semantic Hierarchy [Kuipers, AIJ, 2000] provides the
target for this learning process. The SSH is a hierarchy of different
representations for knowledge of space, with different expressive and
inferential capabilities. The control level defines continuous
control laws linking locally distinctive states. These patterns of
reliable continuous behavior are abstracted to causal schemas in which
states are linked by discrete actions, supporting the creation of
symbolic causal and topological maps.
The goal of our learning process is the identification of a reliable
set of perceptual features and primitive motor commands that can
support the definition of trajectory-following and hill-climbing
control laws. Once we can define the SSH control level, the rest of
the cognitive map can be built on that foundation. I will describe
work that solves this problem for a simple simulated robot, and
current directions of research with physical robots in real
environments.
Benjamin Kuipers is Professor of Computer Sciences at the University
of Texas at Austin. He investigates the representation of commonsense
and expert knowledge, with particular emphasis on the effective use of
incomplete knowledge. He received the B.A. in Mathematics from
Swarthmore College, and the Ph.D. in Mathematics from MIT. He has
held research or faculty appointments at MIT, Tufts University, and
the University of Texas. His research accomplishments include
developing the TOUR model of spatial knowledge in the cognitive map,
the QSIM algorithm for qualitative simulation, Access-Limited Logic
for knowledge representation, and the Spatial Semantic Hierarchy model
of knowledge for robot exploration and mapping. He served as
Department Chairman 1997-2001, and is a Fellow of AAAI and IEEE.
12/11/01 Lynne Parker Computer Science and Mathematics Division, Oak Ridge National Laboratory
10:30am, Wean Hall 4601
The application of robot teams to tasks requiring terrain
alteration is very challenging, due to the unpredictable nature of the
robot-terrain interaction, as well as the robot-robot interaction.
Two examples of terrain alteration tasks, also known as site preparation tasks,
are surface coal mining and planetary site preparation for human missions to
Mars. These tasks require the terrain surface to be altered or smoothed
to obtain a specified profile. The complete development of multi-robot
solutions to these complex tasks requires addressing a number of issues in
multi-robot control, including cooperative task allocation, 3D multi-robot path
planning, multi-robot localization, autonomous cooperative elevation map generation,
robot team behaviors for terrain alteration, and techniques for human control
of multi-robot teams. To date, little research has addressed the ability of
robot teams to solve these types of complex missions involving many autonomous
control issues. Our objective is to develop a complete multi-robot system
that can solve these types of integrated cooperative tasks involving a number
of autonomous control techniques in challenging application domains. In
this talk, I will discuss our progress to date towards reaching this
objective. I will describe a number of the individual techniques we are
developing to address each of these cooperative control issues as well as the
overall framework for the integration of these techniques. I will present
preliminary results of aspects of this research in simulation as well as on
physical robot teams.
Dr. Lynne E. Parker is a Distinguished Research and Development Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory. Dr. Parker received her Ph.D. degree in computer science in 1994 from the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT). Her research is focused on the development and implementation of robotic control architectures that facilitate fault tolerant, cooperative control and learning in multi-robot teams. For this research, she was awarded the 1999 DOE Office of Science Early Career Scientist Award, and the 1999 U.S. Presidential Early Career Award for Scientists and Engineers. She also received a 2000 UT-Battelle Technical Achievement Award for Significant Research Accomplishments. Dr. Parker is an active leader in her field, and is a frequent invited speaker at international conferences, workshops, and universities. She is the Program Vice-Chair for the Americas for the 2002 IEEE International Conference on Robotics and Automation. She is also guest co-editing a special issue of IEEE Transactions on Robotics and Automation on the topic of Multi-Robot Systems, to appear in 2002. She and Tucker Balch have edited a book to be published in early 2002 entitled Robot Teams: From Diversity to Polymorphism. She is a member of IEEE, AAAI, ACM, and Sigma Xi.
http://saturn.csm.ornl.gov/~parkerle
2/19/02 Dov Samet Tel Aviv University
A valuation for a player in a game in extensive form is an
assignment of numeric values to the player's moves. The valuation
reflects the desirability of the moves. We assume a myopic player, who
chooses a move with the highest valuation. Valuations can also be
revised, and hopefully improved, after each play of the game.
Here, a very simple valuation revision is considered, in which
the moves made in a play are assigned the payoff obtained in the
play. We show that by adopting such a learning process a player
who has a winning strategy in a win-lose game can almost surely
guarantee a win in a repeated game. When a player has more than
two payoffs, a more elaborate learning procedure is required. We
consider one that associates with each move the average payoff in
the rounds in which this move was made. When all players adopt
this learning procedure, with some perturbations, then, with
probability 1, strategies that are close to subgame perfect
equilibrium are played after some time. A single player who adopts
this procedure can guarantee only her individually rational
payoff.
Joint work with P. Jehiel.
Paper available at www.tau.ac.il/~samet.
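As a rough sketch of the learning procedures described above (not the paper's formal model), the averaging variant can be written as follows in Python; the perturbation and tie-breaking details are assumptions for illustration.

    import random
    from collections import defaultdict

    # Minimal sketch of the myopic valuation-revision learner described above.
    # The perturbation and tie-breaking details are assumptions for illustration.

    class ValuationLearner:
        def __init__(self, epsilon=0.05):
            self.total = defaultdict(float)   # sum of payoffs over rounds a move was used
            self.count = defaultdict(int)     # number of rounds the move was used
            self.epsilon = epsilon            # small perturbation: occasional random move

        def valuation(self, move):
            # Average payoff in the rounds in which this move was made (0 if never used).
            return self.total[move] / self.count[move] if self.count[move] else 0.0

        def choose(self, available_moves):
            # Myopic choice: pick a move with the highest current valuation,
            # with an occasional random (perturbed) move.
            if random.random() < self.epsilon:
                return random.choice(available_moves)
            return max(available_moves, key=self.valuation)

        def revise(self, moves_made_this_play, payoff):
            # After a play, credit every move made in it with the payoff obtained.
            for m in moves_made_this_play:
                self.total[m] += payoff
                self.count[m] += 1

In the win-lose case the average reduces to crediting moves with the win/loss payoff itself, which is the simpler revision rule discussed first in the abstract.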
Professor Samet is the incumbent of the Louise Lea Flack
Chair in Game Theory and Interactive Decisions. He
is a Fellow of the Econometric Society and a member
of the editorial boards of International Journal of
Game Theory, and Games and Economic Behavior.
3/12/02 Tristan Cazenave University of Paris 8, Computer Science Department
Search algorithms for computer Go
Programming a computer to play the game of Go is much harder than
programming it to play other classical games. The large branching factor
on a 19x19 board (250 possible moves on average) prevents usual search
algorithms from reading deeply. Moreover, there is no simple evaluation
function. The evaluation of a position depends on deep tactical reading,
on heuristic knowledge and on the difficult analysis of the relations
between the groups of stones. In this talk we present some search algorithms
that are much more effective than basic search algorithms for solving tactical
Go problems. They are much more selective than usual Alpha-Beta, they are more
reliable than the other selective search algorithms, and they only rely
on simple game-dependent knowledge. They also work in games other than Go.
Tristan Cazenave is an associate professor of computer science at the
University of Paris 8. He received his Ph.D. in computer science from
Paris 6 University in 1996. His research is in the areas of search and
learning in computer games, and problem solving. He started programming
games at 15, and could never stop. He is the author of the Go program
Golois.
Home page: http://www.ai.univ-paris8.fr/~cazenave/
After the talk, Tuomas Sandholm will host a Go playing session in the CS lounge (5-7pm). Professor Cazenave will be available to play.
3/19/02 Diane Litman University of Pittsburgh, Dept. of Computer Science and Learning Research and Development Center
Learning, adaptation, and personalization are topics of current research interest in spoken dialogue systems. Applications range from the use of reinforcement learning to optimize low-level dialogue strategy parameters, to the use of rule induction to trigger system adaptation of more global dialogue strategies, to the use of user-controlled adaptation to personalize and optimize the system.
In this talk, I will describe the use of learning and personalization in two rather different dialogue systems. Our NJFun system provides spoken telephone access to a database of things to do in NJ, and uses reinforcement learning to tune dialogue strategy parameters in response to dialogue data obtained in a controlled user study. Our CobotDS system provides spoken telephone access to the internet chat environment LambdaMOO, and offers features such as personalized grammars for speech recognition and personalized summarization. Together, these systems offer a case study in the many issues that arise in making learning and personalization effective and acceptable to users of spoken dialogue systems.
Diane Litman joined the University of Pittsburgh in Fall 2001, as both an
Associate Professor of Computer Science, and a Research Scientist with the
Learning Research and Development Center (LRDC). Diane moved here from
New Jersey, where from 1985-2001 she was a member of the Artificial
Intelligence Principles Research Department, AT&T Labs - Research
(formerly Bell Laboratories). From 1990-1992, she was also an Assistant
Professor of Computer Science at Columbia University. Diane received her
Ph.D. and M.S. in Computer Science from the University of Rochester, and
her A.B. in Mathematics and Computer Science from the College of William
and Mary. Diane's research is in the area of artificial intelligence, and
includes contributions in the areas of computational linguistics,
knowledge representation and reasoning, natural language learning, plan
recognition, spoken language, and user modeling.
http://www.research.att.com/~diane/
http://www2.cs.pitt.edu/~litman
4/23/02 Tom Mitchell CMU, Computer Science
The study of the human brain is undergoing a major revolution due to the recent invention of new, highly precise techniques for measuring human and animal brain activity. For example, functional Magnetic Resonance Imaging (fMRI) now provides scientists a safe, non-invasive means of producing a three-dimensional "movie" of human brain activity with a spatial resolution of 3mm and a temporal resolution of 500 milliseconds. As a result, scientists are able for the first time to see the detailed patterns of cortical activity that constitute human cognitive processes such as language processing, vision, memory, and problem solving. This talk will provide a tutorial on fMRI brain imaging and typical experimental results, and will examine the significant role that artificial intelligence and computer science can play in the coming revolution in brain science.
Tom M. Mitchell is the Fredkin Professor of Computer Science at Carnegie Mellon University. He is President of the American Association for Artificial Intelligence (AAAI), author of the textbook "Machine Learning," and a member of the National Research Council's Computer Science and Telecommunications Board. During 1999-2000 he served as Vice President and Chief Scientist at WhizBang! Labs, a company that employs machine learning to extract information from the web. Mitchell is Director of CMU's Center for Automated Learning and Discovery, an interdisciplinary research center specializing in statistical machine learning and data mining. His web address is www.cs.cmu.edu/~tom.
4/30/02, 3:30-4pm Vincent Conitzer CMU, Computer Science
In multiagent settings where the agents have different preferences, preference aggregation is a central issue. Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable. One way to avoid manipulation is to use voting protocols where determining a beneficial manipulation is computationally hard. Some earlier work has been done in this area, but it assumed that the numbers of both voters and candidates are unbounded. In this talk, I will present our new hardness results for manipulation in more practical voting settings where the number of candidates is small but the number of voters can be large. These results can be used to differentiate voting protocols on the basis of the difficulty of manipulating them.
Exceptionally, this talk will only last 30 minutes. It is a practice run of a talk to be given in the oral presentation track of the National Conference on Artificial Intelligence (AAAI-02). Paper: Complexity of Manipulating Elections with Few Candidates, by Vincent Conitzer and Tuomas Sandholm.
Vincent Conitzer is a Ph.D. student in Tuomas Sandholm's Agent-Mediated Electronic Marketplaces Lab at Carnegie Mellon's Computer Science Department. His interests include computational aspects of issues in economics as well as strategic issues in computer science.
4/30/02, 4-4:30pm Cuihong Li CMU, RI & GSIA
In an electronic marketplace, coalition formation allows buyers to enjoy a price discount for each item, and combinatorial auctions enable buyers to place bids for bundles of items that are complementary. Coalition formation and combinatorial auctions both help to improve the efficiency of a market, and they have received much attention from economists and computer scientists. But there has not been work studying situations where both coalition formation and combinatorial auctions exist. In this paper we consider an e-market where each buyer places a bid on a combination of items with a reservation cost, and sellers offer price discounts for each item based on volumes. By artificially dividing the reservation cost of each buyer among the items, we can construct optimal coalitions with respect to each item. These coalitions satisfy the complementarity of the items by reservation cost transfers, and thus induce the optimal solution. We focus on systems with linear price functions and present a polynomial-time algorithm to find a semi-optimal solution and a payoff division scheme that is in the core of the coalition. Simulation results show that the algorithm obtains a solution close to the optimal value.
Exceptionally, this talk will only last 30 minutes. It is a practice run of a talk to be given at the AAMAS-02 conference. Paper by Cuihong Li and Katia Sycara.
Cuihong Li is currently a research assistant in Dr. Katia Sycara's Advanced Agent Technology Lab in the Robotics Institute. Bio: 1993-1998, BA in Automation, Dept. of Automation, Tsinghua University, China. 1998-2000, MS in Systems Engineering, Dept. of Automation, Tsinghua University, China. 2000-present, Doctoral student in Management of Automation and Manufacturing (joint program between GSIA and the Robotics Institute), GSIA, Carnegie Mellon University. Research interests: Information Economics, Agent-Mediated Commerce, Supply Chain Management.
5/7/02 Manuela Veloso CMU, Computer Science Department
My long-term research passion is the study of complete autonomous intelligent robots that can continuously perceive the world, act, achieve goals in dynamic and uncertain environments, and learn to improve their performance. Creating such effective robots, in particular as members of a team in the presence of opponents, is a challenging problem.
Robotic soccer has offered an interesting concrete environment for research in multiagent planning, execution, and learning. With my students, I have been pursuing research in robotic soccer in three different technical setups: fully distributed multiagent simulation, small wheeled robots with centralized perception, and fully autonomous Sony legged robots. We have participated in the RoboCup international competitions since 1997.
In this talk, I will give a short historical overview of the research contributions and performance of our teams. I will then focus on some of our recent multi-robot coordination and learning algorithms specifically aimed at responding to adversaries. I will present some of our underlying main research contributions, including different multi-robot behaviors, a real-time path planning and replanning algorithm, and a variable learning rate multiagent learning algorithm. I will further discuss the role of coaching in multi-robot systems. I will conclude by setting our multi-robot research goals in perspective and discussing some of the fascinating open questions to be addressed towards creating truly robust multi-robot teams.
Manuela Veloso is Associate Professor in the Computer Science Department at Carnegie Mellon University. She received her Ph.D. in Computer Science from Carnegie Mellon University in 1992. A native of Portugal, she received a B.S. in Electrical Engineering in 1980 and an M.Sc. in Electrical and Computer Engineering in 1984 from the Instituto Superior Tecnico in Lisbon. Veloso's research is in the area of artificial intelligence. Her long-term research goal is the effective construction of intelligent agents where cognition, perception, and action are combined to autonomously address planning, execution, and learning tasks. She has developed robotic soccer teams that have participated in the RoboCup international competitions in three different categories, namely simulation software agents, small-wheeled robots, and Sony four-legged robots. Veloso received an NSF Career Award in 1995 and the Allen Newell Medal for Excellence in Research in 1997. She is the author of one book on "Planning by Analogical Reasoning", editor of several other books, and the author of over 100 technical journal and conference papers. Veloso is the Vice-President of the RoboCup International Federation and was the General Chair for RoboCup-2001, held in Seattle, August 2001. More details are available at http://www.cs.cmu.edu/~mmv/