Birds of a Feather Topics for AAAI Fellows Meeting
July 15-16, 2006
The purpose of the "Birds of a Feather" sessions at the AAAI Fellows meeting is to provide an opportunity for
subgroups to discuss particular AI challenges and how to address them,
then report back to the full group on the outcome of their
discussion. Our expectation is
that most groups will report back on a specific AI goal which they feel
is ripe for major progress, describe that goal, and explain why it is
ripe and how to pursue it. However, you may propose any topic you like.
Here are some topics already suggested.
If you would like to participate, feel free to contact the organizer, or just sign up at the meeting on July 15.
If you would like to
suggest other topics, please send Tom.Mitchell@cmu.edu a brief description
and I'll add it to the list.
-
List of topics (see description of each below)
How relevant is game theory to AI?
Organizer: Yoav Shoham.
In recent years the area of multi-agent systems has seen
significant growth. As any area matures, it becomes increasingly
rigorous, and in the process draws on the most relevant established
disciplines. Single-agent theories have drawn on probability, decision
theory, and statistics, among others. The primary counterpart of these
in the multi-agent systems domain is game theory. And so it is not
surprising that one sees increased influence of game theory on
multi-agent systems. Indeed the influence is dramatic, and not confined
to AI; other areas of computer science have been equally influenced (most notably theory, but also networking and other areas). The question for
AI is whether the sweeping adoption of the game theoretic framework and
analysis style is appropriate. Will the basic assumptions of perfect mutual modeling, reasoning capabilities, and selfishness -- and the focus
on equilibria as the driving solution concept (indeed, on the very
notion of a 'solution concept') -- help us build better agents
(or, for those who still care, build better computational models of
human beings)? The exchange in the forthcoming issue of AIJ on
foundations of multi-agent learning exposes some of the views in the
community (as well as within game theory), but the topic is not limited
to learning theories. My own view is that one cannot make progress in
the area of multi-agent systems without deep understanding of game
theory, but that this deep understanding also exposes some basic
limitations which make it clear that 'as is' game theory's usefulness
is limited. I believe that what is required is what might be called
"constructive game theory", which accepts game theory's insights into
interactive epistemology, but which introduces mutual modeling in a
gradual fashion, and in which equilibrium analysis is only a limit
analysis which is sometimes relevant but often not. But this is a
complex issue which I don't pretend to completely understand, and a
discussion among us interested parties will be illuminating.
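For readers who want a concrete reminder of what treating the equilibrium as the driving solution concept means, here is a minimal sketch (added purely for illustration, not part of Yoav's description; the Prisoner's Dilemma payoffs and action names are an arbitrary example) that enumerates the pure-strategy Nash equilibria of a two-player game:

from itertools import product

# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_equilibrium(row_action, col_action):
    # A profile is a pure-strategy Nash equilibrium if neither player
    # can gain by deviating unilaterally.
    row_payoff, col_payoff = payoffs[(row_action, col_action)]
    best_row = max(payoffs[(a, col_action)][0] for a in actions)
    best_col = max(payoffs[(row_action, a)][1] for a in actions)
    return row_payoff == best_row and col_payoff == best_col

equilibria = [p for p in product(actions, actions) if is_equilibrium(*p)]
print(equilibria)   # [('defect', 'defect')] -- mutual defection is the unique equilibrium

Whether analysis of this kind helps us build better agents is exactly the question the session poses.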
Can we design an architecture for human-level intelligence?
Organizer: Stuart Russell
Modern AI has developed and refined a large set of techniques over the
last 10 years, to the point that they are practically applicable in
circumscribed domains. They include: Bayesian networks,
Markov models for planning, Bayesian learning techniques, kernel
methods for learning, the use of unlabeled data in learning, and reinforcement learning. These are beginning to be connected to more traditional AI representations, yielding methods for probabilistic first-order representation, reasoning, planning, and learning.
The question is: can these kinds of techniques be integrated and pushed
toward a solution of the "human-level" AI problem? The goal of
the discussion will be to address this question by developing ideas for
an overall architecture for human-level intelligence, as well as
identifying major gaps in understanding or technology. The group
will aim to develop a research strategy for the next ten to twenty
years, incorporating both conceptual and institutional considerations.
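As a pocket-sized illustration of one technique on the list above -- planning with Markov models via value iteration -- here is a minimal sketch; the two-state MDP, its transition probabilities, and its rewards are invented purely for illustration and are not part of Stuart's description:

# Tiny MDP: P[state][action] = list of (probability, next_state, reward)
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9                       # discount factor
V = {s: 0.0 for s in P}           # initial value function

# Repeated Bellman backups until the value function stops changing
for _ in range(1000):
    V_new = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in acts.values())
             for s, acts in P.items()}
    if max(abs(V_new[s] - V[s]) for s in P) < 1e-6:
        V = V_new
        break
    V = V_new

# Greedy policy with respect to the converged values
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print(V, policy)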
How Has AI Computational Modeling Contributed to the Study of Other Domains?
Organizer: Kevin Ashley
Computational
modeling using AI techniques has long been urged as a tool for
empirically investigating issues of interest to non-AI domain experts
in a variety of fields, such as biochemistry, medicine, law, ethics,
and philosophy. AI, it was hoped, would add tools to domains already
susceptible of scientific methods, or introduce scientific
methodologies to domains that never had them. When the final history of
AI is written, it will be interesting to see how well-founded these
hopes have been. It may be intriguing for interested Fellows to discuss
the attempts that have been made so far, their successes and failures,
and even the criteria for evaluating their success. For instance, have
the results of AI investigations been published in non-AI research
journals, have they been accepted by non-AI domain experts, to what
extent have non-AI-related funders provided support for the work, etc.?
Can we design a never-ending learner to solve the natural language understanding problem?
Organizer: Tom Mitchell
I believe a reachable goal for AI this decade is to build a never-ending learner that continually improves its ability to understand natural language, using the web and other sources for training.
If this in fact happens, it will create an "inflection point" for
AI capabilities, by making the world's largest knowledge base (the web)
understandable to computers. This breakout group will focus on
the question "How can we mount a community-wide research effort to
develop a never-ending language learning system?" and
subquestions such as "using what architecture?", "what existing technical ideas should we build
on?" and "what are arguably achievable subgoals along the way?" Much of my optimism on this
topic stems from (1) the recent burst of new algorithms for
unsupervised and lightly-supervised machine learning for natural
language processing, (2) general progress in the field of natural
language processing, and (3) the fact that we have available a corpus (the web) which has just the properties one
would want for a self-supervised language learning system (e.g., mix of
structured tables and unstructured text stating the same facts;
many different statements of the same fact on different web
pages using different linguistic forms).
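To make the kind of redundancy-driven, lightly-supervised learning alluded to above concrete, here is a deliberately toy sketch (every sentence, seed fact, and induced pattern below is hypothetical, and a real system would need far more robust pattern induction): known facts are used to induce extraction patterns, which in turn yield new facts, because the web states the same fact many times in different forms.

import re

sentences = [
    "Paris is the capital of France.",
    "The capital of Italy is Rome.",
    "Paris, the capital of France, lies on the Seine.",
    "Ottawa is the capital of Canada.",
]
seed_facts = {("Paris", "France")}          # seed (city, country) pairs

# Step 1: use known facts to induce surface patterns around them.
patterns = set()
for city, country in seed_facts:
    for s in sentences:
        if city in s and country in s:
            patterns.add(s.replace(city, r"(\w+)").replace(country, r"(\w+)"))

# Step 2: apply the induced patterns to extract new candidate facts.
facts = set(seed_facts)
for pat in patterns:
    for s in sentences:
        m = re.match(pat, s)
        if m and len(m.groups()) == 2:
            facts.add(m.groups())

print(facts)   # seed pair plus ('Ottawa', 'Canada'), found via the induced pattern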
What can we learn from linguistic semantics about KR&R?
Organizer: Len Schubert (perhaps this should merge with the above 'never-ending learner for NLP' session)
I think there has been an unfortunate disconnect between research on
language and research on KR&R from the beginnings of the AI field.
It is unfortunate because it seems likely that language and mind
are closely connected: language is a mirror of mind. I think that to
attain human-level AI, we need to look much more closely in that
mirror, and transfer the many insights that have been gained in the
formal study of language meaning into our work on KR&R. In
particular, all human languages have certain semantic devices that are
largely lacking in AI-style KRs. These include generalized
quantification, modification, reification, complex event reference,
genericity, and uncertainty (in various forms). These are arguably all
important for commonsense reasoning, not only for NLU. Also, recent
semantic theories for the "dynamics" of language meaning may suggest
new inference methods similar to model-elimination and SAT-like
(perhaps randomized) methods.
P.S. I don't necessarily see this as a separate topic, but perhaps as
"something we need to pay attention to if we want to succeed in
building systems that learn by reading"
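As one concrete illustration of a device on this list (added here, not part of Len's description): generalized quantification. "Most birds fly" involves the binary quantifier Most, which relates two sets and, unlike "every" and "some", cannot be expressed using the standard first-order quantifiers alone; under the usual "more than half" reading it amounts to a cardinality comparison:

\[
  \mathrm{Most}\,x\,[\mathrm{bird}(x)]\,[\mathrm{fly}(x)]
  \;\;\equiv\;\;
  \bigl|\{x : \mathrm{bird}(x) \wedge \mathrm{fly}(x)\}\bigr| \;>\;
  \bigl|\{x : \mathrm{bird}(x) \wedge \neg\mathrm{fly}(x)\}\bigr|
\]

Typical AI-style KRs built on first-order logic have no direct way to state this, which is the kind of gap the session would examine.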
How must logic be modified for representing common sense?
Organizer: John McCarthy
Languages of mathematical logic have been used to express common sense
knowledge and reasoning. Indeed this was a goal of Leibniz's
first proposals. However, nonmonotonic reasoning is certainly
needed, and I think concepts as objects and a theory of partly defined
objects are also needed. A theory of the relation between
language and reality is probably needed, but this can be done
within present logic.
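As a standard textbook illustration of the nonmonotonicity at issue (added here, not part of John's description), consider the default that birds fly unless abnormal:

\[
  \forall x\,\bigl(\mathrm{bird}(x) \wedge \neg\mathrm{ab}(x) \rightarrow \mathrm{flies}(x)\bigr),
  \qquad \mathrm{bird}(\mathrm{Tweety}).
\]

Minimizing the extension of ab (as in circumscription) licenses the conclusion flies(Tweety); later adding penguin(Tweety) together with \( \forall x\,(\mathrm{penguin}(x) \rightarrow \mathrm{ab}(x)) \) forces that conclusion to be withdrawn -- something no ordinary monotonic logic permits.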
Do we need a common framework for investigating architectures?
Organizer: Aaron Sloman
Many AI theorists have proposed different architectures for different
purposes ranging from relatively simple architectures for agents in
very large multi-agent systems to very complex architectures inspired
by attempts to produce individual human-like systems (E.g. Minsky's
architecture in 'The Emotion Machine' and my closely related H-Cogaff).
Perhaps we need an understanding of what varieties of purposes AI
architectures can have and which sorts of architectures are suitable
for which purposes (i.e. which niches). For this we need a language and
ontology for describing how niches can vary and, if possible, an agreed
ontology and terminology for talking about varieties of architectures,
e.g. by specifying types of components, types of representations, types
of functions components can perform, ways in which different components
can be assembled for different purposes, etc. (Compare the use of
electronic circuit diagrams: nobody supposes there is one right circuit
but there are agreed ways of talking about circuits and representing
them, and analysing their behaviours, tradeoffs, etc.)
Superficially there seems to be some common ontology in the AI
community insofar as many people use labels like 'reactive',
'deliberative', 'reflective', 'symbolic', 'subsymbolic', 'layered
architecture', 'BDI architecture', 'subsumption architecture', etc. Yet
when you look closely it turns out that some of these labels are used
in strikingly different ways by different people. E.g. some assume that
'reactive' rules out internal state changes whereas others don't. Some
use 'deliberative' to refer to anything that considers options and
makes a selection, whereas others require something richer (e.g. a
planning or problem solving capability). Some assume that an
architecture must be unchangeable, whereas others (like me) assume that
if you want to understand human intelligence you will need to consider
an infant-like architecture that grows and bootstraps itself into
something very different over an extended period.
There are also differences between amounts and types of competences
required ab-initio, as clearly demonstrated in natural systems by the
differences between precocial species like deer that need to run with
the herd very soon after birth without having time to learn much, and
altricial species born or hatched helpless and (superficially)
incompetent but somehow able to develop much richer and more varied
cognitive competences by the time they are adults, e.g. the competences
of a hunting mammal. A similar spread of designs may be required for
artificial systems, e.g. depending on how much detail can be predicted
in advance by the system designers about the application domain and
task requirements and how much has to be figured out by the system
itself on delivery or after the environment changes as a result of
unforeseen events.
There may also be very different architectural requirements depending
on how the agent interacts with its environment. E.g. an individual
with an articulated 3-D body with multiple sensors and effectors of
different sorts interacting continuously with physical structures and
processes in a dynamic and potentially dangerous environment requires
very different mechanisms from an intelligent system interacting with
and controlling a large chemical plant, or a software system
interacting with other internet agents concerned only with commercial
transactions. Are there some requirements common to all of them?
Is the diversity of niches and architectures for intelligent systems so
great that there is no point trying to develop a common framework? Or
might we gain new conceptual clarity and improved communication and
collaboration by developing such a framework? I suggest that some of
the interesting transitions in evolutionary history provide useful
clues. E.g. why and how did the ability to refer to and reason
about unperceived or future objects and events, including multi-step
futures, arise? Why and how did meta-semantic competence arise:
the ability to refer to things that refer, including coping with
referential opacity, etc.? How were those related to the evolution of
linguistic communicative competence? Which other interesting
discontinuities are there?
(There's more here: http://www.cs.bham.ac.uk/research/cogaff/talks/#nokia)
How can a robot learn the foundations of commonsense knowledge from its own experience with "blooming, buzzing confusion"?
Organizer: Ben Kuipers
In 1890, William James wrote, "The baby, assailed by eyes, ears, nose, skin and entrails at once, feels it all as one great blooming, buzzing confusion." Even so, after early childhood, we humans
describe the world primarily in terms of macroscopic objects, the
spatial relations among them, how we can act on them, and how they can
act on each other.
Current AI systems, especially robotic systems, typically have
foundational concepts of space, motion, objects, and actions programmed
in by human designers and programmers. These systems can learn
maps of individual spaces, or the properties of particular objects and
the categories that they can be organized into. But the
foundational concepts themselves come from the minds of the human
designers.
Is it possible for a robot to start from "pixel level" interaction
with its world, and learn high-level concepts of space, motion,
objects, and actions, without those concepts being programmed in by a
human programmer? The robot's "pixel level" consists of the
basic elements of a camera image, of a laser scan, of an individual
sonar return, and even the incremental steps of the motor signal.
As AI researchers, we confront this problem directly, especially if we
attempt to build intelligent robots that interact with the physical
world through their own sensors and effectors.
From a pragmatic point of view, this type of learning will become
increasingly important as robots become more complex and longer lived,
with greater varieties of sensors, and operating in environments
unfamiliar to human experience. However, the question also raises
deep issues at the foundations of artificial intelligence and the
philosophy of mind.
New Challenge Problems for Research in Heuristic Search
Organizer: Richard Korf
Heuristic Search was one of the first AI techniques, and research in
this area remains vibrant today. Much progress in this area has
come from researchers thinking about how to solve particular concrete
problems. The classic example of this is the game of chess.
In other areas, problems such as the sliding-tile puzzles, or the
N-Queens problem, have motivated a great deal of research.
Often when a new problem domain is introduced, it stresses the current
stock of algorithms in different ways, resulting in new algorithm
developments. The purpose of this session is to encourage
participants to share some of their favorite test domains for two-player or
multi-player games, single-agent path-finding problems, and
constraint-satisfaction problems.
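To make the flavor of such domains concrete, here is a minimal sketch (added for illustration; the particular start state is arbitrary) of the classic sliding-tile 8-puzzle solved with A* search and the Manhattan-distance heuristic:

import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)        # 0 marks the blank

def manhattan(state):
    # Sum of Manhattan distances of tiles from their goal positions (admissible heuristic).
    return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
               for i, t in enumerate(state) if t != 0)

def neighbors(state):
    # States reachable by sliding one adjacent tile into the blank.
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            n = nr * 3 + nc
            s = list(state)
            s[b], s[n] = s[n], s[b]
            yield tuple(s)

def astar(start):
    frontier = [(manhattan(start), 0, start)]      # (f = g + h, g, state)
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g                               # length of a shortest solution
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))
    return None

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))          # -> 2 moves for this easy instance

New domains typically stress such algorithms in different ways (state-space size, heuristic quality, memory), which is what drives new algorithm development.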
Research on Integrated Systems for Human-Level Intelligence
Organizer: Pat Langley
There seems to be wide agreement that AI would benefit from increased efforts on integrated intelligent systems, with the ultimate goal of developing agents that exhibit human-level behavior. However, there are also powerful biases in place that discourage work along these lines. To encourage progress toward human-level AI, we must answer five questions:
(1) What testbeds would help motivate research on integrated systems and support their scientific evaluation?
(2) How can we develop architectures for intelligent systems that incorporate ideas from diverse disciplines such as logic, psychology, statistics, and linguistics?
(3) How can we obtain more substantial - and widely distributed - funding for research at the level of integrated systems?
(4) What mechanisms would foster publications about integrated systems, which are harder to describe than component algorithms?
(5) How can we educate future generations of AI researchers so they have the knowledge needed to work toward human-level AI systems?
We should generate tentative answers to each of these questions for wider discussion within the AI community. The responses should not favor one theoretical framework over another, since the issues we must address cut across different paradigms.
Promoting AI
Organizer: Eugene Freuder
There are a number of "practical" issues related to promoting our field. Of
course, the AAAI organization and others work on these all the time; but a
little brainstorming in a birds of a feather session here might be
fruitful, especially if it inspired follow-on activity. Among the questions
we might address:
- How do we capture public excitement and attention?
- What are our community "infrastructure" needs and opportunities?
- What is the role of "grand challenges"?
- Can we organize "virtual laboratories" with a broad range of AI expertise
from collaborating institutions?
- Can we generate better support for international collaboration?
- Can we improve our relationship to industry?
- Can we further encourage interaction among subcommunities?
- Can we influence government funding priorities and procedures?
- What can we learn as a community from the successes and failures of the
first 50 years?
- What are the greatest challenges and opportunities facing the AI community?
- Where would we like AI as a field to be in 5 years, in 10 years, in 20?
- Do we have any recommendations for our national and international
organizing bodies?
- Are there additional activities the Fellows would like to organize
amongst themselves?
- Is there a more active role for the Fellows, as a group, to play in the
AI community?
Is it time to resurrect the original Shakey Robot project using current technology?
Organizer: Marty Tennenbaum
The goal would be to finally achieve the original ARPA deliverables
(circa 1972) of performing ill-constrained tasks such as "fetch me a
cup of coffee" or "tidy up the room". The proposed discussion would
focus on what's been learned in the past 30 years that would
justify giving it another go. Participants could include members of the
original Shakey team in attendance (e.g., Peter Hart, Harry Barrow,
Nils Nilsson, and myself) as well as current
generation roboticists (e.g., Sebastian Thrun, Rod Brooks).