Time: Tuesday 3:30-4:30pm
Place: Wean Hall 5409
4-5pm Newell-Simon 1305
09/19/00 - Daphne Koller Stanford University
Probabilistic Relational Models: Representation, Inference, & Learning
Bayesian networks are a compact and natural representation for complex probabilistic models. They use graphical notation to encode domain structure: the direct probabilistic dependencies between variables in the domain. Bayesian networks have been applied successfully in a variety of applications; they can also be learned directly from data, allowing us to automatically extract the most significant correlations in the domain. Bayesian networks, however, are an unsuitable representation for complex domains involving many entities that interact with each other. In the first part of the talk, I will describe probabilistic relational models (PRMs), which extend the language of Bayesian networks with the expressive power of object-relational languages. A PRM models the uncertainty over the attributes of objects in the domain and uncertainty over the relations between the objects. This language is capable of compactly modeling domains that are substantially more complex than ones for which standard Bayesian networks are appropriate. The same structure that allows compact representation can also support substantially faster inference than in equivalent Bayesian network models. In the second part of the talk, I will show how we can use probabilistic relational models to learn the probabilistic dependency structure in a relational domain, using a relational database as our starting point. I will discuss applications of this new learning technology to various domains, including complex biological data sets.
Joint work with: Nir Friedman, Lise Getoor, Avi Pfeffer, and Ben Taskar.
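As a point of reference for the factorization a Bayesian network encodes, P(X1,...,Xn) = prod_i P(Xi | Parents(Xi)), here is a minimal Python sketch; the network, variable names, and probabilities are illustrative, not from the talk.

    # Toy network: Rain -> WetGrass <- Sprinkler. All names and numbers
    # below are hypothetical, chosen only to illustrate the chain rule.
    parents = {"Rain": [], "Sprinkler": [], "WetGrass": ["Rain", "Sprinkler"]}

    # P(var = True | parent assignment), one table per variable.
    cpt = {
        "Rain":      {(): 0.2},
        "Sprinkler": {(): 0.5},
        "WetGrass":  {(True, True): 0.99, (True, False): 0.90,
                      (False, True): 0.80, (False, False): 0.05},
    }

    def joint(assignment):
        """P(assignment) as the product of local conditional probabilities."""
        p = 1.0
        for var, pars in parents.items():
            p_true = cpt[var][tuple(assignment[q] for q in pars)]
            p *= p_true if assignment[var] else 1.0 - p_true
        return p

    # P(Rain, no Sprinkler, WetGrass) = 0.2 * 0.5 * 0.9 = 0.09
    print(joint({"Rain": True, "Sprinkler": False, "WetGrass": True}))

A PRM, roughly, replaces the fixed variable set above with attributes of objects and relations between them, so one set of local tables is shared across all objects of a class.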
Daphne Koller received her PhD from Stanford University in 1994. After a two-year postdoc at Berkeley, she returned to Stanford, where she is now an Assistant Professor in the Computer Science Department. She has a broad range of interests spanning artificial intelligence, economics, and theoretical computer science. Her main research interest is in creating large-scale systems that reason and act under uncertainty. The theme underlying her work is the integration of ideas from decision theory and economics into these systems. This task raises the need for compact and natural knowledge representation schemes and for efficient inference and learning algorithms that utilize these schemes. Daphne Koller is the author of over 45 refereed publications, which have appeared in AI, theoretical computer science, and economics venues. She is on the editorial board of the Journal of Artificial Intelligence Research. She was awarded the Arthur L. Samuel Award for her PhD thesis work and the Sloan Foundation Faculty Fellowship in 1996.
02/27/01 - Jeffrey Mark Siskind NEC Research Institute, Inc.
Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic
In this talk, I will present an implemented system, called Leonard, that classifies simple spatial motion events, such as `pick up' and `put down', from video input. Unlike previous systems that classify events based on their motion profile, Leonard uses changes in the state of force-dynamic relations, such as support, contact, and attachment, to distinguish between event types. Since force-dynamic relations are not visible, Leonard must construct interpretations of its visual input that are consistent with a physical theory of the world. Leonard models the physics of the world via kinematic stability analysis and performs model reconstruction via prioritized circumscription over this analysis. In this talk, I will present an overview of the entire system, along with the details of both the model reconstruction process and the subsequent event-logic inference algorithm that can infer occurrences of compound events from occurrences of primitive events. This inference algorithm uses a novel representation, called spanning intervals, to give a concise representation of the large interval sets that occur when representing liquid and semi-liquid events. I will illustrate how Leonard handles a variety of complex visual-input scenarios that cannot be handled by approaches that are based on motion profile, including extraneous objects in the field of view, sequential and simultaneous event occurrences, and non-occurrence of events. I will also present a live example illustrating the end-to-end performance of Leonard classifying an event from video input.
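To make the compound-event idea concrete, here is a much-simplified sketch (hypothetical, not Leonard's actual representation): a primitive event is a set of (start, end) frame intervals over which a force-dynamic relation holds, and a compound event is composed with Allen's MEETS relation.

    # The interval detections below are invented for the example.
    def meets(a, b):
        """Allen's MEETS relation: interval a ends exactly where b starts."""
        return a[1] == b[0]

    def compound(first, second):
        """Intervals over which `first` is immediately followed by `second`."""
        return {(a[0], b[1]) for a in first for b in second if meets(a, b)}

    on_table = {(0, 12)}    # object supported by the table
    in_hand  = {(12, 30)}   # object supported by (attached to) the hand
    pick_up  = compound(on_table, in_hand)
    print(pick_up)          # {(0, 30)}

The spanning-interval representation described in the talk exists precisely because interval sets like these blow up for liquid and semi-liquid events; the explicit enumeration above would not scale.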
Jeffrey Mark Siskind
Ph.D., MIT, Electrical Engineering and Computer Science, 1992.
Jeff joined NECI in 1997 after a postdoctoral position at the University of Pennsylvania, and faculty positions at the University of Toronto, the Technion, and the University of Vermont. His research straddles cognitive science and computer systems. He currently focuses on three areas: visual event perception, computational modeling of child language acquisition, and optimizing compilers for mostly-functional programming languages.
12/05/00 - Michael Collins AT&T Labs-Research
Statistical Models for Natural Language Parsing
This talk will discuss the problem of machine learning applied to natural language parsing: given a set of example sentence/tree pairs, the task is to learn a function from sentences to trees which generalizes well to new sentences.
In the first part of the talk I will review recent work on probabilistic, history-based approaches. Much of the recent success of these methods has been due to the incorporation of lexically conditioned parameters. I will discuss the importance of head words and dependency parameters, and also the use of estimation methods such as decision trees or maximum entropy methods.
While history-based models have several advantages, it can be awkward to encode some constraints within this framework. It is often easy to think of features which might be useful in discriminating between candidate trees for a sentence, but much more difficult to alter the model to take these features into account. In the second part of the talk I will review more recent work on learning methods which promise to be considerably more flexible in incorporating features. I will discuss how three such approaches -- boosting, support vector machines and Markov random fields -- can be applied to parsing, and the similarities and relative advantages of the three approaches.
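As a rough illustration of this feature-based setting (the feature names and weights below are invented for the example), a linear model scores each candidate tree by the features it fires and the parser returns the highest-scoring candidate:

    def score(features, weights):
        """Linear model: sum the weights of the features a candidate fires."""
        return sum(weights.get(f, 0.0) for f in features)

    # Two hypothetical parses of "I saw the man with the telescope",
    # each reduced to a bag of head-word dependency features.
    candidates = [
        {"dep(saw -> man)", "dep(saw -> telescope)"},   # PP attaches to verb
        {"dep(saw -> man)", "dep(man -> telescope)"},   # PP attaches to noun
    ]
    weights = {"dep(saw -> man)": 2.0,
               "dep(saw -> telescope)": 1.3,
               "dep(man -> telescope)": 0.4}

    best = max(candidates, key=lambda c: score(c, weights))
    print(best)   # verb attachment wins under these weights

Boosting, support vector machines, and Markov random fields differ mainly in how the weights are estimated, not in this basic scoring scheme.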
Michael Collins did his undergraduate studies in Electrical Engineering at Cambridge University, and went on to do a Masters in Speech and Language Processing, also at Cambridge. He received his PhD from the University of Pennsylvania in 1998, and has been at AT&T Labs-Research since January 1999, most recently in the AI department. His research interests are in machine-learning approaches to natural language processing.
01/30/01 - Christopher Raphael Dept of Math and Statistics, U Mass, Amherst.
Can the Computer Learn to Play Music Expressively?
I will discuss my recent work in creating a computer program that plays the role of a sensitive musical accompanist in a non-improvisatory piece for soloist and accompaniment. An accompanist must synthesize a number of different sources of information: the accompanist must follow the prescription of the musical score, must "listen to" the soloist, and must learn from rehearsals, all while obeying an internal sense of musicality. During live performance, my accompaniment system combines these sources of information into a coherent probabilistic model, a Bayesian belief network, from which it can deduce the optimal course of action in real time. I will provide a demonstration.
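A much-reduced sketch of the "listening" component (hypothetical; the actual system is a richer Bayesian belief network that also models tempo and rehearsal data): a forward filter over positions in the score, updated as solo notes are detected.

    # Notes, transition probabilities, and observation probabilities
    # below are all invented for the example.
    score_notes = ["C4", "E4", "G4", "C5"]    # the soloist's part
    p_stay, p_advance = 0.2, 0.8              # position transition model
    p_match, p_mismatch = 0.9, 0.1            # note observation model

    def update(belief, heard):
        """One filter step: advance the position model, then condition on `heard`."""
        n = len(belief)
        pred = [0.0] * n
        for i, b in enumerate(belief):
            pred[i] += b * p_stay
            if i + 1 < n:
                pred[i + 1] += b * p_advance
        post = [p * (p_match if score_notes[i] == heard else p_mismatch)
                for i, p in enumerate(pred)]
        z = sum(post)
        return [p / z for p in post]

    belief = [1.0] + [0.0] * (len(score_notes) - 1)   # start at the first note
    for heard in ["C4", "E4"]:
        belief = update(belief, heard)
    print(belief.index(max(belief)))                  # most likely score position

Given such a belief over the soloist's position, the accompaniment's optimal action (when to sound its own next note) can be computed in real time.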
Christopher Raphael is currently an Assistant Professor in the Department of Mathematics and Statistics at the University of Massachusetts, Amherst, where his interests include Bayesian belief networks and hidden Markov models and their applications to various recognition problems. Prior to his current appointment he was a Research Scientist in the Speech and Natural Language Processing group at BBN, working on optical character recognition.
As a winner of the San Francisco Symphony Young Artist Competition, he soloed with the San Francisco Symphony in 1978 and has held a fellowship at Tanglewood. He played principal oboe in the Santa Cruz Symphony from 1982 to 1985 and soloed with that orchestra on several occasions.
04/04/01 - Makoto Yokoo NTT Communication Science Labs, JAPAN
Theory of Internet Auctions: Effect of False-name Bids on Auction Protocols
In this talk, we examine the effect of a new type of fraud, false-name bids, which can be a serious problem in Internet auctions. False-name bids are bids submitted by a single agent under multiple fictitious names (e.g., multiple e-mail addresses). If no false-name bids exist, the Generalized Vickrey Auction protocol (GVA) satisfies three desirable properties: individual rationality, Pareto efficiency, and incentive compatibility. On the other hand, when false-name bids are possible, we prove that it is theoretically impossible for an auction protocol to satisfy all three properties simultaneously. We develop a new auction protocol called the Leveled Division Set (LDS) protocol, which satisfies individual rationality and incentive compatibility.
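A brute-force sketch of GVA/VCG payments for two items (the bidders and values are hypothetical, but they reproduce the qualitative problem: the same agent can pay less by splitting its bid across two fictitious names):

    from itertools import combinations

    def welfare(bids):
        """Best (total value, winning bids) over subsets with disjoint bundles."""
        best = (0.0, ())
        for r in range(len(bids) + 1):
            for combo in combinations(bids, r):
                items = [it for _, bundle, _ in combo for it in bundle]
                if len(items) == len(set(items)):       # no item sold twice
                    total = sum(v for _, _, v in combo)
                    if total > best[0]:
                        best = (total, combo)
        return best

    def vcg_payments(bids):
        """Each winner pays the externality it imposes on the other bidders."""
        total, winners = welfare(bids)
        return {name: welfare([b for b in bids if b[0] != name])[0] - (total - v)
                for name, _, v in winners}

    # One agent bidding honestly for the bundle {A, B} under one name:
    honest = [("rival", frozenset("AB"), 10.0),
              ("agent", frozenset("AB"), 12.0)]
    print(vcg_payments(honest))   # {'agent': 10.0} -- agent wins, pays 10

    # The same agent split across two fictitious names:
    split = [("rival", frozenset("AB"), 10.0),
             ("alias1", frozenset("A"), 6.0),
             ("alias2", frozenset("B"), 6.0)]
    print(vcg_payments(split))    # alias1 and alias2 each pay 4: 8 total

Under these numbers the split saves the agent 2, which is exactly the kind of incentive failure the impossibility result formalizes.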
Makoto Yokoo is currently a distinguished technical member at NTT Communication Science Laboratories, Kyoto, Japan. His research interests include multi-agent systems, search/constraint satisfaction, and mechanism design among agents. He recently published the book "Distributed Constraint Satisfaction: Foundation of Cooperation in Multi-Agent Systems" with Springer. He is a board member of the International Foundation of Multi-Agent Systems (IFMAS), the host organization of the International Conference on Multi-Agent Systems (ICMAS). He also serves on the editorial boards and program committees of various international journals and conferences, including the Journal of Artificial Intelligence Research and IJCAI.
Webpage maintained by Stella X. Yu