Abstract
The key to the success of probabilistic graphical models in many applications has been the use of graphs to encode conditional independencies in probability distributions. Exploiting these independencies enables compact representation of distributions and efficient inference. In a typical application, a probabilistic graphical model of the domain is learned first. Then, for every problem instance, a set of variables with known values (the evidence, such as the pixel colors of an image) is instantiated in the domain model. Instantiating the evidence yields another model that represents a conditional distribution specific to the problem instance. Because the domain model is usually only an approximation of the true distribution, the instance-specific model is not guaranteed to approximate the corresponding conditional distribution well.
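To make the evidence-instantiation step concrete, here is a minimal sketch of conditioning a toy tabular joint distribution on observed evidence. The variables, values, and probabilities are made up for illustration and are not from the talk; real graphical models exploit independence structure rather than enumerating the full joint.

```python
# Variables in a fixed order; assignments are tuples aligned with this order.
VARS = ("A", "B")

# Toy joint distribution P(A, B) over two binary variables.
joint = {
    (0, 0): 0.30, (0, 1): 0.10,
    (1, 0): 0.20, (1, 1): 0.40,
}

def condition(joint, evidence):
    """Keep only assignments consistent with the evidence, then renormalize.
    `evidence` maps a variable name to its observed value."""
    idx = {v: i for i, v in enumerate(VARS)}
    consistent = {a: p for a, p in joint.items()
                  if all(a[idx[v]] == val for v, val in evidence.items())}
    z = sum(consistent.values())          # probability of the evidence
    return {a: p / z for a, p in consistent.items()}

# Observing B = 1 yields a model of the conditional distribution P(A | B = 1).
posterior = condition(joint, {"B": 1})
print(posterior)   # {(0, 1): 0.2, (1, 1): 0.8}
```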
We argue for deferring the learning of the instance-specific graphical model until the particular problem instance is known. Our approach, called query-specific learning, fully exploits the information about the problem instance when making crucial decisions about the structure and parameters of the model. We will present one instantiation of query-specific learning for junction trees and demonstrate empirically that it can achieve much better approximation quality than the standard approach using the same underlying class of graphical models. Finally, we will discuss several open research directions motivated by the query-specific learning framework.
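As a toy illustration of the query-specific idea (not the junction-tree algorithm presented in the talk), one could estimate the conditional distribution for a known query directly from the training samples consistent with the observed evidence, instead of conditioning a previously learned domain model. The helper below is hypothetical; the point is only that the estimator is constructed after the evidence is known.

```python
from collections import Counter

def query_specific_estimate(samples, evidence, query_var):
    """Estimate P(query_var | evidence) directly from data, using only the
    training samples consistent with this query's evidence.
    `samples` is a list of dicts mapping variable name -> value."""
    matching = [s for s in samples
                if all(s[v] == val for v, val in evidence.items())]
    counts = Counter(s[query_var] for s in matching)
    total = sum(counts.values())
    return {val: c / total for val, c in counts.items()}

# Example: four training samples over binary variables A and B.
samples = [{"A": 0, "B": 1}, {"A": 1, "B": 1},
           {"A": 1, "B": 1}, {"A": 0, "B": 0}]
print(query_specific_estimate(samples, {"B": 1}, "A"))
# {0: 0.333..., 1: 0.666...}
```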
This is joint work with Joseph Bradley and Carlos Guestrin.
Bio
Venue, Date, and Time
Venue: NSH 1507
Date: Monday, April 21
Time: 12:00 noon