
Introduction

Graphical models have become a popular and powerful tool for dealing with dependencies in a complex probability model. For small networks an exact computation of the marginal probabilities of certain nodes is feasible (see, for example, zhang94a). Unfortunately, because their computational complexity scales exponentially, these exact algorithms soon fail for somewhat more complex and more realistic networks.

To deal with this computational problem, two main streams can be distinguished. The first is the approach of approximating the exact solution. Numerous algorithms have been developed, among which are mean field (peterson87a; saul96a) and TAP (thouless76a; plefka81a). Nowadays, approximation methods such as belief propagation (murphy99a; yedidia00a), which use the Bethe or Kikuchi energy, are gaining popularity. In contrast with mean field expansions, the latter methods model only the local probability distributions of several small clusters, which communicate with each other in a scheme known as message passing.

The second approach is that of finding upper and lower bounds on important quantities of the graphical model. In contrast to approximation techniques, these methods give reliable information: one may find, for instance, that a marginal probability lies definitely between 0.4 and 0.5. These methods usually focus on the partition function of the network, which can be lower bounded (mync) or upper bounded (wainwright02a). Little work has been done, however, on bounding the marginal probabilities of certain nodes in the network directly. Up to now researchers have focused their attention on combining upper and lower bounds on the partition function, from which bounds on the marginals can be derived. For the special case of the Boltzmann machine this was done by mynips2.
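
To make the last point concrete, the following is one standard way, sketched here under the assumption that the clamped partition functions can be bounded as well (this is not necessarily the exact construction of mynips2), in which partition function bounds translate into marginal bounds. Writing $Z_{x_i=s}$ for the partition function with node $x_i$ clamped to the value $s$, the marginal is the ratio

\[ p(x_i = s) = \frac{Z_{x_i=s}}{Z} , \]

so that, given bounds $Z^- \le Z \le Z^+$ and $Z^-_{x_i=s} \le Z_{x_i=s} \le Z^+_{x_i=s}$,

\[ \frac{Z^-_{x_i=s}}{Z^+} \;\le\; p(x_i = s) \;\le\; \frac{Z^+_{x_i=s}}{Z^-} , \]

where the upper bound may additionally be clipped at 1.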

In this article we present a method called `bound propagation', which has certain similarities to belief propagation in the sense that small clusters of a few nodes try to bound their marginals as tightly as possible. The knowledge contained in these bounds is shared with surrounding clusters. Iterating this procedure improves the bounds on all marginals until convergence. It is important to realize that bounding and approximation techniques are not mutually exclusive. We propose to always run the bound propagation algorithm first. When the result is not satisfactory, one can still fall back upon approximation techniques such as belief propagation, at the price of being uncertain about the solution.
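
To convey the flavour of such an iteration, here is a minimal sketch in Python. It is a toy, not the bound propagation algorithm of this article: it merely propagates interval bounds on single-node marginals forward through a small binary Markov chain, and the transition matrices and the initial interval on the first node are invented for illustration.

import numpy as np

# Toy sketch (not the algorithm of this article): propagate interval
# bounds on P(x_j = 1) forward through a binary Markov chain.
rng = np.random.default_rng(0)
n = 6
# T[j] is a hypothetical 2x2 transition matrix P(x_{j+1} | x_j);
# each row is a probability distribution over the next node.
T = [rng.dirichlet(np.ones(2), size=2) for _ in range(n - 1)]

# Vacuous bounds [0, 1] everywhere, except the first node, whose
# marginal we pretend is only known to lie in [0.3, 0.4].
lo, hi = np.zeros(n), np.ones(n)
lo[0], hi[0] = 0.3, 0.4

for _ in range(10):  # iterate until the bounds stop changing
    for j in range(n - 1):
        # P(x_{j+1} = 1) = (1 - q) T[j][0, 1] + q T[j][1, 1], where
        # q = P(x_j = 1); this is linear in q, so its extrema over
        # the interval [lo[j], hi[j]] sit at the endpoints.
        vals = [(1 - q) * T[j][0, 1] + q * T[j][1, 1] for q in (lo[j], hi[j])]
        lo[j + 1] = max(lo[j + 1], min(vals))
        hi[j + 1] = min(hi[j + 1], max(vals))

for j in range(n):
    print(f"P(x_{j} = 1) lies in [{lo[j]:.3f}, {hi[j]:.3f}]")

In a genuine bound propagation run, the same idea is applied over clusters of a few nodes and in all directions, so that each newly tightened bound can, in turn, tighten the bounds of the surrounding clusters.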

In Section [*] we explain the general method of bound propagation, which is described more algorithmically in Section [*]. In Section [*] we show the behaviour of the algorithm on a toy problem, whereas in Section [*] the results are shown for more realistic networks. Finally, in Section [*], we conclude and discuss possible paths for future research.

