Figure 11: A simple belief network for demonstrating the relationship
between Q-DAG reduction and computation caching.
The light-shaded node, C, is a query node, and the
dark-shaded node, B, is an evidence node.
Caching computations is another influential technique for optimizing inference in belief networks. Consider an example: suppose that we are applying the polytree algorithm to compute the probability of node C in the network of Figure 11. Given evidence on node B, the algorithm will compute this probability by passing the messages shown in Figure 12. If the evidence on B changes, however, an algorithm employing caching will not recompute the message representing causal support [Pearl 1988], since the value of this message does not depend on the evidence on B. This kind of optimization is typically implemented by caching the values of messages and by keeping track of which messages are affected by which evidence.
Figure 12: Message passing when C is queried and B is observed.
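The caching scheme just described can be sketched as follows. This is an illustrative interface, not the authors' implementation: each cached message records the set of evidence variables it depends on, and an evidence change invalidates only the dependent messages. The message names used below (e.g. a causal-support message with no evidence dependencies) are hypothetical.

```python
class MessageCache:
    """Cache of polytree messages with evidence-dependency tracking (sketch)."""

    def __init__(self):
        self._values = {}      # message id -> cached value
        self._depends_on = {}  # message id -> set of evidence variables

    def put(self, msg_id, value, evidence_vars):
        # Store a message along with the evidence variables it depends on.
        self._values[msg_id] = value
        self._depends_on[msg_id] = set(evidence_vars)

    def get(self, msg_id):
        # Return the cached value, or None if it was never stored or invalidated.
        return self._values.get(msg_id)

    def invalidate(self, changed_var):
        # Drop only those messages whose value depends on the changed evidence.
        stale = [m for m, deps in self._depends_on.items() if changed_var in deps]
        for m in stale:
            del self._values[m]
        return stale
```

Under this scheme, a causal-support message that depends on no evidence variables survives any evidence change, whereas a diagnostic message that depends on the evidence on B is invalidated when that evidence changes.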
Now consider the Q-DAG corresponding to this problem, shown in Figure 13(a). The nodes enclosed in dotted lines correspond to the message discussed above. These nodes have no evidence-specific nodes in their ancestor set and, therefore, can never change value due to evidence changes. In fact, numeric reduction will replace each of these nodes and its ancestors with a single node, as shown in Figure 13(b).
In general, if numeric reduction is applied to a Q-DAG, one is guaranteed the following: (a) if a Q-DAG node represents a message that does not depend on evidence, that node will not be re-evaluated when the evidence changes; and (b) numeric reduction guarantees this under any Q-DAG evaluation method, since it replaces the node and its ancestor set with a single root node.
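Numeric reduction can be sketched as a bottom-up traversal that collapses any evidence-independent subgraph into a constant node. The data representation below is an assumption made for illustration: a node is either ("const", v), ("evidence", name), or an operation node (op, children) with op in {"+", "*"}, roughly matching the addition and multiplication nodes of a Q-DAG.

```python
def reduce_qdag(node):
    """Collapse evidence-independent subgraphs of a Q-DAG into constant nodes.

    A node with no evidence-specific node among its descendants has a fixed
    value, so it (together with everything feeding into it) is replaced by a
    single ("const", value) node.
    """
    kind = node[0]
    if kind in ("const", "evidence"):
        return node  # leaves are left untouched
    op, children = node
    reduced = [reduce_qdag(c) for c in children]
    if all(c[0] == "const" for c in reduced):
        # Evidence-independent: evaluate once and replace with a constant.
        vals = [c[1] for c in reduced]
        if op == "+":
            total = sum(vals)
        else:  # op == "*"
            total = 1.0
            for v in vals:
                total *= v
        return ("const", total)
    return (op, reduced)
```

For example, a subgraph ("*", [("const", 0.2), ("const", 0.5)]) collapses to ("const", 0.1), while any subgraph containing an evidence node is preserved so it can be re-evaluated when the evidence changes.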
Figure 13: A Q-DAG (a) and its reduction (b).