The algorithm shown in Figure 3 starts just like basic
windowing: it selects a random subset of the examples, learns a theory
from these examples, and tests it on the remaining examples. However,
unlike basic windowing, it does not merely add incorrectly
classified examples to the window for the next iteration, but also
removes examples from the window if they are covered by consistent
rules. A rule is considered consistent if it did not cover any
negative example during the testing phase. Note that this does not
necessarily mean that the rule is consistent with all examples in the
training set because it may contradict an example that has not yet
been tested at the point where MaxIncSize misclassified
examples have been found. Thus, apparently consistent rules must be
remembered and tested again in the next iteration. However, testing is
much cheaper than learning, so we expect that removing the examples
covered by these rules from the window should keep the window small
and thus decrease learning time.
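The loop just described can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function names, parameter defaults, and the toy threshold learner in the usage example are all assumptions introduced here. `MaxIncSize` corresponds to the `max_inc_size` parameter.

```python
import random

def integrative_windowing(examples, learn, covers,
                          max_inc_size=3, init_size=4, seed=0):
    """Sketch of integrative windowing; all names are illustrative.

    examples -- list of (instance, is_positive) pairs
    learn    -- window -> list of rules (consistent on the window)
    covers   -- (rule, instance) -> bool
    """
    rng = random.Random(seed)
    # Start like basic windowing: learn from a random subset.
    window = rng.sample(examples, min(init_size, len(examples)))
    consistent = []                       # apparently consistent rules
    while True:
        # Remembered rules are re-tested together with fresh ones.
        rules = consistent + learn(window)
        errors, bad = [], set()           # bad: rules covering a negative
        for ex in examples:
            inst, pos = ex
            covering = {j for j, r in enumerate(rules) if covers(r, inst)}
            if not pos:
                bad |= covering           # covered a negative: inconsistent
            if bool(covering) != pos and ex not in window:
                errors.append(ex)
                if len(errors) >= max_inc_size:
                    break                 # stop testing early
        if not errors:
            return rules                  # no misclassified examples left
        consistent = [r for j, r in enumerate(rules) if j not in bad]
        # Shrink the window: drop positives covered by apparently
        # consistent rules, then add the misclassified examples.
        window = [(i, p) for i, p in window
                  if not (p and any(covers(r, i) for r in consistent))]
        window += errors

# Toy task (hypothetical): positive iff x >= 5; the learner proposes a
# single threshold rule from the smallest positive example it has seen.
data = [(x, x >= 5) for x in range(20)]
learn = lambda w: ([("ge", min(x for x, p in w if p))]
                   if any(p for _, p in w) else [])
covers = lambda r, x: x >= r[1]
theory = integrative_windowing(data, learn, covers)
```

In the toy run, each iteration lowers the learned threshold until the theory classifies every example correctly, while positives covered by consistent rules keep dropping out of the window.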
Figure 3: Integrative Windowing.