A frequent problem when using windowing with a rule learning algorithm is that good rules have to be rediscovered in iteration after iteration of the windowing procedure. Although correctly learned rules will add no more examples to the current window, they must nevertheless be re-learned in the next iteration as long as the current theory is not complete and consistent with the entire training set. We have developed a new version of windowing that exploits the fact that regions of the example space already covered by good rules need not be considered further in subsequent iterations. Because it successively integrates learned rules into the final theory, we have named our method Integrative Windowing.
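The core loop of this idea can be sketched as follows. This is a minimal Python illustration rather than the exact procedure: it assumes a generic rule learner learn_rules, examples carrying a label attribute, and rule objects with covers and predicts methods, all of which are illustrative names and not part of our method's specification.

```python
def integrative_windowing(examples, learn_rules, init_size=100, max_add=50):
    """Sketch: learn rules on a growing window; rules consistent with all
    examples still under consideration are integrated into the final theory
    and the examples they cover are removed, so they are never re-learned."""
    window = list(examples[:init_size])
    rest = list(examples[init_size:])
    theory = []                                    # accepted ("good") rules

    while True:
        rules = learn_rules(window)                # learn on the window only

        # Integrate rules that make no error on any example still in play;
        # the regions they cover drop out of all subsequent iterations.
        remaining_rules = []
        for rule in rules:
            covered = [e for e in window + rest if rule.covers(e)]
            if covered and all(rule.predicts(e) == e.label for e in covered):
                theory.append(rule)
                window = [e for e in window if not rule.covers(e)]
                rest = [e for e in rest if not rule.covers(e)]
            else:
                remaining_rules.append(rule)

        # Examples that the current rules still misclassify (or leave
        # uncovered) are added to the window for the next iteration.
        def misclassified(example):
            for r in theory + remaining_rules:
                if r.covers(example):
                    return r.predicts(example) != example.label
            return True                            # uncovered counts as an error

        errors = [e for e in rest if misclassified(e)]
        if not errors:
            return theory + remaining_rules        # consistent with all data
        new = errors[:max_add]
        window += new
        rest = [e for e in rest if e not in new]
```

The essential point of the sketch is that once a rule is accepted into the final theory, the examples it covers are removed from both the window and the remaining training data, so the same good rule never has to be rediscovered in later iterations.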