Integrating Initialization Bias and Search Bias
in Neural Network Learning
Joseph O'Sullivan
Carnegie Mellon University
The use of previously learned knowledge during learning has been shown
to reduce the number of examples required for good generalization, and
to increase robustness to noise in the examples.
In reviewing various means of using knowledge learned in
a domain to guide further learning in the same domain,
we discern two underlying classes:
methods that use
previous knowledge to initialize a learner (as an
initialization bias), and methods that use previous knowledge to
constrain a learner (as a search bias).
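To make the distinction concrete, the following is a minimal sketch (not the paper's implementation) contrasting the two classes on a single linear unit trained by gradient descent; the names `train`, `prior_w`, and `lam` are assumptions for this example. An initialization bias starts the search at previously learned weights, while a search bias adds a penalty that constrains the search toward them.

```python
# Minimal sketch (illustrative, not the paper's code): contrasting an
# initialization bias with a search bias on a linear unit trained by
# batch gradient descent on squared error.
import numpy as np

def train(X, y, w0, prior_w=None, lam=0.0, lr=0.1, epochs=200):
    """Gradient descent on squared error.

    Initialization bias: pass previously learned weights as w0.
    Search bias: pass them as prior_w with lam > 0, which adds the
    penalty (lam / 2) * ||w - prior_w||^2 to the objective.
    """
    w = w0.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)       # data-fit gradient
        if prior_w is not None and lam > 0.0:
            grad += lam * (w - prior_w)         # pull toward prior knowledge
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 0.0])   # last two features irrelevant
y = X @ true_w + 0.1 * rng.normal(size=40)
prior_w = true_w + 0.05 * rng.normal(size=5)    # accurate prior knowledge

w_init = train(X, y, w0=prior_w)                                  # initialization bias
w_search = train(X, y, w0=np.zeros(5), prior_w=prior_w, lam=0.5)  # search bias
```

Note that the initialization bias consumes the prior knowledge once, at the start of the search, while the search bias consumes it on every update; this is why the two can exploit the same knowledge differently.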
We show that such methods in fact exploit the same domain knowledge in different ways,
and that they can complement each other.
We demonstrate this by presenting a combined approach that both
initializes and constrains a learner.
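Reusing the hypothetical `train` sketch above, combining the two biases amounts to both starting the search at the prior weights and penalizing departure from them:

```python
# Combined approach (illustrative): initialize at prior_w AND
# constrain the search toward it with the penalty term.
w_both = train(X, y, w0=prior_w, prior_w=prior_w, lam=0.5)
```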
This combined approach is shown to outperform the individual methods
under two conditions:
that accurate previously learned domain knowledge is available, and
that the domain representation contains irrelevant features.