Faster-Learning Variations on Back-Propagation:
An Empirical Study

Scott E. Fahlman

Abstract

Most connectionist or "neural network" learning systems use some form of the back-propagation algorithm.  However, back-propagation learning is too slow for many applications, and it scales up poorly as tasks become larger and more complex.  The factors governing learning speed are poorly understood.  I have begun a systematic, empirical study of learning speed in backprop-like algorithms, measured against a variety of benchmark problems.  The goal is twofold: to develop faster learning algorithms and to help establish a methodology that will be of value in future studies of this kind.

This paper is a progress report describing the results obtained during the first six months of this study.  To date I have examined only a limited set of benchmark problems, but the results on these are encouraging: I have developed a new learning algorithm, Quickprop, that is faster than standard backprop by an order of magnitude or more and that appears to scale up well as the problem size increases.