The Recurrent Cascade-Correlation Architecture

Scott E. Fahlman


Abstract


Recurrent Cascade-Correlation (RCC) is a recurrent version of the Cascade-Correlation learning architecture of Fahlman and Lebiere.  RCC can learn from examples to map a sequence of inputs into a desired sequence of outputs.  New hidden units with recurrent connections are added to the network one at a time, as they are needed during training.  In effect, the network builds up a finite-state machine tailored specifically for the current problem.  RCC retains the advantages of Cascade-Correlation: fast learning, good generalization, automatic construction of a near-minimal multi-layered network, and the ability to learn complex behaviors through a sequence of simple lessons.  The power of RCC is demonstrated on two tasks: learning a finite-state grammar from examples of legal strings, and learning to recognize characters in Morse code.
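The recurrent hidden units mentioned above can be illustrated with a minimal sketch. The code below shows one hidden unit whose output feeds back into itself through a single self-recurrent weight, which is the mechanism that lets an RCC network carry state across time steps. This is a hedged illustration under assumed details (sigmoid activation, zero initial state, the function name `rcc_unit_outputs`), not the paper's exact implementation.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def rcc_unit_outputs(inputs_seq, weights, self_weight, bias=0.0):
    """Run one recurrent hidden unit over an input sequence.

    At each step: v(t) = sigmoid(sum_i w_i * x_i(t) + w_self * v(t-1) + bias).
    The self-connection lets the unit retain state from earlier inputs,
    so a network of such units can act like a finite-state machine.
    """
    v = 0.0  # previous activation; assumed to start at zero
    outputs = []
    for x in inputs_seq:
        net = sum(w * xi for w, xi in zip(weights, x)) + self_weight * v + bias
        v = sigmoid(net)
        outputs.append(v)
    return outputs
```

With a nonzero self-weight, the unit's response to an identical input differs from step to step, because the fed-back activation changes the net input; with the self-weight set to zero, the unit reduces to an ordinary feed-forward hidden unit.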