Neural Programming and an Internal Reinforcement Policy

Astro Teller and Manuela Veloso

Carnegie Mellon University, Pittsburgh PA 15213, USA

Abstract:

An important reason for the continued popularity of Artificial Neural Networks (ANNs) in the machine learning community is that gradient-descent backpropagation gives ANNs both a locally optimal weight-update procedure and a framework for understanding their learning performance. Genetic programming (GP) is also a successful evolutionary learning technique that provides powerful parameterized primitive constructs. Unlike ANNs, though, GP has no such principled procedure for changing parts of the learned system based on its current performance. This paper introduces Neural Programming, a connectionist representation for evolving programs that retains the benefits of GP. The connectionist model of Neural Programming permits a regression-based credit-blame procedure within an evolutionary learning system. We describe Internal Reinforcement, a general method for an informed feedback mechanism in Neural Programming, introduce an Internal Reinforcement procedure, and demonstrate its use through an illustrative experiment.
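The "locally optimal change procedure" the abstract credits to ANNs is ordinary gradient-descent backpropagation: each weight is nudged against the gradient of the error. The following minimal sketch (not from the paper; the single-unit model, data, and learning rate are illustrative assumptions) shows that procedure on one sigmoid unit, the simplest case of the chain-rule credit assignment that GP lacks:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=2000):
    """Fit w, b of y = sigmoid(w*x + b) by gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(w * x + b)
            # Chain rule: d/dw of 0.5*(y - target)^2, the essence of
            # backpropagation's credit-blame assignment.
            delta = (y - target) * y * (1.0 - y)
            w -= lr * delta * x
            b -= lr * delta
    return w, b

# Toy task: output 1 for positive inputs, 0 for negative ones.
w, b = train([(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)])
```

Each weight change is locally optimal in the sense that it follows the steepest-descent direction of the error surface; Internal Reinforcement aims to give evolving programs an analogous informed feedback signal.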





Eric Teller
Tue Oct 29 14:55:57 EST 1996