Perceptron review

The figure shows a perceptron unit, i, receiving inputs Ij, each weighted by a "synaptic weight" Wij.

The ith perceptron receives its input from n input units, which do nothing but pass on the input from the outside world. The output of the perceptron is a step function of its net input:

    Vi = 1 if hi ≥ 0, and Vi = 0 otherwise,

and

    hi = Σj Wij Vj + bi,

where bi is the bias of unit i. For the input units, Vj = Ij.
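As a minimal sketch, the output rule can be written in Python. The function name and the separate bias argument are illustrative choices, and the sketch assumes the unit fires when its net input is non-negative (which is consistent with the converged OR weights quoted below):

```python
def perceptron_output(weights, bias, inputs):
    """Step-function perceptron: 1 if the weighted sum plus bias is >= 0, else 0."""
    h = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if h >= 0 else 0
```

With the converged OR weights mentioned later (both weights 0.5, bias -0.5), `perceptron_output([0.5, 0.5], -0.5, [1, 0])` returns 1.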

The learning scheme is very simple. Let ti be the desired "target" output for a given input pattern, and Vi be the actual output. The error (called "delta") is the difference between the desired and the actual output, and the change in each weight is chosen to be proportional to delta times the input carried by that weight.

Specifically,

    δi = ti − Vi

and

    ΔWij = η δi Vj,

where η is the learning rate. The bias is updated in the same way, as though it were a weight from an extra unit whose output is always 1.
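A single application of this rule can be sketched as follows; the value η = 0.5 and the starting weights are arbitrary choices for illustration, not values given in the text:

```python
def learn_step(weights, bias, inputs, target, eta=0.5):
    """One delta-rule update: dW_j = eta * delta * V_j (bias treated as a
    weight from a unit whose output is always 1)."""
    h = sum(w * x for w, x in zip(weights, inputs)) + bias
    output = 1 if h >= 0 else 0        # step-function output
    delta = target - output            # error: desired minus actual
    new_weights = [w + eta * delta * x for w, x in zip(weights, inputs)]
    new_bias = bias + eta * delta
    return new_weights, new_bias
```

For example, starting from weights (0, 0) and bias -0.5, presenting the pattern (0, 1) with target 1 gives output 0, delta 1, and hence new weights (0, 0.5) and bias 0.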

We applied this rule iteratively to teach a perceptron to generate the truth table for OR. After each presentation of one of the four input patterns, we updated the weights according to this rule. By the fourth pass through the set of patterns (four "epochs"), it had converged to the correct solution, with the two weights equal to 0.5 and a bias of -0.5.
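The training run described above can be sketched end to end. The initial weights of zero, η = 0.5, and the presentation order of the patterns are assumptions not stated in the text; with these particular choices the run happens to reproduce the quoted result (weights 0.5 and 0.5, bias -0.5, no further changes by the fourth epoch):

```python
def train_or(eta=0.5, max_epochs=20):
    """Train a perceptron on the OR truth table with the delta rule.

    Assumed setup: zero initial weights and bias, eta = 0.5, patterns
    presented in a fixed order each epoch.
    """
    patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0
    for epoch in range(1, max_epochs + 1):
        changed = False
        for inputs, target in patterns:
            h = sum(wi * x for wi, x in zip(w, inputs)) + b
            delta = target - (1 if h >= 0 else 0)
            if delta != 0:
                w = [wi + eta * delta * x for wi, x in zip(w, inputs)]
                b += eta * delta
                changed = True
        if not changed:               # a full epoch with no updates: converged
            return w, b, epoch
    return w, b, max_epochs
```

Running `train_or()` returns the weights, bias, and the epoch on which no further updates were needed.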