15-494/694 Cognitive Robotics Lab 9: Auto-Encoder Networks
I. Software Update, SDK Update, and Initial Setup
Note: You can do this lab/homework assignment either
individually, or in teams of two.
At the beginning of every lab you should update your copy of the
cozmo-tools package. Do this:
$ cd ~/cozmo-tools
$ git pull
II. Auto-Encoder Networks
- Read about auto-encoder
networks here.
- Read about the 5-2-5 encoder network here (slides 15-16).
- Download and read through the encoder.py demo.
This demo trains an 8-2-8 auto-encoder. The hidden and output units use a tanh activation function (a minimal sketch of this architecture appears after this list).
- Run the demo. As the network trains, the demo plots the hidden unit
activations for the 8 input patterns every 100 epochs. Each pattern is
plotted as a point in the two-dimensional hidden unit state space. How
do the hidden unit patterns self-organize over time?
- Why does this model train for 5000 epochs while the mnist3 model
trained for just 15?
- Just as we visualized the input-to-hidden layer by drawing the
hidden unit states, we can visualize the hidden-to-output layer by
drawing the decision boundaries (w1*h1 + w2*h2 + b) = 0 for the
output units, as shown in slides 15-16. To do this you will need to
access the weight vectors, which you can do through
model.parameters(). Add code to plot these lines (a sketch of one
approach appears after this list).
- Write encoder2.py that trains a 15-3-15 encoder. How do the hidden unit
states self-organize in this three-dimensional space? (A 3D plotting
sketch appears after this list.)
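
For reference, here is a minimal sketch of an 8-2-8 auto-encoder with tanh
hidden and output units, written in PyTorch. It assumes the same setup as
encoder.py, but the class and variable names below are illustrative, not
necessarily those used in the demo.

import torch
import torch.nn as nn

# One-hot input patterns: 8 patterns, each with a single unit turned on.
patterns = torch.eye(8)

class AutoEncoder(nn.Module):           # illustrative name, not from encoder.py
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(8, 2)   # input-to-hidden weights
        self.output = nn.Linear(2, 8)   # hidden-to-output weights

    def forward(self, x):
        h = torch.tanh(self.hidden(x))  # 2-unit bottleneck (tanh)
        y = torch.tanh(self.output(h))  # reconstruction of the input (tanh)
        return y, h

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(5000):
    optimizer.zero_grad()
    y, h = model(patterns)
    loss = loss_fn(y, patterns)
    loss.backward()
    optimizer.step()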
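
One way to approach the decision-boundary exercise, sketched against the
hypothetical AutoEncoder class above: pull the hidden-to-output weights out
of the model and, for each output unit, plot the line where
w1*h1 + w2*h2 + b = 0. Adapt the attribute names to whatever encoder.py
actually uses.

import numpy as np
import matplotlib.pyplot as plt

# Hidden-to-output weights and bias; in the sketch above these live in
# model.output, but they are also the last two entries of model.parameters().
W = model.output.weight.detach().numpy()   # shape (8, 2): one row per output unit
b = model.output.bias.detach().numpy()     # shape (8,)

h1 = np.linspace(-1, 1, 100)
for i in range(W.shape[0]):
    w1, w2 = W[i]
    if abs(w2) > 1e-6:
        h2 = -(w1 * h1 + b[i]) / w2        # solve w1*h1 + w2*h2 + b = 0 for h2
        plt.plot(h1, h2, label='output unit %d' % i)
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.xlabel('hidden unit 1')
plt.ylabel('hidden unit 2')
plt.legend()
plt.show()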
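
For the 15-3-15 version, the hidden activations can be scattered in three
dimensions with matplotlib. A brief sketch, assuming a tensor h of shape
(15, 3) holding the hidden activations for the 15 input patterns (a
variable you would produce in your own encoder2.py):

import matplotlib.pyplot as plt

hv = h.detach().numpy()                  # h: (15, 3) hidden activations (assumed)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')    # 3D axes
ax.scatter(hv[:, 0], hv[:, 1], hv[:, 2])
for i, (x, y, z) in enumerate(hv):
    ax.text(x, y, z, str(i))             # label each pattern by its index
ax.set_xlabel('hidden 1')
ax.set_ylabel('hidden 2')
ax.set_zlabel('hidden 3')
plt.show()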
We are going to use an auto-encoder to detect cubes. Stay tuned.
Hand In
Nothing yet.