15-494/694 Cognitive Robotics Lab 7: Neural Nets and ALVINN
You can do this lab solo or as a team of 2 people, but not more than 2.
I. Neural Net Training
- Run the encoder.py demo by downloading the file and typing "python3 -i encoder.py". Then type encoder(6) to try a harder problem.
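The N-M-N encoder problem trains a network to reproduce one-hot input patterns through a narrow hidden layer, forcing it to learn a compact binary-like code. As a minimal sketch (encoder.py's actual architecture and training loop may differ), here is the training data and a forward pass for a 4-2-4 encoder in numpy:

```python
import numpy as np

def encoder_data(n):
    """One-hot patterns: the net must map each input back to itself."""
    return np.eye(n)

def forward(x, w1, w2):
    """Forward pass through an n -> h -> n sigmoid network."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(x @ w1)
    return sigmoid(hidden @ w2)

n, h = 4, 2                      # 4-2-4: two hidden units can code four patterns
rng = np.random.default_rng(0)
x = encoder_data(n)
w1 = rng.normal(scale=0.5, size=(n, h))
w2 = rng.normal(scale=0.5, size=(h, n))
y = forward(x, w1, w2)
print(y.shape)                   # (4, 4): one output pattern per input
```

As n grows, the hidden layer must pack more patterns into roughly log2(n) units, which is why epochs-to-criterion is worth tabulating.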
- How does neural net learning scale with problem difficulty? Fill in the table to investigate this:
Problem | Trial #1 Epochs | Trial #2 Epochs | Trial #3 Epochs | Trial #4 Epochs | Trial #5 Epochs | Average |
encoder(4) | | | | | | |
encoder(5) | | | | | | |
encoder(6) | | | | | | |
encoder(8) | | | | | | |
encoder(10) | | | | | | |
II. Experiment with Classic ALVINN
- Make a lab7 directory.
- Download the file data.zip into your lab7 directory and unzip it.
- Make a lab7/python subdirectory and download the file alvinn1.py into it.
- Read the alvinn1.py source code.
- Run the model by typing "python3 -i alvinn1.py". The "-i" switch tells python not to exit
after running the program. Move the 3 windows apart so they don't overlap.
- The hidden unit weights are displayed in Figure 2. You can examine an individual hidden unit up close, e.g., unit 2, by typing show_hidden(2).
- How balanced is the training set? Generate a histogram plot of desired steering directions.
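One way to get the histogram, sketched with stand-in data: if the desired outputs are gaussian bumps over the steering units (as in classic ALVINN), the peak unit of each target pattern gives that image's steering direction. The variable name targets and the 30-unit output width are assumptions about alvinn1.py:

```python
import numpy as np

# Hypothetical stand-in for alvinn1.py's desired-output tensor:
# one gaussian bump per image over 30 steering units.
rng = np.random.default_rng(0)
peaks = rng.integers(5, 25, size=200)
targets = np.exp(-0.5 * ((np.arange(30) - peaks[:, None]) / 2.0) ** 2)

directions = targets.argmax(axis=1)      # peak unit = desired steering direction
counts = np.bincount(directions, minlength=30)
print(counts.sum())                      # one vote per training image
# With matplotlib: plt.bar(range(30), counts); plt.show()
```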
- Because these are single-lane roads, we can double the training
set size by flipping the input images and also the desired output
patterns. Modify alvinn1.py to do that. How does this affect the
model?
- Retrieve the model parameters:
p = list(model.parameters())
- Let's see what the parameters look like:
[param.shape for param in p]
- What do the output unit bias connections look like?
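If the model is a stack of torch.nn.Linear layers (a guess at alvinn1.py's structure), the parameter list alternates weight, bias per layer, so the last tensor is the output-layer bias. A stand-in model with classic-ALVINN-like sizes (30x32 retina, a few hidden units, 30 steering outputs):

```python
import torch

# Stand-in model; alvinn1.py's actual architecture may differ.
model = torch.nn.Sequential(
    torch.nn.Linear(30 * 32, 5),
    torch.nn.Sigmoid(),
    torch.nn.Linear(5, 30),
    torch.nn.Sigmoid(),
)
p = list(model.parameters())
print([param.shape for param in p])  # weight, bias for each Linear layer
print(p[-1].shape)                   # torch.Size([30]): the output unit biases
```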
- Turn off weight decay; how does this affect the loss? How does
it affect the weights?
- Try increasing the learning rate (lr) from 0.1 to 0.5. What
effect does this have on the learning behavior?
- Type test_alvinn() to run the network on a test set of 97 similar road images. How well does it do?
- Continue the training by typing train_alvinn() again. Have we reached asymptote? How well do we do on the test set now?
- Write a function test_alvinn2() to test the network on two lane
roads, which are also supplied in the ALVINN dataset. Note that you
should not flip the two-lane road images.
- Use the supplied function closest_gaussian to compare the shapes of the gaussians produced for two-lane roads to the ideal gaussians supplied to you in the variable gaussians by plotting one against the other. Make a similar comparison for the output patterns produced on the test set of one-lane roads. This difference from an idealized gaussian is what Pomerleau called Output Appearance Reliability Estimation (OARE). Can we reliably detect two-lane road images using OARE?
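The idea behind OARE can be sketched as a distance from the network's output pattern to the closest idealized gaussian: a well-formed output sits near some ideal bump, while a distorted one (e.g., from a two-lane road) does not. The gaussian width and 30-unit layout below are assumptions, not the values used by closest_gaussian:

```python
import numpy as np

def oare(output, gaussians):
    """Distance from an output pattern to the closest idealized gaussian."""
    dists = np.linalg.norm(gaussians - output, axis=1)
    return dists.min()

# Hypothetical ideal gaussians: one bump per steering direction (30 units).
units = np.arange(30)
gaussians = np.exp(-0.5 * ((units[None, :] - units[:, None]) / 2.0) ** 2)

clean = gaussians[12]                    # exactly matches an ideal bump
noisy = clean + 0.3 * np.sin(units)      # distorted output pattern
print(oare(clean, gaussians), oare(noisy, gaussians))
```

Thresholding this distance is one way to flag images the network was never trained on.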
Hand In
Hand in the following in a file called handin.zip:
- Your partner name if you did this lab as a team of 2. (Both of you should make
separate hand-ins in Autolab but you can submit the same files.)
- Your table of results for the encoder experiment.
- Your modified alvinn1.py file.
- A brief writeup describing your observations about performance of the network:
- Show your histogram of steering directions in the training set.
- What were the effects of manipulating the weight decay parameter?
- What was the effect of increasing the learning rate by a large amount?
- Using the original values of weight decay and learning
rate, what is the mean loss on the original training set?
- What is the mean loss on the expanded training set?
- What is the mean loss on the test set?
- Compare with the mean loss on two-lane roads.
- How does OARE differ between the training set, test set, and two-lane roads?