16-299: Assignment 6: Part 2 of the Lab



This assignment focuses on improving control of the balancing robot. I am providing example code for balancing, which will allow you to collect data, make a better model, and improve the control (or at least walk through the steps of designing a controller).

My modeling approach is described on the Elegoo web pages. You have already modeled the motors driving the wheels (turn in Assignment 5 if you haven't already). I talk about modeling the body and using the IMU on the web page "Estimating missing measurements: wheel velocity and body angle". I talk about designing a controller on the web page "Getting the robot to balance".


Example software and data

Here is updated example software to work with. Use the Arduino program test1_MPU6050 to test your IMU (accelerometers and gyros) and estimate the sensor biases. To estimate the biases (you care about the accelerometer_y and gyro_x biases), prop your robot up so it is vertical and then run test1_MPU6050, which will print out averaged sensor readings. Copy the accelerometer_y and gyro_x averages to ay0 and gx0 in the balance10 program described below.
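If you also capture the raw readings over PuTTY, a few lines of offline analysis can double-check those averages (I use Python for the offline sketches in this assignment). This is a minimal sketch only: the log file name and column indices are placeholders for however your capture is formatted, not the actual test1_MPU6050 output.

```python
# Offline sanity check of IMU biases from a logged capture of raw readings
# taken with the robot propped up vertical.
# Assumes a whitespace-separated log; the file name and column indices are
# placeholders for wherever accelerometer_y and gyro_x appear in your log.
import numpy as np

data = np.loadtxt("imu_log.txt")   # hypothetical PuTTY capture
AY_COL, GX_COL = 1, 4              # adjust to match your log's columns

ay0 = data[:, AY_COL].mean()       # accelerometer_y bias
gx0 = data[:, GX_COL].mean()       # gyro_x bias (should be a near-constant offset)

print(f"ay0 = {ay0:.4f}, gx0 = {gx0:.4f}  (copy into balance10)")
```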


balance10 is the updated example balance software. After updating ay0 and gx0 from your IMU test above, try running this program on your robot. In theory, the robot starts out leaning on the plastic part that sticks out in front. You can make it easier for the robot (and more likely that the balance program will work) by reducing this initial lean: tape some cardboard or foam to the bottom of the plastic part it leans on. You can make it safer for the robot by attaching chopsticks to the sides of the robot sticking out the back so it can't fall on its back, or by attaching some more foam to the back. Be sure to strain-relieve both ends of the USB cable, so when the robot zooms off in some random direction you don't break the connector on the Arduino or your laptop.


Here is some example data from my robot balancing. The last run, f003, is a failed launch.


Part 1: Get your robot to balance using the example balance software

Make a video of the robot balancing and put it on YouTube. Put "16299 Elegoo Tumbller Robot" in the video title (along with anything else you want) so it will later show up when people google any of those terms. Make the video public. Turn in the URL along with what is requested below.

What did you have to do to get the robot to balance at all? Just adjust the sensor biases? Change the controller gains? Change the state estimator? Something else?


Part 2: Collect data and make a model of the robot

Use the PuTTY technique we have used before to collect data while the robot is balancing. Use this data to make a model of the robot. The model should predict the next state, given the current state and action. To make sure the model is not just of the robot standing still, you will have to have the robot do different motions while balancing. One way to do this is to launch from different initial leans in different directions. Another way is to develop new launching behaviors that add a feedforward control. Yet another way is to drive the robot along trajectories, or to add feedforward commands to "perturb" the robot while it is balancing. The example code shows how to drive the robot along sinusoidal and minimum jerk (maximally smooth) trajectories.
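For reference, a minimum jerk point-to-point move follows the standard quintic polynomial, which has zero velocity and acceleration at both endpoints. The sketch below is an offline Python illustration of that profile, not the Arduino implementation in the example code; the move distance, duration, and sample rate are just example numbers.

```python
# Minimum jerk (maximally smooth) point-to-point trajectory:
# x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t/T,
# with zero velocity and acceleration at both endpoints.
import numpy as np

def min_jerk(x0, xf, T, t):
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# Example: a 0.5 m move over 2 seconds, sampled at 100 Hz.
t = np.arange(0.0, 2.0, 0.01)
x_desired = min_jerk(0.0, 0.5, 2.0, t)
```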

The model can be a linear state-space model with A, B, C, and D matrices, or a nonlinear model that covers a wider range. It can also be anything else you want to try out, such as a neural network model.
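For the linear case, a least-squares fit from the logged data is a natural starting point: stack the logged states and commands and solve for the A and B matrices that best predict the next state. The sketch below assumes each log row holds a state vector followed by the motor command at that timestep; the file name, state dimension, and column layout are placeholders for your own log format.

```python
# Fit a discrete-time linear model x[k+1] = A x[k] + B u[k] by least squares.
# The file name, state dimension, and column layout are placeholders.
import numpy as np

data = np.loadtxt("balance_log.txt")     # hypothetical PuTTY capture
n_states = 4                             # e.g. wheel pos/vel, body angle/rate
X = data[:, :n_states]                   # states x[k]
U = data[:, n_states:n_states + 1]       # commands u[k]

X_now, U_now, X_next = X[:-1], U[:-1], X[1:]

# Solve X_next ~= [X_now U_now] [A B]^T in the least-squares sense.
Z = np.hstack([X_now, U_now])
theta, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
A_hat = theta[:n_states, :].T
B_hat = theta[n_states:, :].T

print("A =\n", A_hat, "\nB =\n", B_hat)
```

Comparing the one-step predictions of the fitted model against held-out data is a quick way to see whether a linear model is good enough or a nonlinear model is worth the effort.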

The most important thing to turn in for this section is a writeup of what you did and why you did it. What were the hypotheses or questions you were exploring, and what were the results? What did you learn?


Part 3: Refine the existing controller (change the gains) or develop a new controller that works better

Based on your model from Part 2, refine the existing controller (change the gains) or develop a new controller that works better. You could design a new LQR controller based on your model. You could develop a parameterized nonlinear controller, designed by optimizing the performance of a simulation based on your model. You could try out machine learning techniques such as reinforcement learning. Note that state estimation is one of the things that can be improved as part of the controller.
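If you go the LQR route, the gain computation from a fitted discrete-time model is only a few lines. This is a sketch under stated assumptions, not the design used in balance10: it takes A_hat and B_hat from the Part 2 fit, and the Q and R weights below are placeholders you will have to tune.

```python
# Discrete-time LQR gain from the fitted model x[k+1] = A x[k] + B u[k].
# A_hat and B_hat come from the Part 2 model fit; Q and R are placeholder weights.
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    P = solve_discrete_are(A, B, Q, R)                 # steady-state Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # control law u = -K x
    return K

Q = np.diag([1.0, 1.0, 10.0, 1.0])   # example: penalize body angle most heavily
R = np.array([[0.1]])                # penalty on the motor command
K = dlqr(A_hat, B_hat, Q, R)
print("LQR gains:", K)
```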

One direction that is interesting to explore is to make the controller adaptive. Can you eliminate the need to measure the sensor biases in advance by estimating them as the robot moves? Can you handle loads added to the robot (added weights on the robot, or having the robot drag something) by estimating these perturbations during operation and compensating for them?
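As one concrete starting point for the sensor-bias question, you could track the gyro bias online with a very slow first-order filter instead of measuring it in advance. The sketch below is just that idea in Python, not code from balance10; the adaptation rate and the "near upright" condition are placeholders, and the update should only run when the robot is close to balanced so real body rotation is not absorbed into the bias estimate.

```python
# Simple adaptive idea: leaky average of the raw gyro reading as a running
# bias estimate. ALPHA and the near_upright condition are placeholders.
ALPHA = 0.001   # slow adaptation rate (per control step)

def update_gyro_bias(gx0_est, gx_raw, near_upright):
    if near_upright:
        gx0_est += ALPHA * (gx_raw - gx0_est)   # drift the estimate toward the raw reading
    return gx0_est
```

A more principled version of the same idea is to augment the state estimator with the bias as an extra state and let the filter estimate it.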

How can you demonstrate that your controller works better? You could show that launches work from a wider range of initial conditions. You could show that the robot can handle bigger feedforward perturbations added to the motor commands, or biases or noise added to the sensors. You could add weights that change the body mass and the height of the body center of mass, and show that the new controller handles a wider range of robot modifications.

Make a video of your improved controller working (and hopefully some exciting tests) and put it on YouTube. Put "16299 Elegoo Tumbller Robot" in the video title (along with anything else you want) so it will later show up when people google any of those terms. Make the video public. Turn in the URL along with the writeup for this part.

The most important thing to turn in for this section is a writeup of what you did and why you did it. What were the hypotheses or questions you were exploring, and what were the results? What did you learn?