EE16B

Written 8-23-19

I had a lot of fun during EE16B, especially the labs. One of the refreshing parts of the labs was that everything was done on breadboards. I usually go straight to perfboards or PCBs whenever a messy prototype works, so this was an intriguing change. We did a bunch of smaller labs in the beginning, but the main part of the semester was building a voice-controlled robot. It combined basically every part of what we learned during the semester into one neat package. I’ll just give a brief overview of the different parts of the robot.

We started off with system identification of the motors. First we collected data on each wheel’s velocity at a range of PWM duty cycles. We then selected a small range that was roughly linear so that we could perform least squares and fit a linear model. From this linear model we selected an operating point, a velocity that both wheels could reasonably achieve, which gave us the best chance of keeping the car straight and correcting for errors.
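
To make the system-identification step concrete, here’s a minimal sketch of the least-squares fit, assuming the collected data sits in two NumPy arrays (`pwm` and `velocity`) for one wheel. The array values and variable names are made up for illustration; they aren’t the lab’s actual data or skeleton code.

```python
import numpy as np

# Hypothetical data for one wheel: PWM duty cycles and the measured velocities
# (encoder ticks per timestep). In the lab these come from the car itself.
pwm = np.array([100.0, 120.0, 140.0, 160.0, 180.0, 200.0])
velocity = np.array([0.8, 1.4, 2.0, 2.7, 3.3, 3.9])

# Fit the linear model v = theta * u + beta over the roughly linear range
# using least squares.
A = np.column_stack([pwm, np.ones_like(pwm)])
theta, beta = np.linalg.lstsq(A, velocity, rcond=None)[0]
print(f"model: v = {theta:.4f} * u + {beta:.4f}")
```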

After system identification, we implemented open-loop control (i.e. not using the encoders) to move the car. As expected, error accumulated over time and the car did not go straight at all. One thing to note is that motors tend to need a “jolt” to start up because of friction; since the jolt is applied unevenly, we planned to account for it as part of the error in our closed-loop testing.
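
For reference, the open-loop step amounts to inverting the linear model to pick a fixed PWM for the chosen operating velocity. The sketch below reuses the `theta`/`beta` names from the fit above and a hypothetical target `v_star`.

```python
# Invert v = theta * u + beta to find the constant PWM for the target velocity.
v_star = 2.5                          # hypothetical operating-point velocity
u_open_loop = (v_star - beta) / theta

# The car then drives with this fixed PWM. Any mismatch between the model and
# the real motors accumulates as error, which is why the car drifts off course
# without feedback.
```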

Since open-loop control didn’t work so well, we implemented closed-loop control, using the encoders to measure the error between the wheels. It was really interesting solving for the feedback gains (the k values) on paper and then tuning the loop on the actual car.
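
Here’s a rough sketch of what one closed-loop update looks like, assuming the controller measures `delta`, the difference between the left and right encoder counts, each timestep, and that each wheel has its own model parameters from system identification. The function and parameter names are mine, not the lab skeleton’s.

```python
def closed_loop_pwm(delta, v_star, theta_L, beta_L, theta_R, beta_R, k_L, k_R):
    """One timestep of the feedback controller (illustrative, not the lab code).

    delta  -- left encoder count minus right encoder count
    v_star -- target velocity at the operating point
    """
    # Slow the wheel that is ahead and speed up the one that is behind,
    # scaled by the feedback gains k_L and k_R.
    u_L = (v_star - k_L * delta - beta_L) / theta_L
    u_R = (v_star + k_R * delta - beta_R) / theta_R
    return u_L, u_R
```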

The next part was implementing voice control. We chose 4 words that we thought would have very different waveforms: EECS, Berkeley, Stanford (pronounced “Stanfuuurd”), and lockpicking. After recording about 30 samples of each word, we loaded the data into the IPython notebook and ran PCA (Principal Component Analysis) to pick out the two main components that best distinguish the words. After that we used centroid finding (which we actually learned in CS61A!) to complete our classifier: a new recording is classified as the word whose centroid is closest to the recording’s projection.
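
As a sketch of that pipeline (under my own assumptions about the data layout, not the lab’s notebook), suppose `recordings` is an n_samples × n_timesteps NumPy array of aligned snippets and `labels` is an array of the corresponding words. PCA via the SVD picks out the two main components, and classification is nearest centroid in that 2-D space.

```python
import numpy as np

def fit_classifier(recordings, labels):
    # Center the data, then take the top two right-singular vectors as the
    # principal components (PCA via the SVD).
    mean = recordings.mean(axis=0)
    centered = recordings - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:2].T                                  # two main components
    projected = centered @ basis
    centroids = {word: projected[labels == word].mean(axis=0)
                 for word in np.unique(labels)}
    return mean, basis, centroids

def classify(sample, mean, basis, centroids):
    # Project the new recording and pick the word with the nearest centroid.
    point = (sample - mean) @ basis
    return min(centroids, key=lambda word: np.linalg.norm(point - centroids[word]))
```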

Finally, we integrated all the systems together. Our robot actually worked pretty well about half the time. Surprisingly, I don’t actually have any videos of the whole system integrated together.
