The first real race is this weekend, so the push this past week was to finalize the model and write a control script to communicate with the car. On the model front, I decided to stop trying to model the gas and focus on getting a decent model of steering instead. That also meant dialing back the regularization and removing the dropout; the data I have may simply not be enough to support those techniques. The final mean squared error came out to about 0.028. Not great, but ok.
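For a sense of what that error number means: steering is typically scaled to [-1, 1], so an MSE of 0.028 corresponds to a typical per-frame error of sqrt(0.028) ≈ 0.17 of full steering range. A minimal sketch of the metric (the data here is made up for illustration):

```python
import numpy as np

# Hypothetical steering targets and predictions, scaled to [-1, 1].
y_true = np.array([0.0, 0.25, -0.5, 0.1])
y_pred = np.array([0.1, 0.20, -0.3, 0.0])

# Mean squared error: average of squared per-frame steering errors.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.015625 for these toy numbers
```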

Isaac Gerg was kind enough to send me a model he cooked up on the same data. His model converts the images to grayscale but is otherwise similar in style. The other big difference is that he goes for a wider model with one fewer layer. It takes longer to compute, but it got an MSE of about 0.02 on stream. Not half bad.

To make the model drive the car, we communicate over a serial port with a FUBARino. First we read the accelerometer input (as well as yaw, pitch, and roll). Then we combine this with webcam imagery (captured via Pygame) to make a steering prediction. The prediction gets rescaled and sent back down the serial port to steer the car. Step 3: Profit.
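The rescaling step can be sketched as mapping the model's [-1, 1] steering output onto a servo pulse width before writing it to the port. The [1000, 2000] microsecond range and the `S<pulse>\n` wire format below are assumptions for illustration, not the actual FUBARino firmware protocol:

```python
def steering_to_servo(pred, lo=1000, hi=2000):
    """Map a prediction in [-1, 1] to a servo pulse width in microseconds.
    The [1000, 2000] us range is a common RC convention, assumed here."""
    pred = max(-1.0, min(1.0, pred))  # clamp out-of-range predictions
    return int(round(lo + (pred + 1.0) / 2.0 * (hi - lo)))

def frame_command(pulse_us):
    """Encode the command as a newline-terminated ASCII line for the
    serial port (hypothetical wire format)."""
    return f"S{pulse_us}\n".encode("ascii")

# e.g. send with pyserial: serial.Serial("/dev/ttyUSB0", 115200).write(...)
cmd = frame_command(steering_to_servo(0.0))  # centered steering -> b"S1500\n"
```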