Off-stream, I was able to resolve the “all zeros” problem in my predictions. As I suspected previously, the activation functions were giving me trouble.

Switching to something simpler, I wanted to study the steering model. The easiest way to implement this was to just cut out the acceleration part of my existing model. Nothing like commenting out lines to improve a model, eh?
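In spirit, the change looks something like this (a Keras-style sketch with a placeholder input size and layer widths, not my literal code):

```python
from tensorflow import keras

inputs = keras.Input(shape=(100,))            # placeholder input size
features = keras.layers.Dense(64, activation="relu")(inputs)

steering = keras.layers.Dense(1, name="steering")(features)
# acceleration = keras.layers.Dense(1, name="acceleration")(features)  # cut for now

model = keras.Model(inputs, steering)
```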

One hangup I had was that the outputs were very restricted again, often zero. I believe this was due to the final “sigmoid” activation function. This function correctly limits outputs to between 0 and 1, but there’s a price. It’s statistically responsible when you’re computing the probability of something, but I’m really computing a floating-point value that happens to fall between 0 and 1 (or -1 and 1 if you don’t scale). Ideally, I’d like the distribution of output values to look something like the distribution of actual steering values. To that end, I changed the final activation to “linear”. The downside is that the model can now suggest steering values below 0 and above 1. Practically speaking, this isn’t an issue; I can just clamp the values the model outputs. But during training, it’s going to get a little confused. With a custom activation function, I could probably handle this too, but I’m not overly concerned.
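For reference, the swap plus the clamp is roughly this (again a Keras-style sketch; the sizes and the random inputs are placeholders):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(100,)),                   # placeholder input size
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="linear"),  # was activation="sigmoid"
])
model.compile(optimizer="adam", loss="mse")

# The model can now predict outside [0, 1]; clamp at inference time.
x = np.random.rand(4, 100)                       # stand-in for real inputs
steering = np.clip(model.predict(x), 0.0, 1.0)
```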

During the stream, I also explored how to visualize the steering. An astute viewer, robertsdionne, sent me some code. I had no idea that matplotlib had built-in animation support. I’ll definitely give that a try.
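For the curious, the basic pattern looks something like this (a sketch using matplotlib’s FuncAnimation with made-up steering values; robertsdionne’s actual code may differ):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Made-up steering values standing in for real model output.
steering = np.sin(np.linspace(0, 4 * np.pi, 200)) * 0.5 + 0.5

fig, ax = plt.subplots()
ax.set_xlim(-1, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("steering (left .. right)")
needle, = ax.plot([0, 0], [0, 1], lw=3)

def update(frame):
    # Tilt the needle according to the current steering value (0..1 -> -1..1).
    tilt = steering[frame] * 2 - 1
    needle.set_data([0, tilt], [0, 1])
    return needle,

anim = FuncAnimation(fig, update, frames=len(steering), interval=50, blit=True)
plt.show()
```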

While I was figuring out my own visualization routine, I let the model train for a bit. Ten minutes of working out while I organized my thoughts is probably the most training it’s been allowed to have. The results were surprisingly decent. Plots of actual and predicted steering matched up reasonably well, and it didn’t wobble too much when it really wanted to drive straight. This gives me hope that with proper training, this model can actually work!