Ah, Recurrent Neural Networks. So natural to describe (“Use some features computed earlier in this same computation!”), but always tricky to implement properly with real libraries. I took a whack at it this time, with mixed results.

I believe I have a simple RNN working with just a single frame of lookback. At least, based on the parameter count, I think that’s what must be happening. It’s always a little hard to tell whether I’m loading the data correctly for these kinds of models.
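The parameter count is a decent sanity check here. A minimal NumPy sketch of what a simple RNN cell does (sizes are hypothetical, not the actual model from this project): one frame of lookback means the hidden state carries information from exactly the previous timestep, and the layer's total weight count follows a fixed formula.

```python
import numpy as np

rng = np.random.default_rng(0)

input_dim, units = 8, 16  # hypothetical sizes, for illustration only
W_x = rng.standard_normal((input_dim, units)) * 0.1  # input weights
W_h = rng.standard_normal((units, units)) * 0.1      # recurrent weights
b = np.zeros(units)                                  # bias

def rnn_step(x_t, h_prev):
    """One recurrent step: mix the current input with the previous hidden state."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

# Run over a short sequence, carrying the hidden state forward one frame at a time.
seq = rng.standard_normal((5, input_dim))
h = np.zeros(units)
for x_t in seq:
    h = rnn_step(x_t, h)

# The parameter count that identifies this kind of layer:
# units * (input_dim + units + 1) weights in total.
n_params = W_x.size + W_h.size + b.size
print(n_params)  # 16 * (8 + 16 + 1) = 400
```

If the number reported by the library matches `units * (input_dim + units + 1)`, that's good evidence the layer really is a plain single-lookback RNN and not something fancier like an LSTM.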

I also learned about TimeDistributed layers, which can ease some of the burden of implementing more complex RNNs by applying the same layer to the input at each timestep. This would have been useful when doing Font Recognition, but at least by doing it “manually” I understood exactly what was happening. Tradeoffs of using libraries.
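The “manual” version isn't much code, which is part of why doing it by hand is instructive. A sketch of what a TimeDistributed wrapper does conceptually (again with hypothetical sizes): apply the same weights independently at every timestep, which is equivalent to one shared matrix multiply over the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(1)

timesteps, features, out_dim = 4, 6, 3  # hypothetical sizes
W = rng.standard_normal((features, out_dim)) * 0.1
b = np.zeros(out_dim)

x = rng.standard_normal((timesteps, features))  # one input sequence

# "Manual" version: loop over timesteps, reusing the same W and b each time.
manual = np.stack([x_t @ W + b for x_t in x])

# Vectorized version: because the weights are shared across timesteps,
# a single matmul over the whole sequence gives the identical result.
vectorized = x @ W + b

print(np.allclose(manual, vectorized))  # True
print(manual.shape)                     # (4, 3)
```

The wrapper buys convenience and speed; the loop makes the weight sharing explicit.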