LSTM Trained on a Classifier

About

I was excited to start looking through deeplearn.js and refamiliarize myself with RNN/LSTM models. But first, I took an interesting detour with my old friend SketchRNN. I had the strange idea that I could repurpose the Inter.js sketch (from the new Magenta release of SketchRNN) to interpolate between different fonts, instead of interpolating between the original drawing models.


Since this sketch relied on model weights to "mix" two different drawings, I thought the only way to make this work would be to train a model for each letter of each font. To interpolate a word between two fonts, I would draw each letter separately at different interpolation stages, then combine them into the final word rendered as a mixed font. This seemed to be working: after making each letter compatible with the updated SketchRNN model, I had one model trained and ready to go. Sampling it took far longer than expected, though, so I suspect the SVG files of the letters produced too many "lines" for the model. I decided to start from scratch — maybe I'm getting better at recognizing a lost cause. (But first, I tried to mix the letters manually in p5.)
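The manual p5 mixing idea amounts to interpolating matching points between two letter outlines rather than interpolating model weights. A minimal sketch of that, assuming both letters have already been resampled to the same number of points (the letter arrays here are hypothetical stand-ins):

```javascript
// Hypothetical letter outlines as arrays of {x, y} points,
// resampled to equal length beforehand.
const letterA = [{x: 0, y: 0}, {x: 10, y: 40}, {x: 20, y: 0}];
const letterB = [{x: 0, y: 10}, {x: 10, y: 10}, {x: 20, y: 10}];

// Linearly interpolate matching points:
// t = 0 gives letterA, t = 1 gives letterB.
function lerpPath(a, b, t) {
  return a.map((p, i) => ({
    x: p.x + (b[i].x - p.x) * t,
    y: p.y + (b[i].y - p.y) * t,
  }));
}

// Halfway blend between the two letters; in a p5 sketch this
// array would be drawn with beginShape()/vertex()/endShape().
const mixed = lerpPath(letterA, letterB, 0.5);
```

Sweeping `t` from 0 to 1 across each letter of a word would approximate the font interpolation without any trained models at all.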


I headed back to deeplearn.js and decided to retrain an LSTM model using a video classifier's output as its input. My thinking was that if a video had a predictable sequence, the model could predict future classifications of that recording. Given how seemingly random the classifier output was, I expected the LSTM to produce complete gibberish.
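The data-prep step for that idea can be sketched without any ML library: treat the per-frame classifier labels as a sequence, one-hot encode them, and slice sliding windows where each window predicts the next frame's label. This is a hedged illustration — the `labels` array, window length, and encoding scheme are all assumptions, not the actual training pipeline:

```javascript
// Hypothetical classifier output: one label per video frame.
const labels = ['cat', 'cat', 'dog', 'cat', 'dog', 'dog'];

// Build a vocabulary of distinct labels and an index lookup.
const vocab = [...new Set(labels)];
const toIndex = Object.fromEntries(vocab.map((l, i) => [l, i]));

// One-hot encode a single frame's label.
function oneHot(label) {
  const v = new Array(vocab.length).fill(0);
  v[toIndex[label]] = 1;
  return v;
}

// Sliding windows: each run of `seqLen` frames is an input,
// and the label of the frame right after it is the target.
function makeWindows(seq, seqLen) {
  const pairs = [];
  for (let i = 0; i + seqLen < seq.length; i++) {
    pairs.push({
      input: seq.slice(i, i + seqLen).map(oneHot),
      target: toIndex[seq[i + seqLen]],
    });
  }
  return pairs;
}

const windows = makeWindows(labels, 3);
```

Each `input`/`target` pair would then be fed to the LSTM; at inference time, the last `seqLen` classifications from the live video would seed the prediction of the next one.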

(Screenshots from October 27, 28, and 30, 2017.)