Programming A-Z

LSTM trained on Classifier

About

I was excited to start looking through deeplearn.js and refamiliarize myself with RNN/LSTM models. But first, I took an interesting detour with my old friend SketchRNN. I had the strange idea that I could repurpose the Inter.js sketch (on the new Magenta release of SketchRNN) to interpolate different fonts, instead of interpolating the original drawing models.

 

Since this sketch relied on model weights to "mix" two different drawings, I thought the only way to make this work would be to train a model for each letter of each font. To interpolate a word in those fonts, I would draw each letter separately at different stages of interpolation, then combine them into the final word written in a mixed font. This seemed to be working: after making each letter compatible with the updated SketchRNN model, I had one model trained and ready to go. Sampling this model took longer than expected, though; I suspect the SVG files produced too many "lines" per letter. I decided to start from scratch, since maybe I'm getting better at recognizing a lost cause (but first, I tried to manually mix the letterforms in p5, as sketched below).
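Here's a minimal sketch of what that manual p5 mix looks like, assuming each letter has already been reduced to an equal-length array of {x, y} points sampled from its SVG path (the point data below is placeholder data, not the real letterforms or the SketchRNN model):

```javascript
// Manually "mixing" two fonts in p5.js by interpolating between
// matching point arrays sampled from each letter's SVG path.
let fontA, fontB;
let t = 0; // 0 = pure font A, 1 = pure font B

function setup() {
  createCanvas(400, 400);
  // Placeholder letterforms; in practice these come from the SVG files.
  fontA = [{x: 100, y: 300}, {x: 150, y: 100}, {x: 200, y: 300}];
  fontB = [{x: 100, y: 300}, {x: 150, y: 150}, {x: 200, y: 300}];
}

function draw() {
  background(255);
  t = map(mouseX, 0, width, 0, 1); // drag to blend between the fonts
  noFill();
  beginShape();
  for (let i = 0; i < fontA.length; i++) {
    // Linear interpolation between corresponding points of each letter
    const x = lerp(fontA[i].x, fontB[i].x, t);
    const y = lerp(fontA[i].y, fontB[i].y, t);
    vertex(x, y);
  }
  endShape();
}
```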

 

I headed back to deeplearn.js and decided to train an LSTM model with a video classifier's output as its input. My thinking was that if a video had a predictable sequence, an LSTM trained on the classifier's labels could predict future classifications of that recording. Given how seemingly random the classifier output was, I expected the LSTM model to produce complete gibberish.
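The data-collection half of that pipeline is simple enough to sketch. Assuming a hypothetical classifyFrame() wrapper around the video classifier (a placeholder, not the deeplearn.js API itself), each frame's top label gets appended to a text corpus that the LSTM later trains on:

```javascript
// Turn per-frame classifier output into an LSTM training corpus.
// classifyFrame() is a hypothetical async wrapper around the video
// classifier that resolves to results sorted by confidence.
const corpus = [];

function onFrame(video) {
  classifyFrame(video).then((results) => {
    // Keep only the highest-confidence label for this frame
    corpus.push(results[0].label);
  });
}

// After recording, join the labels into a single training text so the
// LSTM can learn (or fail to learn) transitions between labels.
function exportCorpus() {
  return corpus.join(' ');
}
```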

[Screenshots: LSTM classifier experiments]

Color Word2Vec

About

In experimenting with a JS-based Word2Vec model, I had an ulterior motive: getting comfortable with Three.js. The premise of this sketch was to visualize color gradients in 3D space and use a word2vec model to predict relationships between colors. The color tiles on the top bar of the screen represent a "this is to that, as this is to ___" prediction method, where the final color is determined by an object's position. The prediction works well, as sketched below; the user interface still needs work, especially in fine-tuning the dragging/selection method.
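For intuition, here's the analogy step written with plain vector arithmetic over RGB triples standing in for embeddings (an assumption for illustration; the actual sketch queries the word2vec model):

```javascript
// "this is to that, as this is to ___": answer ≈ b - a + c, then take
// the nearest neighbor in the palette. RGB triples stand in for the
// word2vec embedding vectors here.
const palette = {
  red:  [255, 0, 0],
  pink: [255, 105, 180],
  blue: [0, 0, 255],
  // ... more named colors
};

function analogy(a, b, c) {
  // Apply the offset between a and b to c
  const target = a.map((_, i) => b[i] - a[i] + c[i]);
  let best = null;
  let bestDist = Infinity;
  for (const [name, vec] of Object.entries(palette)) {
    const d = Math.hypot(...vec.map((v, i) => v - target[i]));
    if (d < bestDist) { bestDist = d; best = name; }
  }
  return best;
}

// With a fuller palette, analogy(palette.red, palette.pink, palette.blue)
// lands near a light blue: "red is to pink as blue is to ___".
```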

[Screenshots: Color Word2Vec in Three.js]

Regex Image

About

As a supplement to my experiment The Last Question, I set out to generate an image from regex characters and symbols. I kept the short-story generation from the first iteration and added a function that *attempts* to redraw an uploaded image, selecting each character based on the value of the pixel it replaces.
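A rough sketch of that redraw step in p5.js, assuming a fixed dark-to-light ramp of regex metacharacters (the ramp ordering, cell size, and file name are my own choices for illustration):

```javascript
// Redraw an image with regex symbols: each pixel's brightness picks a
// character from a dark-to-light ramp of regex metacharacters.
const ramp = '@$&%*+^?.| ';
let img;

function preload() {
  img = loadImage('upload.png'); // hypothetical uploaded file
}

function setup() {
  createCanvas(img.width * 8, img.height * 8);
  background(255);
  fill(0);
  textSize(8);
  img.loadPixels();
  for (let y = 0; y < img.height; y++) {
    for (let x = 0; x < img.width; x++) {
      const i = 4 * (y * img.width + x);
      const b = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
      // Darker pixels get denser characters from the ramp
      const c = ramp.charAt(floor(map(b, 0, 255, 0, ramp.length - 1)));
      text(c, x * 8, y * 8 + 8);
    }
  }
}
```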


[Screenshots: Regex Image output]

The Last Question

About

live website

This website was an experiment in accessing DOM elements in p5 and generating text with Rita.js. Once an image is uploaded, the RGB value of each pixel determines the generated text, sourced from Isaac Asimov's short story "The Last Question". I mapped the combined RGB value of each pixel to the order of words as they appear in the story: the darker the pixel, the further into the text the word lives.
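A compact sketch of that mapping, assuming the full story has been loaded into a storyText string (the names here are placeholders):

```javascript
// Map a pixel's combined RGB value to a word in the story. Darker
// pixels land deeper in the text, so the range is inverted.
const words = storyText.split(/\s+/);

function wordForPixel(r, g, b) {
  const combined = r + g + b; // 0 (black) .. 765 (white)
  const index = Math.floor(map(combined, 0, 765, words.length - 1, 0));
  return words[index];
}
```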


[Screenshots: The Last Question website]