Generating Artificial Shakespeare using a Recurrent Neural Network

A regular feed-forward neural network (FNN) accepts a single input and then makes a prediction. A recurrent neural network (RNN) accepts a sequence of inputs, carrying an internal state from one item to the next so that earlier inputs influence later predictions. RNNs are very complex, very interesting, and very powerful. RNNs have been responsible for many of the recent (since about 2014) advances in speech recognition (Siri, Cortana, Alexa) and natural language processing.
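To make the distinction concrete, here is a tiny illustration (my own, not part of the demo) of a plain recurrent step in NumPy. The sizes and weights are made up; the point is just that the hidden state is threaded through the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                 # made-up sizes for illustration
W_xh = rng.standard_normal((n_hid, n_in)) * 0.1    # input-to-hidden weights
W_hh = rng.standard_normal((n_hid, n_hid)) * 0.1   # hidden-to-hidden (recurrent) weights
b_h = np.zeros(n_hid)

def rnn_step(x_t, h_prev):
    # One time step: the new hidden state depends on the current input
    # AND on the previous hidden state, so earlier inputs affect later ones.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

sequence = [rng.standard_normal(n_in) for _ in range(5)]  # a 5-step input sequence
h = np.zeros(n_hid)
for x_t in sequence:
    h = rnn_step(x_t, h)   # the state is carried forward across the sequence
print(h.shape)             # (8,) -- the final state summarizes everything seen so far
```

An LSTM replaces this simple tanh update with a gated cell that is much better at remembering information over long sequences, but the basic idea of carrying state forward is the same.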

The term RNN is very general — there are dozens of major variations of RNNs that are very different from each other. The most common form of an RNN is called a long short-term memory (LSTM) network. And there are many variations of LSTMs too.

The Microsoft CNTK deep learning code library can be used to create an LSTM. I stumbled across an example, one I've also seen in several other places, in the CNTK documentation (I don't remember exactly where; the CNTK documentation is in chaos right now as they prepare to ship version 2.0 sometime this summer).
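For reference, here is a rough sketch of what a character-level LSTM model definition might look like using CNTK's Python layers API. This is my own illustration rather than the code from the documentation example, and all of the sizes (vocab_size, hidden_dim, the embedding dimension, the learning rate) are placeholders.

```python
import cntk as C

vocab_size = 65     # number of distinct characters in the corpus (placeholder)
hidden_dim = 256    # LSTM hidden state size (placeholder)

# Input: a sequence of one-hot encoded characters; label: the next character.
x = C.sequence.input_variable(vocab_size)
y = C.sequence.input_variable(vocab_size)

# A character-level LSTM: embed each character, run it through an LSTM,
# then project the hidden state to a score for each possible next character.
model = C.layers.Sequential([
    C.layers.Embedding(48),                          # dense character embedding (placeholder size)
    C.layers.Recurrence(C.layers.LSTM(hidden_dim)),  # LSTM applied across the sequence
    C.layers.Dense(vocab_size)                       # unnormalized next-character scores
])
z = model(x)

loss = C.cross_entropy_with_softmax(z, y)   # softmax + cross-entropy against the true next char
error = C.classification_error(z, y)

learner = C.sgd(z.parameters,
                lr=C.learning_rate_schedule(0.05, C.UnitType.sample))
trainer = C.Trainer(z, (loss, error), [learner])
```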

The demo takes some Shakespeare text and uses an LSTM to create a model that, given a set of input characters, predicts the next character. After the model has been created, you can seed it with one or more characters and get a new character. Then the seed plus the new character is fed back in to generate the character after that. And so on.
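The generation loop itself is simple in concept. Below is a self-contained sketch of the seed-then-sample idea; the stand-in next_char_probs function just returns a uniform distribution, whereas in the actual demo the trained LSTM supplies the probabilities (after each character has been one-hot encoded), which is what makes the output look like Shakespeare.

```python
import numpy as np

chars = list("abcdefghijklmnopqrstuvwxyz ,.!?:\n")   # toy character set (placeholder)
rng = np.random.default_rng(0)

def next_char_probs(context):
    # Stand-in for the trained model: in the real demo, the LSTM is fed the
    # context characters and returns a probability for each possible next character.
    return np.full(len(chars), 1.0 / len(chars))

def generate(seed, length=100):
    text = list(seed)
    for _ in range(length):
        probs = next_char_probs(text)          # distribution over the next character
        ix = rng.choice(len(chars), p=probs)   # sample (rather than argmax) for variety
        text.append(chars[ix])                 # append and feed the longer context back in
    return "".join(text)

print(generate("First Citizen:\n"))
```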

In the image below, after the first few iterations of training, the model isn’t very good and generates nonsense like “p ee e ii iii . . .” But after a little bit more training, the model starts generating text that actually resembles Shakespeare, such as:

Jos whath ath all

Come, come.

First Citizen:
Soft! Who comes here?

If you let the demo program run and do more training, the generated text gets better, but only up to a certain point.

Moral of the story: Recurrent neural networks such as LSTMs are complex, powerful systems for analyzing and generating text data such as sentences and paragraphs.
