Recurrent Neural Networks

I wrote an article titled “Step Up To Recurrent Neural Networks” in the October 2015 issue of Visual Studio Magazine.


Neural networks make predictions. For example, a neural network might predict the political leaning (conservative, moderate, liberal) of a person based on features such as age, annual income, sex (male, female), and so on. Regular neural networks are called feed-forward networks because they process each data item in a single input-process-output pass, independently of any other data items.
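To make the input-process-output idea concrete, here is a minimal sketch of a feed-forward pass in Python with NumPy. The feature values, layer sizes, and weight names are made up for illustration and are not taken from the magazine article; a real network would use trained weights rather than random ones.

import numpy as np

def feed_forward(x, W_ih, b_h, W_ho, b_o):
    # hidden layer: weighted sum of the inputs plus bias, then tanh activation
    h = np.tanh(np.dot(x, W_ih) + b_h)
    # output layer: softmax over the three class scores
    scores = np.dot(h, W_ho) + b_o
    exp = np.exp(scores - np.max(scores))
    return exp / np.sum(exp)

# one person: normalized age, normalized income, encoded sex (values are made up)
x = np.array([0.30, 0.55, -1.0])
rng = np.random.default_rng(0)
W_ih = rng.normal(scale=0.1, size=(3, 4))   # input-to-hidden weights
b_h = np.zeros(4)
W_ho = rng.normal(scale=0.1, size=(4, 3))   # hidden-to-output weights
b_o = np.zeros(3)
print(feed_forward(x, W_ih, b_h, W_ho, b_o))  # pseudo-probabilities for (conservative, moderate, liberal)

Each call to feed_forward depends only on the single input vector x; no state is carried over from one prediction to the next.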


Recurrent neural networks have internal feedback loops that give them a kind of memory. This design allows recurrent neural networks to solve some types of problems that can’t be solved by regular feed-forward networks. For example, predicting the next letter in a sequence of text is well suited to recurrent neural networks because the value of a letter depends to some extent on the value of the previous letter (if the previous letter is ‘w’, then the next letter is much more likely to be an ‘h’ than a ‘z’, and so on).

There are actually several different types of recurrent neural networks. My article describes one of the simplest forms, sometimes called an Elman network.
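As a rough sketch of what the feedback loop looks like, the Python code below runs one Elman-style step at a time, combining the current input letter with the previous hidden state. The layer sizes, weight names, and one-hot letter encoding are my own illustrative assumptions, not code from the article, and the untrained random weights are only there to show the mechanics.

import numpy as np

def elman_step(x_t, h_prev, W_xh, W_hh, b_h, W_hy, b_y):
    # the new hidden state mixes the current input with the previous hidden
    # state; this feedback loop is the network's memory
    h_t = np.tanh(np.dot(x_t, W_xh) + np.dot(h_prev, W_hh) + b_h)
    y_t = np.dot(h_t, W_hy) + b_y   # raw scores for the next letter
    return h_t, y_t

# toy sizes: 26 one-hot letters in, 8 hidden units, 26 letter scores out
rng = np.random.default_rng(1)
W_xh = rng.normal(scale=0.1, size=(26, 8))
W_hh = rng.normal(scale=0.1, size=(8, 8))
b_h = np.zeros(8)
W_hy = rng.normal(scale=0.1, size=(8, 26))
b_y = np.zeros(26)

h = np.zeros(8)                      # hidden state starts at zero
for ch in "wh":                      # feed the letters one at a time
    x = np.zeros(26)
    x[ord(ch) - ord('a')] = 1.0      # one-hot encode the current letter
    h, y = elman_step(x, h, W_xh, W_hh, b_h, W_hy, b_y)
# after the loop, y holds (untrained) scores for whichever letter follows "wh"

Because h is passed back in on each step, the scores computed after seeing ‘h’ can reflect the fact that a ‘w’ came before it.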
