Neural Network Back-Propagation using Python

I wrote an article titled “Neural Network Back-Propagation using Python” in the December 2014 issue of Visual Studio Magazine. See http://visualstudiomagazine.com/articles/2014/12/01/back-propagation-using-python.aspx.


According to several data sources, the use of the Python language is growing rapidly. Python is well suited for programs that might have to run on different operating system platforms, and for programs that don't require a sophisticated UI. So, Python is a good choice for implementing neural networks.

There is no single best approach for learning a new programming language. One strategy is to carefully examine a program that isn't trivial but also isn't too complex. A Python program that implements the back-propagation algorithm meets these criteria.

One of the trickiest parts of the back-propagation algorithm is not so much the algorithm itself, but rather the terminology used in the mathematical explanation of the algorithm, notably in the Wikipedia entry on the topic. To be honest, my Visual Studio Magazine article takes some liberties with terminology in the interest of simplicity. For example, a "gradient" is actually the collection of all the partial derivatives of the error function, one with respect to each weight from node i to node j. But it's easier to call a single partial derivative a "gradient." I'm working on an article that presents a C# implementation of back-propagation that exactly matches the terminology used in the Wikipedia entry on the topic.
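To make the distinction concrete, here is a minimal sketch (not the article's code) of how the partial derivatives for the hidden-to-output weights can be computed. Each entry of the returned matrix is one partial derivative with respect to a single weight; the full collection of those entries is the gradient for that layer. The sketch assumes a logistic sigmoid output layer with squared error, and all names are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_layer_partials(h, w, t):
    # h : outputs of the hidden layer, shape (n_hidden,)
    # w : hidden-to-output weights, shape (n_hidden, n_output)
    # t : target values, shape (n_output,)
    o = sigmoid(h @ w)                   # output node values
    delta = -(t - o) * o * (1.0 - o)     # dE/dz_k for each output node k
    # One partial derivative dE/dw[j,k] per weight; the matrix of all
    # these partials is the gradient for this layer.
    return np.outer(h, delta)

# Example usage with made-up values:
h = np.array([0.62, 0.38, 0.71])
w = np.random.uniform(-0.5, 0.5, size=(3, 2))
t = np.array([1.0, 0.0])
grads = output_layer_partials(h, w, t)   # shape (3, 2): one entry per weight

In a full back-propagation implementation, each weight would then be adjusted by subtracting its partial derivative times a learning rate.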
