Coding Neural Network Back-Propagation using C#

I wrote an article titled “Coding Neural Network Back-Propagation using C#” in the April 2015 issue of Visual Studio Magazine.

As I explain in the article, you can think of a neural network as a complex mathematical function that accepts numeric inputs and generates numeric outputs. The values of the outputs are determined by the input values, the number of so-called hidden processing nodes, the hidden and output layer activation functions, and a set of weights and bias values.
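
The idea of a network as a function can be sketched in C#. This is a minimal, hypothetical illustration, not the article's actual code: it assumes tanh activation on the hidden layer and, for simplicity, identity activation on the output layer (a real classifier would typically use softmax).

```csharp
using System;

public class FeedForwardDemo
{
  // Sketch of a fully connected feed-forward computation.
  // Assumes tanh hidden activation and identity output activation;
  // a real network would often use softmax on the output layer.
  public static double[] ComputeOutputs(double[] inputs,
    double[][] ihWeights, double[] hBiases,
    double[][] hoWeights, double[] oBiases)
  {
    int numHidden = hBiases.Length;
    int numOutput = oBiases.Length;

    double[] hidden = new double[numHidden];
    for (int j = 0; j < numHidden; ++j)
    {
      double sum = hBiases[j];  // start with the bias
      for (int i = 0; i < inputs.Length; ++i)
        sum += inputs[i] * ihWeights[i][j];
      hidden[j] = Math.Tanh(sum);  // hidden-layer activation
    }

    double[] outputs = new double[numOutput];
    for (int k = 0; k < numOutput; ++k)
    {
      double sum = oBiases[k];
      for (int j = 0; j < numHidden; ++j)
        sum += hidden[j] * hoWeights[j][k];
      outputs[k] = sum;  // identity activation in this sketch
    }
    return outputs;
  }
}
```

With all weights and biases set to zero, every hidden node computes tanh(0) = 0 and every output is 0, which is a quick sanity check.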

A fully connected neural network with m inputs, h hidden nodes, and n outputs has (m * h) + h + (h * n) + n weights and biases. For example, a neural network with 4 inputs, 5 hidden nodes, and 3 outputs has (4 * 5) + 5 + (5 * 3) + 3 = 43 weights and biases. Training a neural network is the process of finding values for the weights and biases so that, for a set of training data with known input and output values, the computed outputs of the network closely match the known outputs.
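
The weight-and-bias count above is simple enough to express as a one-line helper; the method name here is my own, not from the article:

```csharp
public class NetworkSize
{
  // Number of weights and biases in a fully connected
  // m-input, h-hidden-node, n-output neural network:
  // (m * h) input-to-hidden weights, h hidden biases,
  // (h * n) hidden-to-output weights, n output biases.
  public static int NumWeightsAndBiases(int m, int h, int n)
  {
    return (m * h) + h + (h * n) + n;
  }
}
```

For the 4-5-3 network in the example, this returns (4 * 5) + 5 + (5 * 3) + 3 = 43.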

By far the most common technique used to train a neural network is called the back-propagation algorithm. Back-propagation is quite complicated. The key to back-propagation is to compute what’s called a gradient for each weight and bias. The gradient is a numeric value that tells you whether to increase or decrease its associated weight or bias, and hints at how big the increase or decrease should be.
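
Once a gradient has been computed, the update itself is simple. The following sketch shows the basic update rule only, not how the gradients are computed (which is the hard part of back-propagation); the learning-rate parameter is an assumption on my part, though some form of it appears in essentially every back-propagation implementation:

```csharp
public class WeightUpdate
{
  // One back-propagation-style update for a single weight or bias.
  // The sign of the gradient says which direction increases error,
  // so the weight moves in the opposite direction; the learning
  // rate scales how big the step is.
  public static double Update(double weight, double gradient,
    double learnRate)
  {
    return weight - (learnRate * gradient);
  }
}
```

A positive gradient decreases the weight and a negative gradient increases it, which matches the intuition in the paragraph above.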

This entry was posted in Machine Learning. Bookmark the permalink.