Neural Network Training using Back-Propagation

Training a neural network is the process of finding values for the network's weights and biases so that, for a given set of inputs with known outputs (the training data), the neural network generates computed outputs that are as close as possible to the known training outputs. I wrote an article, "Neural Network Training using Back-Propagation," in the September 2013 issue of Visual Studio Magazine; see that article for the full details.

Training a neural network is a difficult problem because there are quite a few weights and biases to solve for, and an essentially infinite number of possible combinations of values for those weights and biases. Training a neural network is a kind of numerical optimization problem where the goal is to minimize the error, which is the difference between the known outputs of the training data and the generated outputs. There are several approaches to training a neural network. Three of the most common are back-propagation, particle swarm optimization, and evolutionary optimization.
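The error being minimized can be made concrete with a short sketch. Here the error is measured as mean squared error; the target and computed values are illustrative numbers, not data from the article.

```python
# Minimal sketch: training error as the mean squared difference between
# the known target outputs and the network's computed outputs.

def mean_squared_error(targets, computed):
    """Average squared difference over all output values."""
    n = len(targets)
    return sum((t - c) ** 2 for t, c in zip(targets, computed)) / n

targets = [0.0, 1.0, 1.0, 0.0]   # known training outputs
computed = [0.1, 0.8, 0.9, 0.2]  # outputs generated by the network

error = mean_squared_error(targets, computed)
print(error)  # about 0.025; training tries to drive this toward zero
```

Any of the three training techniques can be viewed as a search through weight-and-bias space for values that make this error as small as possible.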

All three training techniques require you to specify values for several free parameters. For back-propagation you must supply values for the learning rate and the momentum. For particle swarm optimization you must supply values for the inertia, cognitive, and social weights. For evolutionary optimization you must supply values for the mutation rate and the selection pressure.
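The free parameters for each technique might be organized like this. The parameter names come from the text above; the numeric values are purely illustrative assumptions, not recommendations from the article.

```python
# Hypothetical free-parameter settings for each training technique.
# The parameter names follow the text; the values are illustrative only.

back_prop_params = {
    "learning_rate": 0.05,   # step size applied to each weight update
    "momentum": 0.01,        # fraction of the previous update carried forward
}

pso_params = {
    "inertia": 0.729,          # damping on a particle's current velocity
    "cognitive_weight": 1.49,  # pull toward a particle's own best position
    "social_weight": 1.49,     # pull toward the swarm's overall best position
}

evolutionary_params = {
    "mutation_rate": 0.01,     # chance a weight value is randomly perturbed
    "selection_pressure": 0.7, # how strongly better solutions are favored
}
```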

Back-propagation is very elegant mathematically and tends to be the fastest training technique. However, back-propagation tends to be very sensitive to the values used for its free parameters. Some values for learning rate and momentum quickly find very good weight and bias values, but using slightly different values for learning rate and momentum may produce a situation where training does not converge to final weight and bias values at all.
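The role of the two free parameters can be seen in the standard weight-update rule. This is a generic sketch for a single weight; in a real implementation the gradient would come from back-propagating the output error through the network, but here it is just a placeholder value.

```python
# Sketch of a back-propagation weight update using a learning rate
# and a momentum term, for one weight. `gradient` is a placeholder;
# normally it is computed by propagating the error backward.

def update_weight(weight, gradient, prev_delta, learning_rate, momentum):
    """One update step: move against the gradient, plus momentum."""
    delta = -learning_rate * gradient + momentum * prev_delta
    return weight + delta, delta

w, prev = 0.5, 0.0
w, prev = update_weight(w, gradient=0.2, prev_delta=prev,
                        learning_rate=0.05, momentum=0.01)
print(w)  # slightly below 0.5: the weight moved against the gradient
```

A learning rate that is too large makes each delta overshoot, which is one way training can fail to converge; the momentum term reuses a fraction of the previous delta to smooth the updates.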
