Training a Neural Network using a Genetic Algorithm

By far the most common technique used to train a neural network is the back-propagation algorithm. Two other, less common training techniques are particle swarm optimization and genetic algorithms. I wrote an article in the March 2014 issue of Visual Studio Magazine, “Neural Network How-To: Code an Evolutionary Optimization Solution”, that demonstrates how to train a neural network using a genetic algorithm.

The title of the article uses the term Evolutionary Optimization rather than Genetic Algorithm. The two terms are more or less interchangeable, but I personally use the term genetic algorithm when I encode potential solutions as virtual chromosomes with some form of binary representation, and the term evolutionary optimization when I encode a virtual chromosome using real values.
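
To make the distinction concrete, here is a minimal Python sketch of the two encoding styles. This is my own illustration, not code from the article; the 8-bits-per-weight encoding and the [-10.0, +10.0] weight range are assumptions chosen just for the example.

import random

num_weights = 6  # hypothetical: a tiny network with six weights and biases

# Genetic algorithm style: each weight is encoded as bits. Here, an assumed
# 8 bits per weight, decoded to a real value in [-10.0, +10.0].
bit_chromosome = [random.randint(0, 1) for _ in range(num_weights * 8)]

def decode(bits):
    # Interpret 8 bits as an integer 0..255, then map to [-10.0, +10.0].
    n = int("".join(str(b) for b in bits), 2)
    return -10.0 + 20.0 * n / 255

weights = [decode(bit_chromosome[8 * i : 8 * (i + 1)]) for i in range(num_weights)]

# Evolutionary optimization style: the chromosome is simply the real-valued
# weight vector itself; no encode/decode step is needed.
real_chromosome = [random.uniform(-10.0, 10.0) for _ in range(num_weights)]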

Why consider using an evolutionary optimization algorithm rather than standard back-propagation? The short answer is that training a neural network is as much art as it is science. Back-propagation is relatively fast and relatively easy to code (although the underlying algorithm is quite deep), but it is highly sensitive to the values used for the learning rate and momentum free parameters. And sometimes back-propagation just doesn’t work, getting stuck in some sort of local minimum.
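
For reference, the standard back-propagation weight update shows exactly where those two free parameters enter. This is a generic one-weight sketch with illustrative values, not code from the article.

# Generic back-propagation weight update for a single weight (illustrative).
learning_rate = 0.05   # free parameter: too large can diverge, too small is slow
momentum = 0.01        # free parameter: fraction of the previous update reused

weight = 0.40          # hypothetical current weight value
prev_delta = 0.0       # update applied on the previous iteration
grad = 0.25            # hypothetical gradient of error with respect to the weight

delta = -learning_rate * grad + momentum * prev_delta
weight += delta        # move the weight downhill on the error surface
prev_delta = delta     # remembered so momentum can smooth the next update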

Evolutionary algorithms are relatively slow and somewhat more difficult to code (although, I think, conceptually simpler than back-propagation), but they are highly sensitive to the value used for the mutation rate free parameter. And sometimes genetic algorithms just don’t work, getting stuck in some sort of local minimum. In short, all neural network training algorithms have pros and cons.
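
To see where the mutation rate free parameter enters, here is a minimal, hypothetical sketch of the mutation step for a real-valued chromosome; the rate and magnitude values are assumptions for illustration only.

import random

mutation_rate = 0.02   # free parameter: probability that each gene is perturbed
mutation_mag = 0.50    # assumed maximum size of a single perturbation

def mutate(chromosome):
    # Copy the parent, then randomly nudge a few genes (weights).
    child = chromosome[:]
    for i in range(len(child)):
        if random.random() < mutation_rate:
            child[i] += mutation_mag * (2.0 * random.random() - 1.0)
    return child

child = mutate([0.10, -0.30, 0.70, 0.05, -1.20, 0.90])  # hypothetical weights

The tuning trade-off mirrors the learning rate in back-propagation: too low a mutation rate and the population stagnates, too high and the search degenerates into random wandering.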
