Training Neural Networks using Multi-Swarm Optimization

I wrote an article titled “Using Multi-Swarm Training on Your Neural Networks” in the February 2015 issue of Visual Studio Magazine. See the Visual Studio Magazine web site for the full article.

You can think of a neural network as a complicated mathematical equation with numeric constants, called weights and biases, whose values must be determined so that the network can make accurate predictions. Determining the values of the weights and biases is called training the network.
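
To make the idea concrete, here is a minimal Python sketch (not the article’s demo code) of a tiny 2-input, 2-hidden-node, 1-output network. The architecture, the tanh activation, and the example numbers are arbitrary choices for illustration; the point is that the weights and biases are just numbers plugged into a fixed equation.

```python
import math

def compute_output(x, weights, biases):
    # A tiny 2-2-1 network. The structure of the "equation" is fixed;
    # the weights and biases are the numeric constants that training must find.
    h0 = math.tanh(x[0] * weights[0] + x[1] * weights[1] + biases[0])
    h1 = math.tanh(x[0] * weights[2] + x[1] * weights[3] + biases[1])
    return h0 * weights[4] + h1 * weights[5] + biases[2]

# Different weights and biases give different predictions for the same input.
print(compute_output([1.0, 2.0], [0.1, -0.2, 0.3, 0.4, -0.5, 0.6], [0.01, 0.02, 0.03]))
```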

Training uses a set of data with known input and output values. The training process tries different values for the weights and biases, searching for a combination so that the neural network’s computed output values are very close to the known, correct output values in the training data.
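
The notion of “very close” is usually quantified with an error function such as mean squared error. The sketch below is one generic way to measure it; the function and parameter names are illustrative, not from the article.

```python
def mean_squared_error(train_data, predict, weights, biases):
    # train_data is a list of (input_vector, known_output) pairs.
    # Training is a search for weights and biases that make this value small.
    total = 0.0
    for x, target in train_data:
        computed = predict(x, weights, biases)
        total += (computed - target) ** 2
    return total / len(train_data)
```

With the compute_output sketch above passed as predict, a training algorithm repeatedly evaluates this error for candidate weights and biases and keeps the best combination found.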

There are several algorithms that can be used to train a neural network. The most common is a calculus-based technique called back-propagation. An alternative is particle swarm optimization (PSO), which loosely models the coordinated behavior of groups such as schools of fish and flocks of birds.
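
In PSO, each particle is a candidate set of weights and biases, and each particle moves through the search space based on its own best position so far and the best position found by the swarm. Here is a minimal sketch of the standard update for one particle; the inertia and acceleration constants shown are common textbook choices, not necessarily those used in the article.

```python
import random

def update_particle(position, velocity, particle_best, swarm_best,
                    w=0.729, c1=1.49445, c2=1.49445):
    # New velocity blends current motion (inertia), attraction to the
    # particle's own best position, and attraction to the swarm's best.
    new_velocity = []
    new_position = []
    for i in range(len(position)):
        r1, r2 = random.random(), random.random()
        v = (w * velocity[i]
             + c1 * r1 * (particle_best[i] - position[i])
             + c2 * r2 * (swarm_best[i] - position[i]))
        new_velocity.append(v)
        new_position.append(position[i] + v)
    return new_position, new_velocity
```

The inertia term keeps a particle moving in its current direction, the two attraction terms pull it toward promising regions, and the random factors keep the particles from all converging along the same path.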

Multi-swarm optimization (MSO) extends PSO by using several swarms of particles instead of just a single swarm. Using multiple swarms helps prevent the training process from getting stuck at a good, but not optimal, solution for the values of the weights and biases.
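
A rough sketch of how the multi-swarm idea can be organized is shown below, assuming the same particle update as above plus an extra attraction toward the best position found by any swarm. The number of swarms, swarm size, and coefficient values are illustrative defaults, not the article’s.

```python
import random

def train_multi_swarm(error_fn, dim, num_swarms=3, swarm_size=5, max_epochs=500,
                      w=0.729, c1=1.49445, c2=1.49445, c3=0.3645):
    # Each particle's position is a candidate vector of weights and biases;
    # error_fn(position) should return the training error for that candidate.
    swarms = [[{"pos": [random.uniform(-1.0, 1.0) for _ in range(dim)],
                "vel": [0.0] * dim} for _ in range(swarm_size)]
              for _ in range(num_swarms)]
    for swarm in swarms:
        for p in swarm:
            p["best_pos"] = p["pos"][:]
            p["best_err"] = error_fn(p["pos"])

    best = min((p for s in swarms for p in s), key=lambda p: p["best_err"])
    global_pos, global_err = best["best_pos"][:], best["best_err"]

    for _ in range(max_epochs):
        for swarm in swarms:
            # Best position found by this particular swarm so far.
            swarm_best = min(swarm, key=lambda p: p["best_err"])["best_pos"]
            for p in swarm:
                for i in range(dim):
                    r1, r2, r3 = random.random(), random.random(), random.random()
                    p["vel"][i] = (w * p["vel"][i]
                                   + c1 * r1 * (p["best_pos"][i] - p["pos"][i])
                                   + c2 * r2 * (swarm_best[i] - p["pos"][i])
                                   + c3 * r3 * (global_pos[i] - p["pos"][i]))
                    p["pos"][i] += p["vel"][i]
                err = error_fn(p["pos"])
                if err < p["best_err"]:
                    p["best_pos"], p["best_err"] = p["pos"][:], err
                if err < global_err:
                    global_pos, global_err = p["pos"][:], err
    return global_pos, global_err
```

An error_fn could be built from the earlier sketches, for example by splitting a candidate vector into weights and biases and passing them to mean_squared_error. Full MSO implementations often add refinements such as occasionally re-initializing particles to maintain diversity; the skeleton above shows only the core multi-swarm structure.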
