Multi-Swarm Optimization for Neural Networks Using C#

I wrote an article titled “Multi-Swarm Optimization for Neural Networks Using C#” in the January 2015 issue of Visual Studio Magazine. See the Visual Studio Magazine website for the full article.

Many machine learning (ML) systems require code that minimizes error. In the case of neural networks, training a network is the process of finding a set of values for the network’s weights and biases so that the error between the known output values in some training data and the computed output values is minimized. Algorithms that minimize error are also called optimization algorithms.
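The quantity being minimized during training is usually mean squared error between the target outputs in the training data and the outputs the network actually computes. A minimal sketch of such an error measure is below; the method name and the jagged-array layout are illustrative, not taken from the article.

```csharp
using System;

public static class ErrorDemo
{
    // Mean squared error between target outputs (from the training data)
    // and the outputs computed by the network. Training searches for
    // weights and biases that make this value as small as possible.
    public static double MeanSquaredError(double[][] targets, double[][] computed)
    {
        double sum = 0.0;
        int count = 0;
        for (int i = 0; i < targets.Length; ++i)
        {
            for (int j = 0; j < targets[i].Length; ++j)
            {
                double diff = targets[i][j] - computed[i][j];
                sum += diff * diff;
                ++count;
            }
        }
        return sum / count;
    }
}
```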

There are roughly a dozen different optimization algorithms commonly used in machine learning. For example, back-propagation is often used to train simple neural networks, and something called L-BFGS is often used to train logistic regression classifiers. Multi-swarm optimization (MSO) is a variation of Particle Swarm Optimization (PSO).

In PSO there is a single swarm of particles, where each particle moves based on its current velocity (speed and direction), the best position that particle has found so far, and the best position found by any particle in the swarm. MSO extends PSO by maintaining several swarms of particles, and a particle's movement is typically also influenced by the best position found by any particle in any swarm.
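As a rough sketch of that movement rule, the velocity and position update for one particle might look like the following. The constants w, c1, c2, and c3, their values, and the extra "best over all swarms" term follow the usual MSO formulation; they are assumptions for illustration, not necessarily the exact code from the article.

```csharp
using System;

public class Particle
{
    public double[] Position;
    public double[] Velocity;
    public double[] BestPosition; // best position this particle has found so far
}

public static class SwarmDemo
{
    // Illustrative update of one particle's velocity and position.
    // swarmBest is the best position found in this particle's own swarm;
    // globalBest is the best position found by any particle in any swarm.
    public static void UpdateParticle(Particle p, double[] swarmBest,
        double[] globalBest, Random rnd,
        double w = 0.729,    // inertia weight
        double c1 = 1.49445, // attraction to the particle's own best
        double c2 = 1.49445, // attraction to the swarm's best
        double c3 = 0.3645)  // attraction to the best over all swarms
    {
        for (int i = 0; i < p.Position.Length; ++i)
        {
            double r1 = rnd.NextDouble();
            double r2 = rnd.NextDouble();
            double r3 = rnd.NextDouble();

            p.Velocity[i] = (w * p.Velocity[i]) +
                (c1 * r1 * (p.BestPosition[i] - p.Position[i])) + // cognitive term
                (c2 * r2 * (swarmBest[i] - p.Position[i])) +      // social term (own swarm)
                (c3 * r3 * (globalBest[i] - p.Position[i]));      // global term (all swarms)

            p.Position[i] += p.Velocity[i];
        }
    }
}
```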
