Fireworks Algorithm Optimization

I wrote an article titled “Fireworks Algorithm Optimization” in the December 2014 issue of MSDN Magazine.

Many types of machine learning (ML) systems involve some sort of mathematical equation that makes predictions. Examples include logistic regression classification and neural network classification. These systems have a set of numeric values called weights. Training the system is the process of finding the values of the weights so that the error (between computed output values and known output values in a set of training data) is minimized. In other words, training is a numerical optimization problem.
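To make that concrete, here is a minimal C# sketch of an error function over a weight vector, using a tiny one-input logistic regression model. The training data, the model, and the Error method are made up for illustration; they are not from the article. Any optimizer, whether gradient descent, particle swarm, or FAO, is trying to find the weights that make this value as small as possible.

```csharp
using System;

class ErrorDemo
{
  // Made-up training data: each row is (x, known output).
  static double[][] trainData = new double[][]
  {
    new double[] { 1.0, 0.0 },
    new double[] { 2.0, 0.0 },
    new double[] { 4.0, 1.0 },
    new double[] { 5.0, 1.0 }
  };

  // Mean squared error between computed and known outputs for a given
  // weight vector: weights[0] is the input weight, weights[1] is the bias.
  static double Error(double[] weights)
  {
    double w = weights[0];
    double b = weights[1];
    double sum = 0.0;
    foreach (double[] item in trainData)
    {
      double computed = 1.0 / (1.0 + Math.Exp(-(w * item[0] + b)));  // logistic output
      double diff = computed - item[1];
      sum += diff * diff;
    }
    return sum / trainData.Length;
  }

  static void Main()
  {
    Console.WriteLine(Error(new double[] { 0.0, 0.0 }));   // poor weights -> larger error
    Console.WriteLine(Error(new double[] { 2.0, -6.0 }));  // better weights -> smaller error
  }
}
```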

There are at least a dozen major numerical optimization algorithms. Examples include simple gradient descent, back-propagation, and particle swarm optimization. Fireworks algorithm optimization (FAO) is a relatively new idea, proposed in 2010. The algorithm gets its name because when the FAO search process is displayed on a graph, the image somewhat resembles a set of exploding fireworks.
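To give a rough feel for the mechanics, here is a simplified C# sketch of the fireworks idea applied to minimizing an error function. This is not the exact algorithm from the article or the 2010 paper — full FAO computes each firework’s spark count and explosion amplitude from its fitness relative to the other fireworks, adds Gaussian sparks, and uses a distance-based selection step — and the Minimize method, its parameters, and the toy sphere-function test are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FireworksSketch
{
  static Random rnd = new Random(0);

  // Minimize an error function over a dim-dimensional weight vector
  // using a simplified fireworks-style search.
  static double[] Minimize(Func<double[], double> error, int dim,
    int numFireworks, int sparksPerFirework, int maxEpochs)
  {
    // Start with fireworks at random positions in [-10, +10] per dimension.
    List<double[]> fireworks = new List<double[]>();
    for (int i = 0; i < numFireworks; ++i)
      fireworks.Add(RandomPosition(dim));

    for (int epoch = 0; epoch < maxEpochs; ++epoch)
    {
      List<double[]> candidates = new List<double[]>(fireworks);
      foreach (double[] fw in fireworks)
      {
        // Better (lower-error) fireworks explode with a smaller amplitude,
        // so good positions are searched locally; bad ones explore widely.
        double amplitude = 0.1 + error(fw);
        for (int s = 0; s < sparksPerFirework; ++s)
        {
          double[] spark = (double[])fw.Clone();
          int d = rnd.Next(dim);  // perturb one randomly chosen dimension
          spark[d] += (rnd.NextDouble() * 2.0 - 1.0) * amplitude;
          candidates.Add(spark);
        }
      }
      // The best candidate positions become the next generation's fireworks.
      fireworks = candidates.OrderBy(error).Take(numFireworks).ToList();
    }
    return fireworks.OrderBy(error).First();
  }

  static double[] RandomPosition(int dim)
  {
    double[] p = new double[dim];
    for (int i = 0; i < dim; ++i)
      p[i] = rnd.NextDouble() * 20.0 - 10.0;
    return p;
  }

  static void Main()
  {
    // Toy test: minimize the sphere function; the result should be near (0, 0, 0).
    double[] best = Minimize(w => w.Sum(x => x * x), dim: 3,
      numFireworks: 5, sparksPerFirework: 10, maxEpochs: 200);
    Console.WriteLine(string.Join(" ", best.Select(x => x.ToString("F4"))));
  }
}
```

When plotted over successive epochs, the spark positions fan out around each firework and then contract toward the better locations, which is where the exploding-fireworks image comes from.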

In the article, I present a demonstration coded using the C# language. I found FAO quite interesting. In my opinion, there’s not enough research evidence to state exactly how effective FAO is or isn’t.
