Particle swarm optimization (PSO) is a non-Calculus (gradient-free) optimization technique. It loosely simulates the collective behavior of a swarm, such as a flock of birds or a school of fish.

The idea is that you have several particles, each of which represents a possible solution to the optimization problem. In each iteration of PSO, each particle moves based on three things: the particle’s current velocity (speed and direction), the best position the particle has seen so far, and the best position seen by any particle in the swarm.
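Those three influences combine into the standard PSO velocity update. A minimal sketch, using typical coefficient values (the inertia and acceleration constants below are common defaults, not values taken from this post):

```python
import numpy as np

# Common PSO coefficients: inertia weight, cognitive weight, social weight.
w, c1, c2 = 0.729, 1.49445, 1.49445

def update_particle(pos, vel, best_pos, swarm_best_pos, rng):
    r1 = rng.random(pos.shape)  # random scaling for the cognitive term
    r2 = rng.random(pos.shape)  # random scaling for the social term
    new_vel = (w * vel
               + c1 * r1 * (best_pos - pos)         # pull toward this particle's best
               + c2 * r2 * (swarm_best_pos - pos))  # pull toward the swarm's best
    return pos + new_vel, new_vel
```

Note that if a particle is already sitting at both its own best and the swarm best, the cognitive and social terms vanish and the particle just coasts on its damped inertia.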

*Rastrigin’s Function in Two Dimensions*

In the image below, the goal is to solve a problem whose solution is at (0.0, 0.0). I show just two particles (in practice there would be many). The first particle is green and starts at (-75.0, 50.0). The second particle starts at (95.0, 52.0). On the first iteration, both particles move a bit toward each other. Over time, the particles tend to home in on the correct answer.

I decided to code up PSO using Python with NumPy. My demo problem is to minimize Rastrigin’s function in three dimensions, which has its solution at (0, 0, 0).
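A sketch of what such a demo might look like (the parameter values, function names, and search range below are my own assumptions, not necessarily the code from the post):

```python
import numpy as np

def rastrigin(x):
    # Rastrigin's function; global minimum of 0.0 at the origin.
    return 10.0 * x.size + np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x))

def pso(dim=3, n_particles=30, n_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.729, 1.49445, 1.49445           # typical PSO coefficients
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))  # usual Rastrigin range
    vel = rng.uniform(-1.0, 1.0, (n_particles, dim))
    best_pos = pos.copy()                          # each particle's best position
    best_err = np.array([rastrigin(p) for p in pos])
    g = best_pos[np.argmin(best_err)].copy()       # swarm-best position

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        err = np.array([rastrigin(p) for p in pos])
        improved = err < best_err                  # update personal bests
        best_pos[improved] = pos[improved]
        best_err[improved] = err[improved]
        g = best_pos[np.argmin(best_err)].copy()   # update swarm best
    return g, best_err.min()

best, err = pso()
```

With a couple dozen particles and a few hundred iterations, the swarm usually lands at or very near the origin, though Rastrigin’s many local minima mean a run can occasionally stall at a nearby integer-coordinate dip.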

Moral of the story: PSO is not used very much, mostly because it’s much slower than algorithms such as stochastic gradient descent. But I suspect that as computers become increasingly powerful, PSO will emerge as a valuable tool for training deep neural networks.


Is a particle swarm then essentially a successor of a genetic algorithm?

And can they be combined?

A genetic algorithm is even slower and more complex than PSO. PSO gives better results, so we can say it is a successor.