## Particle Swarm Optimization using Python

Particle swarm optimization (PSO) is a technique to solve a numerical optimization problem. A numerical optimization problem is one where the goal is to minimize some error term (or, more rarely, maximize some value). For example, suppose you have some math equation that predicts the score a football team will make in an upcoming game, based on the team’s current winning percentage and other predictor variables. You must find values for the constants that make up the prediction equation. If you have historical data with known predictor values and known actual scores, numerical optimization finds the values for the constants so that the error (actual points scored compared to predicted points scored) is minimized.
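To make the idea concrete, here is a minimal sketch of such an error term. The data, the linear prediction equation, and the helper name `sum_squared_error` are all hypothetical, invented just for illustration:

```python
# Hypothetical historical data: (winning_pct, actual_points) pairs
history = [(0.50, 21.0), (0.75, 28.0), (0.60, 24.0)]

def sum_squared_error(a, b):
    """Error of the (made-up) prediction equation:
    predicted_points = a * winning_pct + b."""
    err = 0.0
    for win_pct, actual in history:
        predicted = a * win_pct + b
        err += (predicted - actual) ** 2
    return err

# Numerical optimization searches for the (a, b) that minimizes this error.
print(sum_squared_error(28.0, 7.0))  # one candidate pair; smaller is better
```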

In many problem scenarios, there are techniques based on classical calculus or matrix algebra you can use for numerical optimization. But for some problems, these standard techniques don’t work well and that’s where PSO comes in.

In PSO, you have a set of “particles”. Each particle represents a possible solution. Particles move towards a new location/solution based on 1.) their current direction, 2.) the best location/solution they’ve encountered so far, and 3.) the best location/solution that’s been found by any particle in the swarm.
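Those three influences can be sketched for a single one-dimensional particle. The weights are the commonly used values (they also appear in the demo below); the state values are made up:

```python
import random

# One-dimensional sketch of the two PSO update rules
w, c1, c2 = 0.729, 1.49445, 1.49445  # inertia, cognitive, social weights

velocity, position = 0.5, 3.0        # made-up current state of one particle
best_part_pos = 2.0                  # best position this particle has seen
best_swarm_pos = 1.0                 # best position any particle has seen

r1, r2 = random.random(), random.random()  # fresh randomness each update
velocity = (w * velocity +                        # 1.) current direction
            c1 * r1 * (best_part_pos - position) +   # 2.) particle best
            c2 * r2 * (best_swarm_pos - position))   # 3.) swarm best
position += velocity  # the particle moves to its new location/solution
```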

I coded a PSO demo program that solves a standard, very difficult, benchmark problem called Rastrigin’s function.
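Rastrigin’s function in d dimensions is f(x) = 10d + Σ(xi² − 10 cos(2πxi)). It has a known global minimum of 0.0 at the origin, surrounded by a huge number of local minima, which is what makes it hard. A quick standalone sanity check (independent of the demo code):

```python
import math

def rastrigin(position):
    # f(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10); global min 0.0 at the origin
    return sum((x * x) - (10 * math.cos(2 * math.pi * x)) + 10
               for x in position)

print(rastrigin([0.0, 0.0, 0.0]))  # 0.0 at the global minimum
print(rastrigin([1.0, 1.0, 1.0]))  # about 3.0; integer points sit near local minima
```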

```python
# particleswarm.py
# python 3.4.3
# demo of particle swarm optimization (PSO)
# solves Rastrigin's function

import random
import math    # cos() for Rastrigin
import copy    # array-copying convenience
import sys     # max float

# ------------------------------------

def show_vector(vector):
  for i in range(len(vector)):
    if i % 8 == 0: # 8 columns
      print("\n", end="")
    if vector[i] >= 0.0:
      print(' ', end="")
    print("%.4f" % vector[i], end="") # 4 decimals
    print(" ", end="")
  print("\n")

def error(position):
  err = 0.0
  for i in range(len(position)):
    xi = position[i]
    err += (xi * xi) - (10 * math.cos(2 * math.pi * xi)) + 10
  return err

# ------------------------------------

class Particle:
  def __init__(self, dim, minx, maxx, seed):
    self.rnd = random.Random(seed)
    self.position = [0.0 for i in range(dim)]
    self.velocity = [0.0 for i in range(dim)]
    self.best_part_pos = [0.0 for i in range(dim)]

    for i in range(dim):
      self.position[i] = ((maxx - minx) *
        self.rnd.random() + minx)
      self.velocity[i] = ((maxx - minx) *
        self.rnd.random() + minx)

    self.error = error(self.position) # curr error
    self.best_part_pos = copy.copy(self.position)
    self.best_part_err = self.error # best error

def Solve(max_epochs, n, dim, minx, maxx):
  rnd = random.Random(0)

  # create n random particles
  swarm = [Particle(dim, minx, maxx, i) for i in range(n)]

  best_swarm_pos = [0.0 for i in range(dim)] # not necess.
  best_swarm_err = sys.float_info.max # swarm best
  for i in range(n): # check each particle
    if swarm[i].error < best_swarm_err:
      best_swarm_err = swarm[i].error
      best_swarm_pos = copy.copy(swarm[i].position)

  epoch = 0
  w = 0.729    # inertia
  c1 = 1.49445 # cognitive (particle)
  c2 = 1.49445 # social (swarm)

  while epoch < max_epochs:

    if epoch % 10 == 0 and epoch > 1:
      print("Epoch = " + str(epoch) +
        " best error = %.3f" % best_swarm_err)

    for i in range(n): # process each particle

      # compute new velocity of curr particle
      for k in range(dim):
        r1 = rnd.random()    # randomizations
        r2 = rnd.random()

        swarm[i].velocity[k] = ( (w * swarm[i].velocity[k]) +
          (c1 * r1 * (swarm[i].best_part_pos[k] -
          swarm[i].position[k])) +
          (c2 * r2 * (best_swarm_pos[k] -
          swarm[i].position[k])) )

        # clamp velocity to the search range
        if swarm[i].velocity[k] < minx:
          swarm[i].velocity[k] = minx
        elif swarm[i].velocity[k] > maxx:
          swarm[i].velocity[k] = maxx

      # compute new position using new velocity
      for k in range(dim):
        swarm[i].position[k] += swarm[i].velocity[k]

      # compute error of new position
      swarm[i].error = error(swarm[i].position)

      # is new position a new best for the particle?
      if swarm[i].error < swarm[i].best_part_err:
        swarm[i].best_part_err = swarm[i].error
        swarm[i].best_part_pos = copy.copy(swarm[i].position)

      # is new position a new best overall?
      if swarm[i].error < best_swarm_err:
        best_swarm_err = swarm[i].error
        best_swarm_pos = copy.copy(swarm[i].position)

    # for-each particle
    epoch += 1
  # while
  return best_swarm_pos
# end Solve

print("\nBegin particle swarm optimization using Python demo\n")
dim = 3
print("Goal is to solve Rastrigin's function in " +
  str(dim) + " variables")
print("Function has known min = 0.0 at (", end="")
for i in range(dim-1):
  print("0, ", end="")
print("0)")

num_particles = 50
max_epochs = 100

print("Setting num_particles = " + str(num_particles))
print("Setting max_epochs    = " + str(max_epochs))
print("\nStarting PSO algorithm\n")

best_position = Solve(max_epochs, num_particles,
  dim, -10.0, 10.0)

print("\nPSO completed\n")
print("\nBest solution found:")
show_vector(best_position)
err = error(best_position)
print("Error of best solution = %.6f" % err)

print("\nEnd particle swarm demo\n")
```

PSO isn’t guaranteed to find an optimal solution to a problem, but in practice it often works remarkably well.
