Differential Evolution Optimization Example Using Python

An Evolutionary Algorithm (EA) is one of many algorithms that are loosely based on the biological ideas of genetic crossover and mutation. Differential Evolution (DE) is a specific type of EA that imposes a bit more structure on the crossover and mutation steps. I'm very familiar with evolutionary algorithms, and I'd seen Differential Evolution mentioned several times in research papers, but I had never investigated DE closely — for me that means implementing an example in code.

In high-level pseudo-code, one form of a basic Evolutionary Algorithm is:

create a population of possible solutions
loop until done
  pick two good solutions
  combine them using crossover to create child
  mutate child slightly
  if child is good then
    replace a bad solution with child
return best solution found

Differential Evolution differs from a general Evolutionary Algorithm in how the crossover and mutation operations are defined. In high-level pseudo-code, one basic form of DE is:

create a population of possible solutions
loop until done
  for-each possible solution
    pick three random solutions
    combine the three to create a mutation
    combine curr solution with mutation = child
    if child is better than curr solution then
      replace current solution with child
return best solution found

Just for fun I coded up a short demo of Differential Evolution. The goal was to minimize the Rastrigin function with dim = 3. The Rastrigin function has many peaks and valleys, which makes it difficult for optimization algorithms. The function has a known minimum value of 0.0 at (0, 0, 0).
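As a quick sanity check on the target, here is a minimal NumPy sketch of the Rastrigin function evaluated at its known minimum and at a nearby point (the `rastrigin` helper name here is mine, not from the demo code below):

```python
import numpy as np

def rastrigin(x):
  # f(x) = Sum[xj^2 - 10*cos(2*pi*xj)] + 10*n
  x = np.asarray(x, dtype=np.float64)
  return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * len(x))

print(rastrigin([0.0, 0.0, 0.0]))  # 0.0 -- the known global minimum
print(rastrigin([1.0, 1.0, 1.0]))  # roughly 3.0 at a nearby point
```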

The implementation was easy for me because I’ve implemented similar algorithms many times.

Even though Differential Evolution algorithms have somewhat standard forms, there are still several variations of DE. For example, when picking three existing solutions to use for creating the mutation, instead of picking three at random you can pick the best solution plus two random solutions.
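That "best plus two random" variation (sometimes written DE/best/1) can be sketched like this; the `mutate_best_1` helper name and the placeholder error values are my own inventions, not part of the demo code below:

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_best_1(population, errors, F=0.5):
  # DE/best/1: mutation = best + F * (rand1 - rand2)
  best = population[np.argmin(errors)]
  r1, r2 = rng.choice(len(population), size=2, replace=False)
  return best + F * (population[r1] - population[r2])

pop = rng.uniform(-5.0, 5.0, (10, 3))
errs = np.array([np.sum(x**2) for x in pop])  # placeholder errors for the sketch
m = mutate_best_1(pop, errs)
print(m.shape)  # (3,)
```

Using the best solution as the base vector tends to speed convergence at the cost of a greater chance of getting trapped in a local minimum.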

Good fun.

Evolution of women’s and men’s hairstyles, 1950s through the 1990s.

Code below. Not thoroughly tested.

# diff_evo_demo.py
# use differential evolution to solve
# Rastrigin function for dim = 3
# Python 3.7.6

import numpy as np

def rastrigin_error(x, dim):
  # f(X) = Sum[xj^2 - 10*cos(2*pi*xj)] + 10n
  z = 0.0
  for j in range(dim):
    z += x[j]**2 - (10.0 * np.cos(2*np.pi*x[j]))
  z += dim * 10.0
  # return squared difference from true min of 0.0
  err = (z - 0.0)**2
  return err

def main():
  print("\nBegin Differential Evolution optimization demo ")
  print("Goal is to minimize Rastrigin function with dim = 3 ")
  print("Function has known min = 0.0 at (0, 0, 0) ")
  np.set_printoptions(precision=5, suppress=True, sign=" ")

  dim = 3
  pop_size = 10
  F = 0.5   # mutation
  cr = 0.7  # crossover
  max_gen = 100
  print("\nSetting pop_size = %d, F = %0.1f, cr = %0.2f, \
max_gen = %d " % (pop_size, F, cr, max_gen))

  # create array-of-arrays population of random solutions
  print("\nCreating random solutions and computing their error ")
  population = np.random.uniform(-5.0, 5.0, (pop_size,dim))
  popln_errors = np.zeros(pop_size)
  for i in range(pop_size):
    popln_errors[i] = rastrigin_error(population[i], dim)
  # find initial best solution and its error
  # best_idx = np.argmin(popln_errors)
  # best_error = popln_errors[best_idx]

  # main processing loop
  for g in range(max_gen):
    for i in range(pop_size):  # each possible soln in pop
      # pick 3 other possible solns, all different from i
      indices = np.arange(pop_size)  # [0, 1, 2, . . ]
      np.random.shuffle(indices)
      for j in range(3):
        if indices[j] == i: indices[j] = indices[pop_size-1]

      # use the 3 others to create a mutation
      a = indices[0]; b = indices[1]; c = indices[2]
      mutation = population[a] + F * (population[b] \
- population[c])
      for k in range(dim):
        if mutation[k] < -5.0: mutation[k] = -5.0
        elif mutation[k] > 5.0: mutation[k] = 5.0

      # use the mutation and curr item to create a
      # new solution item
      new_soln = np.zeros(dim)
      for k in range(dim):
        p = np.random.random()   # between 0.0 and 1.0
        if p < cr:  # usually
          new_soln[k] = mutation[k]
        else:
          new_soln[k] = population[i][k]  # use current item

      # replace curr soln with new soln if new soln is better
      new_soln_err = rastrigin_error(new_soln, dim)
      if new_soln_err < popln_errors[i]:
        population[i] = new_soln
        popln_errors[i] = new_soln_err

    # after all popln items have been processed, 
    # find curr best soln
    best_idx = np.argmin(popln_errors)
    best_error = popln_errors[best_idx]
    if g % 10 == 0:
      print("Generation = %4d best error = %10.4f \
  best_soln = " % (g, best_error), end="")
      print(population[best_idx])

  # show final result
  best_idx = np.argmin(popln_errors)
  best_error = popln_errors[best_idx]
  print("\nFinal best error = %0.4f  best_soln = " % \
best_error, end="")
  print(population[best_idx])
  print("\nEnd demo ")

if __name__ == "__main__":
  main()