Many machine learning algorithms require numerical optimization. For example, when training a neural network you must find values for the weights and biases so that computed outputs closely match the target outputs in your training data. Some numerical optimization problems can be solved exactly using calculus techniques, but in many cases you must estimate the best solution. Techniques for estimating the solution to difficult numerical optimization problems include simple gradient descent, particle swarm optimization, and genetic algorithms.

When experimenting with numerical optimization algorithms, it’s useful to have a dummy function with a known solution. One such function I sometimes use is f(x,y) = x * exp( -(x^2 + y^2) ). The function doesn’t have a name that I’m aware of, so I call it the double-dip function. It has a global minimum of f = -0.42888194248 at x = -0.707107 (the negative square root of 2, divided by 2) and y = 0.0.
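To show how the double-dip function can serve as a test problem, here is a minimal gradient descent sketch in Python. The starting point and learning rate are arbitrary choices of mine, and the partial derivatives were worked out by hand from the function definition; this is just one plausible demo, not a recipe.

```python
import math

def f(x, y):
    # the double-dip function: f(x,y) = x * exp(-(x^2 + y^2))
    return x * math.exp(-(x * x + y * y))

def grad(x, y):
    # analytic partial derivatives of f
    e = math.exp(-(x * x + y * y))
    dfdx = e * (1.0 - 2.0 * x * x)
    dfdy = -2.0 * x * y * e
    return dfdx, dfdy

# plain gradient descent; start point and learning rate are arbitrary
x, y = -0.2, 0.3
lr = 0.1
for _ in range(2000):
    gx, gy = grad(x, y)
    x -= lr * gx
    y -= lr * gy

# x, y, f(x,y) end up close to -0.707107, 0.0, -0.42888194248
print(x, y, f(x, y))
```

Because the true minimum is known, a quick check of the final (x, y, f) values against -0.707107, 0.0, and -0.42888194248 tells you immediately whether the optimizer is working.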

I used this double-dip function to demonstrate particle swarm optimization in an article I wrote for the November 2013 issue of Visual Studio Magazine.

To graph the function I use a program called Scilab, which is a free program similar to the very pricey MATLAB. Here are the Scilab commands to plot the double-dip function:

-->[x,y]=meshgrid(-2:.15:2,-2:.15:2);
-->z=x .*exp(-(x.^2+y.^2));
-->g=scf();
-->g.color_map=jetcolormap(64);
-->surf(x,y,z);

The first command sets up matrices of x-y values from -2 to +2, spaced 0.15 apart. The second command computes the function values over that grid. Notice the use of the .* and .^ operators instead of * and ^; these perform element-wise (rather than matrix) multiplication and exponentiation. Getting these operators right is tricky and, for me at least, often boils down to trial and error. The third command creates a graphics window, the fourth command sets the graph's coloring scheme, and the final command draws the surface plot.
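For comparison, here is a rough NumPy equivalent of those Scilab commands. In NumPy the * and ** operators are element-wise on arrays by default, so no special operators are needed; the commented-out matplotlib calls are one plausible way to render the surface, not the only one.

```python
import numpy as np

# grid of x-y values from -2 to +2, spaced 0.15 apart
# (the 2.01 upper bound makes arange include the same points as Scilab's -2:.15:2)
pts = np.arange(-2, 2.01, 0.15)
x, y = np.meshgrid(pts, pts)

# NumPy's * and ** are element-wise on arrays, mirroring Scilab's .* and .^
z = x * np.exp(-(x ** 2 + y ** 2))

# to draw the surface (optional, requires matplotlib):
# import matplotlib.pyplot as plt
# ax = plt.figure().add_subplot(projection="3d")
# ax.plot_surface(x, y, z, cmap="jet")
# plt.show()
```

Even without plotting, the z array is a handy sanity check: its smallest grid value should be close to, but slightly above, the true minimum of -0.42888, since the 0.15 spacing does not land exactly on x = -0.707107, y = 0.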
