There are many interesting numerical optimization algorithms inspired by biology and physics. The Big Bang Big Crunch (BBBC) algorithm is one of them. At a very high level, the algorithm starts by generating a set of “masses”, where each mass corresponds to a potential solution to the optimization problem. This is the Big Bang phase. Next, the masses converge to a single mass, which is the Big Crunch phase.
The process is repeated many times, and over time, the converged mass approximates the best solution to the optimization problem. Of course, the details are a bit tricky.
I’ve never implemented BBBC in code, but for me, that’s the only way I’ll satisfy myself that I completely understand the algorithm. So, I’ll code up BBBC someday when I have a free hour or two. A good explanation of BBBC is in “A New Damage Detection Method: Big Bang-Big Crunch (BB-BC) Algorithm”, Tabrizian, Afshari, et al., 2013.
Bio-inspired and physics-inspired algorithms like BBBC are, in my opinion, not very practical for solving problems like deep neural network training. Calculus-based techniques, notably gradient descent, dominate optimization. But as computing power increases, the practicality of non-Calculus techniques increases too.
In pseudo-code, BBBC looks like:

initialize N random masses (the Big Bang)
loop
  compute center of mass = sum((1/fi) * xi) / sum(1/fi) (the Big Crunch)
  compute new masses scattered around the center of mass
end-loop
return center of mass
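The pseudo-code above can be sketched in Python. This is a minimal sketch, not a reference implementation: the sphere objective, the 1/(fi + epsilon) weighting guard, and the shrinking Gaussian scatter schedule are my assumptions about reasonable choices, not details fixed by the algorithm description.

```python
import random

def bbbc_minimize(f, dim, n_masses=50, n_iters=100,
                  bounds=(-10.0, 10.0), seed=0):
    """Big Bang Big Crunch sketch: minimize f over a dim-dimensional box."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Big Bang: scatter N random masses (candidate solutions)
    masses = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_masses)]
    center = masses[0]
    for it in range(1, n_iters + 1):
        # Big Crunch: fitness-weighted center of mass, weight = 1/fi
        fits = [f(x) for x in masses]
        weights = [1.0 / (fi + 1e-12) for fi in fits]  # guard: fi could be 0
        total = sum(weights)
        center = [sum(w * x[d] for w, x in zip(weights, masses)) / total
                  for d in range(dim)]
        # Next Big Bang: new masses around the center; the spread
        # shrinks each iteration (an assumed 1/it schedule) so the
        # population gradually converges
        masses = [[c + rng.gauss(0, 1) * (hi - lo) / (2 * it) for c in center]
                  for _ in range(n_masses)]
    return center

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)  # assumed demo objective, min at 0
    best = bbbc_minimize(sphere, dim=3)
    print([round(v, 4) for v in best])
```

With the sphere objective, the returned center should land near the origin after 100 iterations, though because the scatter is random the exact values vary with the seed.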