I contributed to an article titled “Researchers Explore Machine Learning Hyperparameter Tuning Using Evolutionary Optimization” in the November 2022 edition of the Pure AI web site. See https://pureai.com/articles/2022/11/01/evolutionary-optimization.aspx.
When data scientists create a machine learning prediction model, there are typically about 10 to 20 hyperparameters — variables whose values must be determined by trial and error, guided by experience and intuition.
Finding good values for model hyperparameters is called hyperparameter tuning. For simple machine learning models, the most common tuning approach used by data scientists is to manually try different combinations of hyperparameter values.
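The simplest programmatic version of try-every-combination tuning is an exhaustive grid search. Here is a minimal sketch; the two hyperparameters, their candidate values, and the evaluate() scoring function are invented for illustration — a real evaluate() would train a model and return a validation error.

```python
from itertools import product

# Hypothetical candidate values for two hyperparameters.
learn_rates = [0.001, 0.010, 0.100]
batch_sizes = [16, 32, 64]

def evaluate(lr, bs):
    # Placeholder score (lower is better); a real version would train
    # and validate a model using these hyperparameter values.
    return abs(lr - 0.010) + abs(bs - 32) / 100.0

# Try every combination and keep the best-scoring one.
best = min(product(learn_rates, batch_sizes), key=lambda c: evaluate(*c))
print(best)  # prints (0.01, 32)
```

Grid search is workable when there are only a handful of hyperparameters, but the number of combinations grows multiplicatively with each added hyperparameter.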
The article describes three different techniques for machine learning hyperparameter tuning. Briefly, an old technique called evolutionary optimization works well for the new generation of neural systems, including transformer architecture systems, that have billions of weights and billions of hyperparameter combinations.
create population of random solutions
loop max_generation times
  pick two parent solutions
  use crossover to create a child solution
  use mutation to modify child solution
  evaluate the child solution
  replace a weak solution in population with child
end-loop
return best solution found
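The loop above can be sketched in a few dozen lines of Python. Everything below is made up for illustration — the two hyperparameters, their candidate values, the population size, and the evaluate() function, which in a real scenario would run a full (and expensive) training pass and return a validation error.

```python
import random

# Hypothetical candidate values for two hyperparameters.
LEARN_RATES = [0.001, 0.005, 0.010, 0.050, 0.100]
HIDDEN_SIZES = [10, 20, 40, 80, 160]
GENE_SIZES = [len(LEARN_RATES), len(HIDDEN_SIZES)]

def evaluate(sol):
    # Placeholder fitness (lower is better); pretends lr=0.01, nh=40 is ideal.
    lr, nh = LEARN_RATES[sol[0]], HIDDEN_SIZES[sol[1]]
    return abs(lr - 0.010) + abs(nh - 40) / 100.0

def random_solution():
    # A solution is a list of indices, one per hyperparameter.
    return [random.randrange(n) for n in GENE_SIZES]

def crossover(p1, p2):
    # Single-point crossover: first genes from p1, remainder from p2.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(sol, rate=0.2):
    # Randomly reassign each gene with probability `rate`.
    for i, n in enumerate(GENE_SIZES):
        if random.random() < rate:
            sol[i] = random.randrange(n)
    return sol

def evolve(pop_size=6, max_generations=50):
    population = [random_solution() for _ in range(pop_size)]
    fitnesses = [evaluate(s) for s in population]
    for _ in range(max_generations):
        i, j = random.sample(range(pop_size), 2)      # pick two parents
        child = mutate(crossover(population[i], population[j]))
        child_fit = evaluate(child)
        worst = max(range(pop_size), key=lambda k: fitnesses[k])
        if child_fit < fitnesses[worst]:              # replace a weak solution
            population[worst], fitnesses[worst] = child, child_fit
    best = min(range(pop_size), key=lambda k: fitnesses[k])
    return population[best], fitnesses[best]

random.seed(0)  # for reproducibility of this demo run
best, fit = evolve()
print("best hyperparameters:", LEARN_RATES[best[0]], HIDDEN_SIZES[best[1]])
```

The key economy is in the loop: each generation costs just one evaluation (the child), which matters when a single evaluation is a training run that takes days.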
I’m quoted in the article: “Evolutionary optimization for hyperparameter tuning was used as early as the 1980s when even simple neural prediction models were major challenges because of the limited machine memory and CPU power.”
“The recent focus on huge prediction models for natural language processing based on transformer architecture, where a single training run can take days and cost thousands of dollars, has spurred renewed interest in evolutionary optimization for hyperparameter tuning.”
Mutated animals (usually by radiation) that become intelligent are a staple of science fiction. Researcher Viktor Toth has taught rats how to play Doom II. No mutation needed.