Evolutionary Selection

Supporting Technique

Evolutionary selection is a technique that uses principles of biological evolution to optimize model parameters or structures.

Evolutionary selection differs from standard iterative learning in what is being learned: rather than the values of parameters, it is which parameters or structures are included in the model and how they relate to one another. Selection techniques modeled on biological evolution can be used to generate or tweak a wide variety of algorithms.

A large number of strategies in reinforcement learning, or parameter combinations in supervised learning, can be generated randomly and tested against one another to find the most promising candidates. Selecting several candidates for further processing, rather than a single one, reduces the likelihood of the algorithm getting stuck in a local optimum.
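The following is a minimal sketch of this generate-and-select step. The two parameters (a learning rate and a depth) and the fitness function are illustrative assumptions, not taken from the source; in practice the fitness would come from evaluating each candidate on the task.

```python
import random

def fitness(candidate):
    # Hypothetical objective: higher is better, peaking near lr=0.1, depth=6.
    lr, depth = candidate
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

# Generate a large number of random candidate parameter combinations.
population = [(random.uniform(0.001, 1.0), random.randint(1, 12))
              for _ in range(200)]

# Keep the most promising candidates for further processing.
survivors = sorted(population, key=fitness, reverse=True)[:10]
print(survivors[:3])
```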

Alternatively, small changes can be made to an existing parameter combination at regular intervals; changes that are observed to improve the results are retained for future episodes. Note that such 'algorithm tweaking' is normally only referred to as evolutionary when applied to supervised learning; from the viewpoint of reinforcement learning it is a defining feature common to all algorithms, unless it is applied at a meta-level.
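A minimal sketch of this retain-if-improved loop is shown below. The parameter vector and the evaluation function are assumptions for illustration; in practice the score would be, for example, validation accuracy or the return collected over reinforcement-learning episodes.

```python
import random

def evaluate(params):
    # Hypothetical score; higher is better.
    return -sum((p - 0.5) ** 2 for p in params)

params = [random.random() for _ in range(4)]
best_score = evaluate(params)

for episode in range(1000):
    candidate = [p + random.gauss(0, 0.05) for p in params]  # small change
    score = evaluate(candidate)
    if score > best_score:  # retain the change only if results improve
        params, best_score = candidate, score

print(params, best_score)
```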

For example, consider a genetic algorithm used to optimize the hyperparameters of a neural network. The algorithm starts with a population of random hyperparameter combinations, evaluates their performance, and selects the best-performing combinations to create a new generation through crossover and mutation. This process is repeated until performance stops improving or a fixed budget of generations is exhausted, yielding a strong, though not necessarily globally optimal, configuration.
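The sketch below illustrates this generational loop under simplifying assumptions: the two hyperparameters (learning rate and hidden-layer width), the crossover and mutation rules, and the fitness function are hypothetical stand-ins for training a network and measuring validation accuracy.

```python
import random

def fitness(ind):
    lr, hidden = ind
    # Stand-in for training a network and measuring validation accuracy.
    return -((lr - 0.05) ** 2) - 0.0001 * (hidden - 64) ** 2

def crossover(a, b):
    return (a[0], b[1])  # combine one hyperparameter from each parent

def mutate(ind):
    lr, hidden = ind
    return (max(1e-4, lr + random.gauss(0, 0.01)),
            max(1, hidden + random.randint(-8, 8)))

# Initial population of random hyperparameter combinations.
population = [(random.uniform(1e-4, 0.5), random.randint(8, 256))
              for _ in range(20)]

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]  # select the best-performing combinations
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children  # new generation

print(max(population, key=fitness))
```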

Related terms
Reinforcement Learning, Genetic Algorithms, Metaheuristics