The difference between evolutionary selection and standard iterative learning is that evolutionary selection learns which parameters or structures to include in the model, and how they relate to one another, rather than learning their values. Selection techniques modelled on biological evolution can be used to generate or tweak a variety of algorithms:
- A large number of strategies (in reinforcement learning) or parameter combinations (in supervised learning) can be generated randomly and tested against one another to find the most promising candidates. Selecting several candidates for further processing, rather than only the single best, reduces the likelihood of the algorithm getting stuck in a local minimum;
- On a regular basis, small changes can be made to an existing parameter combination. If the changes are observed to improve the results, they are retained for future episodes. Note that such ‘algorithm tweaking’ is normally only referred to as evolutionary when applied to supervised learning; from the viewpoint of reinforcement learning it is a standard defining feature common to all algorithms unless it is applied at a meta-level. A sketch of a combined generate-select-mutate loop is given after this list.
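
The following Python sketch illustrates how the two techniques above can be combined: a population of parameter combinations is generated at random, several of the best-scoring candidates are retained, and mutated copies of them fill the next generation. The fitness function, population size and mutation scale used here are illustrative assumptions, not taken from the original text; in practice the fitness would be a validation score (supervised learning) or an episode return (reinforcement learning).

```python
import random

def fitness(params):
    # Toy fitness function (illustrative assumption): higher is better,
    # maximised when every parameter is close to 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

def random_candidate(n_params):
    # A randomly generated parameter combination (or strategy).
    return [random.uniform(0.0, 1.0) for _ in range(n_params)]

def mutate(params, scale=0.05):
    # A small random change to an existing parameter combination.
    return [p + random.gauss(0.0, scale) for p in params]

def evolutionary_search(n_params=4, population_size=50, n_survivors=5,
                        n_generations=30):
    # 1. Generate a large number of candidates at random.
    population = [random_candidate(n_params) for _ in range(population_size)]
    for _ in range(n_generations):
        # 2. Test the candidates against one another and keep several of
        #    the most promising ones, not just the single best, to reduce
        #    the risk of getting stuck in a local minimum.
        survivors = sorted(population, key=fitness, reverse=True)[:n_survivors]
        # 3. Refill the population with mutated copies of the survivors;
        #    changes that improve the results survive into later generations.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - n_survivors)
        ]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolutionary_search()
    print("best parameters:", [round(p, 3) for p in best])
```

Because only the survivors of each generation are copied and mutated, improvements are retained automatically from one generation to the next, which is the selection step described above.
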
- alias:
- subtype:
- has functional building block: FBB_Value prediction, FBB_Classification, FBB_Behavioural modelling
- has learning style: LST_Supervised, LST_Reinforcement
- has relevance: REL_Relevant
- mathematically similar to:
- typically supports: