Stacking is the technique of generating a set of first-level models to make predictions on a given set of training data, then feeding their outputs into another, second-level machine learning algorithm that produces a consolidated output. The first-level models are often of the same type, generated with different training data or hyperparameters, but stacking can also combine outputs from completely different models, e.g. a neural network, a linear regression function and a decision tree.
For classification use cases, the first-level models should pass their class probabilities, rather than simply the raw class labels, to the second-level model.
Research has suggested that linear regression (for value prediction) and logistic regression (for classification) are the best choices for the second-level model.
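As a concrete illustration, the sketch below stacks heterogeneous first-level classifiers under a logistic regression second-level model. It uses scikit-learn (an assumption; the entry names no library), and the synthetic dataset, model choices and hyperparameters are illustrative only. The `stack_method="predict_proba"` setting implements the point above: the second-level model sees class probabilities, not raw labels.

```python
# A minimal stacking sketch, assuming scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First-level models: deliberately different model types, echoing the
# heterogeneous example in the text (neural network, tree-based models).
first_level = [
    ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# Second-level model: logistic regression, fed the first-level models'
# class probabilities rather than their hard class labels.
stack = StackingClassifier(
    estimators=first_level,
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
)
stack.fit(X_train, y_train)
print("Stacked accuracy: %.3f" % stack.score(X_test, y_test))
```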
Note that most neural networks can be regarded as an extension of the stacking paradigm in which each neuron is equivalent to a model and the number of layers is increased beyond two.
- alias: Blending, Stacked generalization
- subtype:
- has functional building block: FBB_Classification, FBB_Value prediction
- has learning style: LST_Supervised
- has relevance: REL_Relevant
- mathematically similar to:
- typically supports: