An autoencoder is a neural network that resembles a multilayer perceptron in both its overall structure and its neuron behaviour. However, its output layer has the same number of neurons as its input layer, and training tries to make each output value as close as possible to the corresponding input value for every training data item. A trained autoencoder can then be used to denoise input data: noisy inputs are passed through the network and replaced with the reconstructed outputs.
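The reconstruction objective described above can be sketched as follows. This is a minimal illustration, not a reference implementation: it assumes a single hidden layer with sigmoid units, mean squared reconstruction error trained by plain gradient descent, and a small set of randomly generated binary vectors standing in for real training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 20 binary input vectors of length 8.
X = rng.integers(0, 2, size=(20, 8)).astype(float)

n_in, n_hidden = 8, 3              # narrow hidden layer forces a compressed code
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(500):
    H = sigmoid(X @ W1 + b1)       # encoder: hidden activations
    Y = sigmoid(H @ W2 + b2)       # decoder: reconstruction of the input
    losses.append(np.mean((Y - X) ** 2))
    # Backpropagate the reconstruction error (scaling folded into the learning rate).
    dY = (Y - X) * Y * (1 - Y) / len(X)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
```

After training, `losses[-1]` should be well below `losses[0]`: the network has learned to reproduce its inputs through the narrow hidden layer, which is what allows it to be reused as a denoiser.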

A sparse autoencoder places a constraint on the total amount of activation permitted at any one time within the hidden neurons that link the input and output layers. This forces these neurons to learn the most salient features in the training data, which makes the network useful for both feature discovery and dimensionality reduction. If the inputs are images, the learned features can be visualised by stimulating each hidden neuron in turn and reconstructing the corresponding pattern at the output layer. A stacked autoencoder uses this facility to initialise, or pre-train, a multilayer neural network: starting weights for each layer are obtained by training an autoencoder that reconstructs the activations of the previous layer and thereby learns its salient features. See also the entry about restricted Boltzmann machines, which perform a similar task.
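The sparsity constraint can be illustrated by extending the plain reconstruction objective with a penalty on hidden activity. The sketch below is one possible formulation, assuming an L1 penalty on sigmoid hidden activations (other formulations, such as a KL-divergence penalty on average activation, are also common); the data, layer sizes, and penalty weight `lam` are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(20, 8)).astype(float)   # hypothetical toy data

n_in, n_hidden = 8, 16     # over-complete: sparsity, not a bottleneck, limits capacity
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
lam = 0.3                  # sparsity weight (hypothetical setting)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mean_before = sigmoid(X @ W1 + b1).mean()   # average hidden activity at the start

for _ in range(500):
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Loss = mean squared reconstruction error + lam * mean hidden activation.
    # Since sigmoid activations are positive, the L1 penalty gradient is just lam/H.size.
    dY = (Y - X) * Y * (1 - Y) / X.size
    dH = (dY @ W2.T + lam / H.size) * H * (1 - H)
    W2 -= H.T @ dY; b2 -= dY.sum(0)
    W1 -= X.T @ dH; b1 -= dH.sum(0)

mean_after = sigmoid(X @ W1 + b1).mean()    # average hidden activity after training
```

With the penalty in place, the mean hidden activation falls during training, so each input ends up represented by a small number of strongly responding neurons; the columns of `W1` can then be inspected as the learned features.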

Autoassociator, Diabolo network
Sparse autoencoder, Stacked autoencoder
has functional building block
FBB_Classification, FBB_Dimensionality reduction, FBB_Feature discovery
has input data type
IDT_Binary vector
has internal model
INM_Neural network
has output data type
ODT_Binary vector
has learning style
has parametricity
PRM_Nonparametric with hyperparameter(s)
has relevance
sometimes supports
mathematically similar to