A Restricted Boltzmann Machine (RBM) is a type of neural network used for unsupervised learning.
RBMs are typically used for feature learning, dimensionality reduction, and initializing deep neural networks. They consist of a visible (input) layer and a single hidden layer, with connections only between the layers and none within a layer. The weights between the layers are randomly initialized, and training adjusts them so the hidden units capture the important features of the input data.
For example, in image processing, an RBM can learn to identify features such as edges or textures by adjusting the weights between the input and hidden layers. These learned features can then be used to initialize a deep neural network for tasks like image classification.
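The structure described above can be sketched in a few lines of NumPy. This is a minimal, illustrative model (the layer sizes, names, and input vector are arbitrary, not from the original text): a weight matrix connects visible and hidden layers, each hidden unit's activation probability acts as a feature detector, and the joint energy ties the two layers together.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # between-layer weights, randomly initialized
b = np.zeros(n_visible)  # visible biases
c = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_probs(v):
    # P(h_j = 1 | v): each hidden unit responds to a learned feature of the input
    return sigmoid(v @ W + c)

def energy(v, h):
    # E(v, h) = -b.v - c.h - v'Wh; low energy = configurations the model prefers
    return -(b @ v) - (c @ h) - (v @ W @ h)

v = np.array([1, 0, 1, 1, 0, 1], dtype=float)  # a binary input, e.g. thresholded pixels
p_h = hidden_probs(v)
```

Note there is no visible-to-visible or hidden-to-hidden term in `energy`: the bipartite restriction is what makes the conditional `hidden_probs` factorize into independent per-unit sigmoids.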
RBMs are important because they provide a way to learn useful representations of data without labeled examples. They are foundational to more advanced models like Deep Belief Networks (DBNs) and can be used to pre-train layers in deep learning architectures.
A key difference between RBMs and autoencoders is the training method. An RBM is an energy-based model: training adjusts the weights so that configurations matching the observed data have low energy (equivalently, high probability), typically using an approximation such as contrastive divergence rather than an exact gradient. This contrasts with autoencoders, which are trained by backpropagating a reconstruction error.
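One common way to carry out this energy-based training is contrastive divergence with a single Gibbs step (CD-1). The sketch below is illustrative, not a reference implementation: it nudges the weights to lower the energy of the data and raise the energy of the model's own reconstructions. All names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.1):
    # Positive phase: hidden activations driven by the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # stochastic binary hidden states
    # Negative phase: one Gibbs step back to the visible layer
    pv1 = sigmoid(h0 @ W.T + b)   # reconstruction probabilities
    ph1 = sigmoid(pv1 @ W + c)
    # Data statistics minus reconstruction statistics approximates the gradient
    W = W + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b = b + lr * (v0 - pv1)
    c = c + lr * (ph0 - ph1)
    return W, b, c

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b, c = np.zeros(n_visible), np.zeros(n_hidden)
v0 = np.array([1, 0, 1, 1, 0, 1], dtype=float)
W, b, c = cd1_update(W, b, c, v0)
```

Unlike backpropagation through a reconstruction loss, no error signal flows through the network here; the update is the difference between two sets of correlation statistics.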
A Gaussian-binary Restricted Boltzmann Machine (GRBM) extends the RBM to continuous input data by modeling the visible units with Gaussian distributions while keeping the hidden units binary. This lets the model handle real-valued data, such as raw pixel intensities, that a standard binary RBM cannot represent directly.
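In one common GRBM formulation (a sketch under assumed conventions, with per-unit standard deviations often fixed to 1 on standardized data), the hidden units are still binary sigmoids driven by the scaled continuous input, while sampling the visible layer draws from a Gaussian instead of a Bernoulli:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 4, 2
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)      # visible biases act as Gaussian means
c = np.zeros(n_hidden)
sigma = np.ones(n_visible)   # per-unit std; commonly 1 after standardizing the data

def hidden_probs(v):
    # Hidden units stay binary: P(h_j = 1 | v) = sigmoid(c_j + sum_i w_ij v_i / sigma_i)
    return sigmoid((v / sigma) @ W + c)

def sample_visible(h):
    # Visible units are continuous: v_i | h ~ N(b_i + sigma_i * sum_j w_ij h_j, sigma_i^2)
    mean = b + sigma * (h @ W.T)
    return rng.normal(mean, sigma)

v = np.array([0.5, -1.2, 0.3, 2.0])   # real-valued input, e.g. standardized pixels
p_h = hidden_probs(v)
v_sample = sample_visible((p_h > 0.5).astype(float))
```

The only structural change from the binary RBM is the visible conditional; the bipartite connectivity and the energy-based training procedure carry over unchanged.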
- Alias
  - RBM, Gaussian-binary Restricted Boltzmann Machine, GRBM
- Related terms
  - Neural Network, Autoencoder, Energy Minimization