A neural network is a computational model inspired by the biological brain.
Neural networks are used in fields such as image and speech recognition, natural language processing, and game playing. They consist of neurons linked by synapses. A neuron receives input impulses via synapses and applies a function to those impulses that determines the strength of the output impulse it sends to further neurons via further synapses. During training, the processing of an impulse by a neuron slightly changes the way that neuron will react to similar impulses in the future. This allows the whole network to adapt itself to the data it processes, learning from experience much as the brain does.
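The idea that processing an impulse changes how a neuron reacts later can be sketched with a single artificial neuron and a perceptron-style update rule. This is an illustrative toy (the function names, learning rate, and threshold are assumptions, not from the text):

```python
# A minimal sketch of one neuron that "learns" to fire on a given input,
# assuming a perceptron-style weight update (hypothetical example).
def fire(weights, inputs, threshold=0.5):
    # The weighted sum of input impulses decides the output impulse.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def train_step(weights, inputs, target, rate=0.1):
    # Processing an impulse nudges the weights, changing future reactions.
    error = target - fire(weights, inputs)
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(20):
    weights = train_step(weights, [1, 1], target=1)  # learn to fire on (1, 1)

print(fire(weights, [1, 1]))  # the neuron now fires for this input
```

After a few updates the weights grow just enough for the weighted sum to cross the threshold, after which the error is zero and the weights stop changing.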
The most important ways in which the various types of neural networks differ are their topology, the nature of their input and output values, their propagation function, and their activation function.
- Topology: Some types of neural networks have strict rules about which neurons can accept external input and provide external output, and maintain a concept of layers, where neurons in each layer may only accept input from the preceding layer and provide output to the subsequent layer. Others consist of a homogeneous set of neurons with few or no restrictions on which pairs can communicate with one another.
- Input and output values: Each can be binary, categorical, or quantitative.
- Propagation function: The way in which input impulses received via synapses are combined and processed to form a single input value to the activation function. In a typical neural network, each synapse has a weight that is multiplied by the value of each impulse received along it, and the results of the multiplications for all input synapses are added together to produce the input value to the activation function. The weights are what most typically change as a neural network learns.
- Activation function: Broadly decides whether or not a neuron should fire based on its input. The simplest type is a binary threshold function that outputs 1 if the input is above a certain value and 0 otherwise. In practice, smooth functions that approximate the binary threshold are normally used instead, including the logistic (sigmoid) function also used in logistic regression. All neurons in a network using such a function will fire whatever input they receive, but those that are not activated will fire so weakly as to have no practical effect on their successors.
In summary, neural networks are powerful tools for learning from data and making predictions. They are an important concept because of their wide applicability and ability to model complex patterns.
- Alias
  - Artificial Neural Network
  - NN
- Related terms
  - Deep Learning
  - Activation Function
  - Propagation Function