Backpropagation


Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent.

Description

The method calculates the gradient of a loss function with respect to all the weights in the network. This gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function.
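
A minimal sketch of this loop, assuming a single linear neuron, a squared-error loss, and plain gradient descent (all variable names here are illustrative, not part of the article):

import numpy as np

# Toy data: inputs x and desired outputs y for a single linear neuron y_hat = w*x + b.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0
learning_rate = 0.05

for step in range(500):
    y_hat = w * x + b                     # forward pass
    loss = np.mean((y_hat - y) ** 2)      # squared-error loss
    # Gradient of the loss with respect to each weight.
    grad_w = np.mean(2 * (y_hat - y) * x)
    grad_b = np.mean(2 * (y_hat - y))
    # The optimization method (here plain gradient descent) uses the gradient
    # to update the weights, reducing the loss.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b, loss)   # w and b approach 2 and 1; the loss approaches 0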

Backpropagation requires a known, desired output for each input value in order to calculate the loss function gradient. It is therefore usually considered to be a supervised learning method, although it is also used in some unsupervised networks such as autoencoders.

It is a generalization of the delta rule to multi-layered feedforward networks, made possible by using the chain rule to iteratively compute gradients for each layer.
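
Written out (assuming the common notation z^(l) = W^(l) a^(l-1) + b^(l), a^(l) = f(z^(l)), and a loss E, none of which appears elsewhere in this article), the chain rule yields a layer-by-layer recursion:

\delta^{(L)} = \nabla_{a^{(L)}} E \odot f'\!\left(z^{(L)}\right),
\qquad
\delta^{(l)} = \left(W^{(l+1)}\right)^{\top} \delta^{(l+1)} \odot f'\!\left(z^{(l)}\right),
\qquad
\frac{\partial E}{\partial W^{(l)}} = \delta^{(l)} \left(a^{(l-1)}\right)^{\top}.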

Backpropagation requires that the activation function used by the artificial neurons (or "nodes") be differentiable.
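
The backward pass multiplies by the derivative of the activation function at every layer, which is why that derivative must exist. The sketch below assumes a two-layer network with a sigmoid activation and a squared-error loss; it is an illustration, not a reference implementation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)      # derivative of the sigmoid, required by backpropagation

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))       # one input example with 3 features
y = np.array([[1.0]])             # known, desired output

W1, b1 = rng.normal(size=(4, 3)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))

# Forward pass.
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)

# Backward pass: each error term uses the derivative of the activation function.
delta2 = (a2 - y) * sigmoid_prime(z2)            # output-layer error
delta1 = (W2.T @ delta2) * sigmoid_prime(z1)     # hidden-layer error via the chain rule

grad_W2 = delta2 @ a1.T                          # gradients fed to the optimizer
grad_W1 = delta1 @ x.T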

See also

External links