This library is a simple implementation of the backpropagation algorithm. It is written in Python and uses the NumPy library for matrix operations.
In machine learning, backpropagation is a gradient estimation method commonly used for training a neural network by computing its parameter updates (source: Wikipedia).
The algorithm is particularly useful in neural networks, where it is used to update the weights of the network in order to minimize its error.
The backpropagation algorithm consists of the following steps:
- Forward pass
- Compute the error
- Backward pass
- Update the weights
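As a rough sketch (not the library's actual API), the four steps above can be combined into a minimal NumPy training loop for a single linear neuron; all names and values here are made up for illustration:

```python
import numpy as np

# Illustrative sketch: the four backpropagation steps for a single
# linear neuron trained with mean squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # inputs
true_w = np.array([2.0, -3.0])
y = X @ true_w + 1.0                   # targets from a known linear rule

w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(200):
    y_hat = X @ w + b                  # 1. forward pass
    error = y_hat - y                  # 2. compute the error
    grad_w = 2 * X.T @ error / len(X)  # 3. backward pass (MSE gradient)
    grad_b = 2 * error.mean()
    w -= lr * grad_w                   # 4. update the weights
    b -= lr * grad_b
```

After a few hundred iterations the learned weights approach the true ones, which is the whole point of the repeated forward/backward cycle.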
The forward pass is the process of computing the output of the network given an input. The output of the network is computed by propagating the input through the network and applying the activation function to the output of each layer.
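A forward pass through a small multi-layer network can be sketched as below, assuming a sigmoid activation; the 2-3-1 layer sizes and the fixed weights are hypothetical, chosen only for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate x through each (weights, bias) pair, applying the activation."""
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

# Hypothetical 2-3-1 network with fixed weights, for illustration only.
layers = [
    (np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]), np.zeros(3)),
    (np.array([[1.0, -1.0, 0.5]]), np.array([0.1])),
]
out = forward(np.array([1.0, 2.0]), layers)
```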
Below is a simple example of a neural network that is called "perceptron" with one input layer and one output layer.
Source: Sharp Sight.
The input values here correspond to the values that are fed into the network. The summation node, or neuron, computes the dot product between the input values and the weights of the network, plus a value called the bias. In other words:

z = w · x + b

Where w is the weight vector, x is the input vector, and b is the bias.
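The weighted sum w · x + b described above is a one-liner with NumPy's dot product; the values here are illustrative:

```python
import numpy as np

x = np.array([0.5, 1.5, -1.0])   # input values
w = np.array([0.2, -0.4, 0.7])   # weights
b = 0.1                          # bias

z = np.dot(w, x) + b             # summation node: w . x + b
```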
After that, the summation is passed through an activation function. The activation function is used to introduce non-linearity into the network; it can be a step function, a sigmoid function, a ReLU function, etc. The one used in the picture above is called the step function:

f(z) = 1 if z ≥ 0, 0 otherwise
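A minimal step-function activation, assuming the usual threshold at zero:

```python
import numpy as np

def step(z):
    """Step activation: 1 when z >= 0, else 0 (threshold at zero assumed)."""
    return np.where(z >= 0, 1, 0)
```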
The output of the network is the output of the activation function.
The error is computed by comparing the output of the network with the true value, using a loss function, for example the mean squared error:

MSE = (1/n) Σᵢ (yᵢ − ŷᵢ)²

Where yᵢ is the true value, ŷᵢ is the predicted output, and n is the number of samples.
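The mean squared error can be sketched in a few lines of NumPy:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between true values and predictions."""
    return np.mean((y_true - y_pred) ** 2)
```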
The backward pass is the process of computing the gradient of the loss function with respect to the weights of the network.
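For a single linear neuron with MSE loss, the gradients follow directly from the chain rule; the sketch below (with made-up values, not the library's API) also verifies one component numerically with a finite difference:

```python
import numpy as np

# Gradient of the MSE loss for a single linear neuron y_hat = w . x + b,
# derived with the chain rule (illustrative sketch with made-up values).
x = np.array([1.0, 2.0])
y = 3.0
w = np.array([0.5, -0.5])
b = 0.2

y_hat = np.dot(w, x) + b
# dL/dw = 2 * (y_hat - y) * x ;  dL/db = 2 * (y_hat - y)
grad_w = 2 * (y_hat - y) * x
grad_b = 2 * (y_hat - y)

# Numerical check of dL/dw[0] with a small finite difference.
eps = 1e-6
w_eps = w + np.array([eps, 0.0])
num = ((np.dot(w_eps, x) + b - y) ** 2 - (y_hat - y) ** 2) / eps
```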
Source: Analytics Vidhya.
