Backpropagation in Deep Neural Networks with Example
OVERVIEW
For this tutorial, we're going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will each include a bias.
Things You Will Learn from This Tutorial
- Building a small neural network
- Initializing the weights and biases randomly
- Forward passing the inputs and calculating the cost function
- Computing gradients and backpropagating
Initializing the Network with an Example
Below is the structure of our neural network: 2 inputs, one hidden layer with 2 neurons, and 2 output neurons. A bias is also attached to the hidden and output layers.
The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.
For the rest of this tutorial we’re going to work with a single training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.
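To make the setup concrete, here is a minimal Python sketch of how such a network could be initialized. The variable names (i1, i2, w1..w8, b1, b2) are illustrative labels I'm introducing here, and the starting values are drawn randomly, as described above:

```python
import random

# Single training example: two inputs and the two target outputs
i1, i2 = 0.05, 0.10
target_o1, target_o2 = 0.01, 0.99

# Randomly initialize the 8 weights (4 input->hidden, 4 hidden->output)
# and the two biases (one for the hidden layer, one for the output layer).
random.seed(0)
w1, w2, w3, w4 = [random.uniform(0, 1) for _ in range(4)]   # inputs -> hidden neurons
w5, w6, w7, w8 = [random.uniform(0, 1) for _ in range(4)]   # hidden -> output neurons
b1, b2 = random.uniform(0, 1), random.uniform(0, 1)         # hidden-layer and output-layer bias
```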
Forward Pass
To begin, let's see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this, we'll feed those inputs forward through the network.
We figure out the total net input to each hidden layer neuron, squash the total net input using an activation function (here we use the logistic function), then repeat the process with the output layer neurons.
Here is how we calculate the total net input for h1 (writing i1, i2 for the inputs, w1, w2 for the weights from the inputs to h1, and b1 for the hidden-layer bias):

net_h1 = w1 * i1 + w2 * i2 + b1

We then squash this net input with the logistic function to get the output of h1 (and do the same for h2):

out_h1 = 1 / (1 + e^(-net_h1))
We repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.
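Continuing with the variable names from the sketch above, the whole forward pass can be written in a few lines. The squared-error cost at the end is an assumption on my part; it matches the cost used in the step-by-step example referenced at the bottom of this post:

```python
import math

def sigmoid(x):
    """Logistic function used to squash the total net input."""
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: total net input, then squash with the logistic function
net_h1 = w1 * i1 + w2 * i2 + b1
net_h2 = w3 * i1 + w4 * i2 + b1
out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)

# Output layer: same process, using the hidden outputs as inputs
net_o1 = w5 * out_h1 + w6 * out_h2 + b2
net_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)

# Total error over the two output neurons (assumed squared-error cost)
E_total = 0.5 * (target_o1 - out_o1) ** 2 + 0.5 * (target_o2 - out_o2) ** 2
```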
THE BACKWARD PASS (BACKPROPAGATION)
Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and for the network as a whole.
Output Layer
By the chain rule, the gradient of the total error with respect to an output-layer weight breaks into three pieces, and we need to figure out each piece of this equation. First, how much does the total error change with respect to the output?
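Written out for one illustrative output-layer weight, say w5 connecting h1 to o1, and assuming the squared-error cost and logistic activation above, the chain rule gives:

```latex
\frac{\partial E_{total}}{\partial w_5}
  = \frac{\partial E_{total}}{\partial out_{o1}}
    \cdot \frac{\partial out_{o1}}{\partial net_{o1}}
    \cdot \frac{\partial net_{o1}}{\partial w_5}
  = (out_{o1} - target_{o1}) \cdot out_{o1}(1 - out_{o1}) \cdot out_{h1}
```

The first factor answers the question above (how the total error changes with the output), the second is the derivative of the logistic function, and the third is simply the hidden output feeding into w5.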
Likewise, we will find the gradients for all of the other weights. Let's do it.
To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, eta, which we’ll set to 0.5):
We perform the actual updates in the neural network only after we also have the new weights leading into the hidden layer neurons; that is, we use the original weights, not the updated ones, when we continue the backpropagation below.
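As a sketch in code (same illustrative names and assumptions as before, with eta = 0.5), the gradient and new value for w5 look like this; the other output-layer weights follow the same pattern:

```python
eta = 0.5  # learning rate

# Chain rule pieces for w5
dE_dout_o1 = out_o1 - target_o1            # how the total error changes with out_o1
dout_o1_dnet_o1 = out_o1 * (1 - out_o1)    # derivative of the logistic function
dnet_o1_dw5 = out_h1                       # how net_o1 changes with w5

dE_dw5 = dE_dout_o1 * dout_o1_dnet_o1 * dnet_o1_dw5

# Gradient descent step. Keep the original w5 around: the hidden-layer
# gradients below must be computed with the old weights, and every weight
# is swapped for its new value only at the very end.
w5_new = w5 - eta * dE_dw5
```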
HIDDEN LAYER
We will use a process similar to the one for the output layer, but slightly different to account for the fact that the output of each hidden layer neuron contributes to the output (and therefore the error) of multiple output neurons.
All set. Putting all of these pieces together, we can compute the gradient for each hidden-layer weight and update it in the same way.
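In code, for one illustrative hidden-layer weight (w1, from input i1 to h1), the error signals from both output neurons are carried back through the original weights w5 and w7 before being multiplied by the local derivatives:

```python
# Error signal ("delta") at each output neuron
delta_o1 = (out_o1 - target_o1) * out_o1 * (1 - out_o1)
delta_o2 = (out_o2 - target_o2) * out_o2 * (1 - out_o2)

# h1 feeds both output neurons, so both deltas flow back through
# the original (not yet updated) weights w5 and w7.
dE_dout_h1 = delta_o1 * w5 + delta_o2 * w7
dout_h1_dnet_h1 = out_h1 * (1 - out_h1)   # logistic derivative at h1
dnet_h1_dw1 = i1                          # how net_h1 changes with w1

dE_dw1 = dE_dout_h1 * dout_h1_dnet_h1 * dnet_h1_dw1
w1_new = w1 - eta * dE_dw1
```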
It's done, we have updated all our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109. After this first round of backpropagation, the total error is down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.0000351085. At that point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and 0.984065734 (vs 0.99 target).
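For completeness, here is a small self-contained sketch that puts the forward and backward passes into a training loop. It uses the same illustrative names and the assumed squared-error cost; because the weights are initialized randomly (and the biases are left fixed, since only the weights are updated in this walkthrough), the exact numbers it prints will differ from the hand-calculated values quoted above:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Single training example
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99

# Random initialization: w[0..3] are input->hidden, w[4..7] are hidden->output
random.seed(0)
w = [random.uniform(0, 1) for _ in range(8)]
b1, b2 = random.uniform(0, 1), random.uniform(0, 1)
eta = 0.5

for step in range(10000):
    # Forward pass
    out_h1 = sigmoid(w[0] * i1 + w[1] * i2 + b1)
    out_h2 = sigmoid(w[2] * i1 + w[3] * i2 + b1)
    out_o1 = sigmoid(w[4] * out_h1 + w[5] * out_h2 + b2)
    out_o2 = sigmoid(w[6] * out_h1 + w[7] * out_h2 + b2)
    E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2

    # Backward pass: deltas at the output and hidden neurons
    d_o1 = (out_o1 - t1) * out_o1 * (1 - out_o1)
    d_o2 = (out_o2 - t2) * out_o2 * (1 - out_o2)
    d_h1 = (d_o1 * w[4] + d_o2 * w[6]) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w[5] + d_o2 * w[7]) * out_h2 * (1 - out_h2)

    # Gradients for every weight, all computed with the old weights
    grads = [d_h1 * i1, d_h1 * i2, d_h2 * i1, d_h2 * i2,                  # input  -> hidden
             d_o1 * out_h1, d_o1 * out_h2, d_o2 * out_h1, d_o2 * out_h2]  # hidden -> output

    # Update all weights together (biases kept fixed in this walkthrough)
    w = [wi - eta * g for wi, g in zip(w, grads)]

print("error at the last training step:", E_total)
print("outputs:", out_o1, out_o2, "targets:", t1, t2)
```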
References Used
- http://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/
- https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
Code available on GitHub:
I have hand-calculated everything. Let me know your feedback. If you like it, please recommend and share it. Thank you.