
@qxj
Last active January 10, 2016 12:36
A bare-bones neural network implementation to illustrate the inner workings of backpropagation. https://iamtrask.github.io/2015/07/12/basic-python-network/
#!/usr/bin/env python
import numpy as np

# Input dataset: 4 samples, 3 features each (the third column acts as a bias input)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
# Target outputs, shaped (4, 1)
y = np.array([[0, 1, 1, 0]]).T

alpha, hidden_dim = (0.5, 4)
np.random.seed(1)

# Randomly initialize our weights with mean 0
synapse_0 = 2 * np.random.random((3, hidden_dim)) - 1
synapse_1 = 2 * np.random.random((hidden_dim, 1)) - 1

for j in range(60000):
    # Feed forward through layers 0, 1, and 2 (sigmoid activations)
    layer_1 = 1 / (1 + np.exp(-np.dot(X, synapse_0)))
    layer_2 = 1 / (1 + np.exp(-np.dot(layer_1, synapse_1)))

    # Backpropagation:
    ## output error, delta = -(y - a) * f'(z)
    layer_2_error = -(y - layer_2)
    layer_2_delta = layer_2_error * (layer_2 * (1 - layer_2))

    ## hidden layer error, delta(l) = ( delta(l+1) . syn(l)^T ) * f'(z(l))
    layer_1_error = layer_2_delta.dot(synapse_1.T)
    layer_1_delta = layer_1_error * (layer_1 * (1 - layer_1))

    # Update weights by gradient descent
    synapse_1 -= alpha * layer_1.T.dot(layer_2_delta)
    synapse_0 -= alpha * X.T.dot(layer_1_delta)

    if (j % 10000) == 0:
        print("Error:" + str(np.mean(np.abs(layer_2_error))))
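# --- Usage sketch (not part of the original gist) ---
# After the loop above, synapse_0 and synapse_1 hold the trained weights.
# The predict() helper below is a hypothetical name added here for illustration;
# it just repeats the forward pass so the outputs can be compared against y.
def predict(inputs, synapse_0, synapse_1):
    layer_1 = 1 / (1 + np.exp(-np.dot(inputs, synapse_0)))
    layer_2 = 1 / (1 + np.exp(-np.dot(layer_1, synapse_1)))
    return layer_2

# After 60000 iterations the predictions should be close to [0, 1, 1, 0].
print(predict(X, synapse_0, synapse_1))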