Created by @michaelChein, February 11, 2019
Training-loop pseudocode for the backpropagation article
def train_network(network, iterations, alpha):
    for i in range(iterations):
        # forward: each layer's activations are computed from the previous layer's
        for l in range(1, len(network)):
            network[l].nodes = activation_function(network[l - 1].nodes @ network[l].weights)
        # backward: we iterate our network in reverse order
        dW = []
        for l in range(len(network) - 1, 0, -1):
            if l == len(network) - 1:  # output layer: calculate the loss derivative
                dL = network[l].nodes - labels
            else:  # propagate the error back through the layer above
                dL = (dL * activation_derivative(network[l + 1].nodes)) @ network[l + 1].weights.T
            # weight gradient for this layer (W100 in the diagrams is the first one computed)
            dW.append(alpha * network[l - 1].nodes.T @ (dL * activation_derivative(network[l].nodes)))
        # update weights: dW is ordered from the last layer down to the first
        for l, delta in zip(range(len(network) - 1, 0, -1), dW):
            network[l].weights -= delta
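
For reference, here is a minimal runnable sketch of the same loop in NumPy. The Layer class, the sigmoid activation, the layer sizes, and the toy dataset are assumptions added for illustration; only the training loop itself comes from the pseudocode above.

import numpy as np

def activation_function(x):
    return 1.0 / (1.0 + np.exp(-x))        # sigmoid (an assumed choice of activation)

def activation_derivative(a):
    return a * (1.0 - a)                   # sigmoid derivative, written in terms of the activation

class Layer:
    """Hypothetical container: incoming weights plus the latest activations."""
    def __init__(self, n_in, n_out, rng):
        self.weights = rng.standard_normal((n_in, n_out)) * 0.5 if n_in else None
        self.nodes = None

def train_network(network, X, labels, iterations, alpha):
    for _ in range(iterations):
        # forward pass
        network[0].nodes = X
        for l in range(1, len(network)):
            network[l].nodes = activation_function(network[l - 1].nodes @ network[l].weights)
        # backward pass, collecting one gradient per layer
        grads = []
        dL = network[-1].nodes - labels    # loss derivative at the output
        for l in range(len(network) - 1, 0, -1):
            grads.append((l, network[l - 1].nodes.T @ (dL * activation_derivative(network[l].nodes))))
            if l > 1:  # propagate the error down to the layer below
                dL = (dL * activation_derivative(network[l].nodes)) @ network[l].weights.T
        # gradient-descent update
        for l, g in grads:
            network[l].weights -= alpha * g

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 3))
labels = rng.uniform(0.2, 0.8, size=(32, 1))       # arbitrary targets inside the sigmoid's range
net = [Layer(0, 3, rng), Layer(3, 8, rng), Layer(8, 1, rng)]
train_network(net, X, labels, iterations=1, alpha=0.1)
loss_before = np.mean((net[-1].nodes - labels) ** 2)
train_network(net, X, labels, iterations=2000, alpha=0.1)
loss_after = np.mean((net[-1].nodes - labels) ** 2)
print(f"MSE before: {loss_before:.4f}, after: {loss_after:.4f}")  # the loss should shrink

Note that all gradients are collected before any weights change: every layer's update must be computed from the activations and errors of the same forward pass.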