
@SkalskiP
Created November 1, 2018 10:38
Mini-batch gradient descent
def train_batch(X, Y, nn_architecture, epochs, learning_rate, batch_size=64, verbose=False, callback=None):
    # initialize weights and biases for every layer (seed = 2 for reproducibility)
    params_values = init_layers(nn_architecture, 2)
    # history lists (and the verbose/callback arguments) are populated the same way
    # as in the full-batch training loop from the accompanying article
    cost_history = []
    accuracy_history = []

    # Beginning of additional code snippet
    # number of full mini-batches that fit in the training set (samples are stored column-wise)
    batch_number = X.shape[1] // batch_size
    # Ending of additional code snippet

    for i in range(epochs):
        # Beginning of additional code snippet
        # cycle through the mini-batches, processing one batch per iteration
        batch_idx = i % batch_number
        X_batch = X[:, batch_idx * batch_size : (batch_idx + 1) * batch_size]
        Y_batch = Y[:, batch_idx * batch_size : (batch_idx + 1) * batch_size]
        # Ending of additional code snippet

        # forward pass, backward pass, and parameter update on the current mini-batch only
        Y_hat, cache = full_forward_propagation(X_batch, params_values, nn_architecture)
        grads_values = full_backward_propagation(Y_hat, Y_batch, cache, params_values, nn_architecture)
        params_values = update(params_values, grads_values, nn_architecture, learning_rate)

    return params_values
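
For context, a minimal sketch of how train_batch might be called. It assumes the helper functions referenced above (init_layers, full_forward_propagation, full_backward_propagation, update) and the layer-definition format from the accompanying plain-NumPy neural network article; the toy dataset and architecture below are purely illustrative.

    import numpy as np

    # assumed architecture format: each layer gives its input/output size and activation name
    nn_architecture = [
        {"input_dim": 2, "output_dim": 25, "activation": "relu"},
        {"input_dim": 25, "output_dim": 50, "activation": "relu"},
        {"input_dim": 50, "output_dim": 1, "activation": "sigmoid"},
    ]

    # toy dataset stored column-wise: X has shape (n_features, n_samples), Y shape (1, n_samples)
    X = np.random.randn(2, 1000)
    Y = (X[0:1, :] * X[1:2, :] > 0).astype(float)

    params_values = train_batch(X, Y, nn_architecture,
                                epochs=10000, learning_rate=0.01, batch_size=64)

Note that each iteration of the loop updates the parameters on a single mini-batch, so one pass over all batches takes batch_number iterations rather than one epoch in the strict sense.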