@MattChanTK
Last active November 7, 2016 08:49
'''
-------------------
Classification Test
-------------------
'''
import numpy as np

test_minibatch_size = 1000
sample_count = 0
test_results = []
while sample_count < num_test_samples:
    # Read the next minibatch, clipped so we never read past the test set
    minibatch = test_minibatch_source.next_minibatch(min(test_minibatch_size, num_test_samples - sample_count))
    # Map the model's input variables to the actual minibatch data to be tested
    data = {input_vars: minibatch[test_features],
            labels: minibatch[test_labels]}
    # Evaluate the model on this minibatch; returns the average error rate
    eval_error = trainer.test_minibatch(data)
    test_results.append(eval_error)
    sample_count += data[labels].num_samples

# Print the average of the evaluation errors over all test minibatches
print("Average errors of all test minibatches: %.3f%%" % (float(np.mean(test_results, dtype=float)) * 100))
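# Note on the averaging above: a minimal sketch (plain NumPy, no CNTK; all
# sizes and error values below are hypothetical) showing that when the test
# minibatches differ in size (e.g. the last one is smaller), a plain mean of
# per-minibatch error rates differs from the sample-weighted mean, which is
# the true per-sample error rate.
#
# import numpy as np
#
# minibatch_sizes = [1000, 1000, 500]    # hypothetical: last minibatch is smaller
# minibatch_errors = [0.02, 0.04, 0.10]  # hypothetical per-minibatch error rates
#
# # Plain mean treats every minibatch equally, regardless of its size
# plain_mean = float(np.mean(minibatch_errors))
#
# # Sample-weighted mean weights each minibatch by its number of samples
# weighted_mean = float(np.average(minibatch_errors, weights=minibatch_sizes))
#
# print("Plain mean:    %.3f%%" % (plain_mean * 100))
# print("Weighted mean: %.3f%%" % (weighted_mean * 100))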