# Is it possible to utilize Keras callbacks to encapsulate the logic? Yes.
#
# We decouple feeding inputs from StagingArea.put() - both can be called in
# a separate Session.run(). Thus there's no need to hack Keras inputs too much.
# Instead, in one run() we assign a numpy array to a Variable (via feed_dict)
# and in another run() we perform StagingArea.put().
#
# We make a callback PrefetchCallback which performs the initial assign and put()
# in its on_epoch_begin() method. Then in each on_batch_begin() it just runs an
# assign. Finally, get() and put() are run by Keras in the training function.

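A minimal sketch of that callback structure, with the Session.run() calls abstracted into plain callables so the control flow is visible. `run_assign`, `run_put`, and `batches` are hypothetical stand-ins (not from the original code) for `Session.run(assign_op, feed_dict=...)`, `Session.run(stage_put_op)`, and a batch iterator:

```python
# Sketch of the PrefetchCallback logic described above; the TF session and
# ops are replaced by callables, so only the scheduling is shown.

class PrefetchCallback:
    """Keeps the StagingArea one batch ahead of the training step."""

    def __init__(self, run_assign, run_put, batches):
        self.run_assign = run_assign  # feeds a numpy batch into the Variable
        self.run_put = run_put        # copies the Variable into the StagingArea
        self.batches = batches        # iterator over numpy batches

    def on_epoch_begin(self):
        # Prime the pipeline: assign the first batch and stage it,
        # so get() has something to return on the first training step.
        self.run_assign(next(self.batches))
        self.run_put()

    def on_batch_begin(self):
        # Only assign the next batch here; get() and put() are run by
        # Keras inside its own training function.
        try:
            self.run_assign(next(self.batches))
        except StopIteration:
            pass  # last batch: nothing left to prefetch
```

With recording callables one can check that the epoch start performs assign+put and each batch start performs only an assign.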
import numpy as np
import tensorflow as tf

# - prevent the variable from being used as a model parameter: trainable=False, collections=[]
# - allow a dynamic variable shape (for the last batch): validate_shape=False
placeholder = tf.placeholder(dtype=tf.float32, shape=(None, 10))
variable = tf.Variable(placeholder, trainable=False, collections=[], validate_shape=False)
with tf.Session() as sess:
    sess.run(variable.initializer, feed_dict={placeholder: np.arange(20).reshape(2, 10)})
    print('shape:', sess.run(tf.shape(variable)))
    print(sess.run(variable))

# Does StagingArea support batches of variable size? Yes.
#
# The training or validation set might not be exactly divisible by the batch
# size, so the last batch might be smaller. We can either ignore it
# (incorrect with respect to the loss) or provide batches of variable size.
# On the other hand we'd like to ensure the data points have the same shape.
#
# It turns out
import numpy as np

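The splitting itself is easy to see with plain NumPy, independently of StagingArea: when the dataset size isn't divisible by the batch size, the last slice simply comes out smaller while each data point keeps its shape. A small sketch:

```python
import numpy as np

def batches(x, batch_size):
    """Yield consecutive batches along axis 0; the last one may be smaller."""
    for i in range(0, len(x), batch_size):
        yield x[i:i + batch_size]

x = np.arange(50).reshape(25, 2)  # 25 samples of shape (2,), batch size 10
sizes = [len(b) for b in batches(x, 10)]
print(sizes)  # -> [10, 10, 5]
```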
# Example of pipelining with StagingArea with tf.tuple().
#
# In particular it shows a trick for grouping the get() and put()
# operations using tf.tuple(), so that we have a single operation that
# works like a semicolon operator: first it performs put(), then get(),
# and returns the tensor output of get().
#
# We could possibly use that compound operation as a Keras model input
# so that we don't need to modify K.function() to pass additional
# fetches to Session.run() explicitly. However, it's not certain whether this

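Stripped of the graph machinery, the "semicolon operator" shape is just: run the side effect, then return the other value. A trivial plain-Python analogue (the names are illustrative, not TF API) of what tf.tuple() with control dependencies achieves at graph level:

```python
def put_then_get(put, get):
    """Semicolon-like combinator: perform put() for its side effect,
    then evaluate and return get() - analogous to grouping the two
    StagingArea ops into one fetchable operation."""
    put()
    return get()

buffer = []
value = put_then_get(lambda: buffer.append("staged"), lambda: buffer[-1])
print(value)  # -> staged
```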
# Works! In this snippet we're able to get a batch from the StagingArea
# and in parallel put another batch there, which is loaded via
# feed_dict (provided as a tf.placeholder wrapped as a Keras Input).
#
# So far it doesn't handle splitting batches, handling the pipeline
# boundaries correctly, or switching between the training/validation set.
#
# But at least it's a proof of concept that it's possible to use
# StagingArea with Keras.
#

# This is a minimal example of using TensorFlow's StagingArea with Keras,
# with the goal of implementing double-buffering of input batches on the GPU.
#
# Basically we want to have an input batch ready in GPU memory when the batch
# computation starts, and to copy another batch in parallel. This should avoid
# waiting for the host-device memcpy and allow better saturation of the GPU
# compute. StagingArea is a queue implementation that can have its buffer
# stored in GPU memory.
#
# https://www.tensorflow.org/api_docs/python/tf/contrib/staging/StagingArea

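The double-buffering idea itself can be sketched without TensorFlow, using a size-1 queue between a producer (standing in for the host-to-device copy) and a consumer (standing in for the compute step). This only illustrates the overlap, not the StagingArea API:

```python
import queue
import threading

def producer(batch_list, staging):
    # Stands in for the host-to-device memcpy: stage each batch ahead of time.
    for b in batch_list:
        staging.put(b)    # blocks while the one-slot buffer is full
    staging.put(None)     # sentinel: no more batches

def consume(batch_list):
    # One batch buffered ahead, like a capacity-1 StagingArea.
    staging = queue.Queue(maxsize=1)
    threading.Thread(target=producer, args=(batch_list, staging),
                     daemon=True).start()
    results = []
    while True:
        b = staging.get()  # the next batch is (ideally) already staged
        if b is None:
            break
        results.append(b * 2)  # stands in for the GPU compute on the batch
    return results

print(consume([1, 2, 3]))  # -> [2, 4, 6]
```

While the consumer works on batch `n`, the producer is already staging batch `n + 1`, which is exactly the overlap the GPU version is after.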
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Dense, Input
from keras.utils import to_categorical

num_classes = 10

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255

# It's possible to serialize a music21 stream to pickle.
# But for 1 kB of MIDI the pickle is 89 kB (!), and after gzip only 8 kB.
# The benefit can be a lower loading time compared to parsing MIDI,
# at the cost of roughly 8x the file size.
import music21

stream = music21.converter.parse("foo.mid")
pickled = music21.converter.freezeStr(stream)
with open("foo.pickle", "wb") as f:
    f.write(pickled)

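The gzip step mentioned above is plain stdlib; a sketch on arbitrary pickle bytes (no music21 needed to see the effect, and the repetitive list here merely stands in for the music21 pickle):

```python
import gzip
import pickle

# Any repetitive pickled object compresses well; this stands in for the
# music21 pickle, which shrank roughly 89 kB -> 8 kB in the note above.
data = pickle.dumps(["note"] * 1000)
compressed = gzip.compress(data)
print(len(compressed) < len(data))  # -> True

# Round-trip: decompress and unpickle.
restored = pickle.loads(gzip.decompress(compressed))
```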
// list properties on an object - poor man's reflection
for (var prop in curScore) {
    console.log(prop)
}
//// properties of Ms::Score:
// addText
// appendMeasures
// appendPart
// composer
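The same for...in trick works on any plain JavaScript object; a minimal stand-alone version (`score` here is a made-up object playing the role of MuseScore's `curScore`):

```javascript
// Poor man's reflection on an ordinary object; in the MuseScore plugin
// above, curScore plays this role.
var score = { composer: "", appendPart: function () {}, addText: function () {} };
var props = [];
for (var prop in score) {
  props.push(prop);
}
console.log(props.sort().join(", "));  // -> "addText, appendPart, composer"
```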