@ClementWalter
ClementWalter / Chainlink_VRF_V2_unittest.md
Created March 9, 2022 20:12
How to unit-test with Chainlink VRF V2
@ClementWalter
ClementWalter / interactive_eager_few_shot_od_training_colab.ipynb
Last active January 25, 2021 11:18
interactive_eager_few_shot_od_training_colab.ipynb
@ClementWalter
ClementWalter / tf_serving_heroku.md
Created September 23, 2020 15:42
How to deploy a tensorflow model on Heroku with tensorflow serving

After spending minutes or hours playing around with all the wonderful examples available, for instance on the Google AI Hub, one may want to deploy one model or another online.

This article presents a fast and clean way of doing it with TensorFlow Serving and Heroku.

Introduction

@ClementWalter
ClementWalter / tf_serving_entrypoint.sh
Created September 23, 2020 15:36
Modified to take $PORT env variable
#!/bin/bash
# Same as the stock tf_serving_entrypoint.sh, except that the REST API binds to the $PORT Heroku injects at runtime instead of the hard-coded 8501
tensorflow_model_server --port=8500 --rest_api_port="${PORT}" --model_name="${MODEL_NAME}" --model_base_path="${MODEL_BASE_PATH}"/"${MODEL_NAME}" "$@"
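Once the container is up, a quick way to check that the server picked up the assigned port is to query TensorFlow Serving's model status endpoint. A minimal sketch, assuming a local run with PORT=8501 and MODEL_NAME=classifier:

import requests

# Model status endpoint of the TensorFlow Serving REST API; host and port are assumptions for a local run
status = requests.get("http://localhost:8501/v1/models/classifier")
print(status.json())  # the model version state should be reported as AVAILABLE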
@ClementWalter
ClementWalter / Dockerfile
Last active September 23, 2020 16:08
TF Serving Heroku Dockerfile
FROM tensorflow/serving
ENV MODEL_BASE_PATH /models
ENV MODEL_NAME classifier
COPY models/classifier /models/classifier
# Fix because the base tf_serving_entrypoint.sh does not take the $PORT env variable while $PORT is set by Heroku
COPY tf_serving_entrypoint.sh /usr/bin/tf_serving_entrypoint.sh
# A CMD is required to run on Heroku (the preview is truncated here; pointing it at the copied entrypoint is an assumption)
CMD ["/usr/bin/tf_serving_entrypoint.sh"]
@ClementWalter
ClementWalter / batch_request_served_model.py
Created September 23, 2020 15:29
Perform a batch request against a TensorFlow model served with Docker
import requests

# POST a batch of base64-encoded images to the REST endpoint exposed by TensorFlow Serving
response = requests.post(
    "http://localhost:8501/v1/models/classifier:predict",
    json={
        "signature_name": "serving_default",  # can be omitted, it is the default signature
        "inputs": {
            "image_bytes": [image.numpy().decode("utf-8") for image in image_bytes][:2],  # batch request
        },
    },
)
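The request assumes image_bytes is a list of web-safe base64 strings held in TensorFlow string tensors (hence the .numpy().decode("utf-8")). A minimal sketch of how such a list could be built, with the file name as a placeholder:

import tensorflow as tf

# tf.io.encode_base64 produces the web-safe base64 that tf.io.decode_base64 expects on the serving side
raw = tf.io.read_file("example.jpg")  # placeholder path
image_bytes = [tf.io.encode_base64(raw)]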
import tensorflow as tf

# Export the model with both the serving and the preprocessing signatures
tf.saved_model.save(
    classifier,
    export_dir="classifier/1",
    signatures={
        "serving_default": decode_and_serve,
        "preprocessing": preprocessing,
    },
)
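As a quick sanity check (not shown in the original gist), the exported directory can be reloaded to confirm that both signatures were saved:

import tensorflow as tf

loaded = tf.saved_model.load("classifier/1")
print(list(loaded.signatures))  # expected to contain 'serving_default' and 'preprocessing'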
@ClementWalter
ClementWalter / decode_and_serve.py
Created September 23, 2020 15:20
Signatures to decode a base64 image and serve inference
import tensorflow as tf


@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.string),))
def decode(image_bytes):
    """
    Takes a batch of base64 encoded images and returns the preprocessed input tensor ready for inference.
    """
    # not working on GPU if tf.__version__ < 2.3, see https://github.com/tensorflow/tensorflow/issues/28007
    with tf.device("/cpu:0"):
        # `preprocessing` is expected to return a dict (e.g. a restored SavedModel signature), hence the "output_0" key
        input_tensor = tf.map_fn(
            lambda x: preprocessing(tf.io.decode_jpeg(contents=tf.io.decode_base64(x), channels=3))["output_0"],
            image_bytes,
            # the gist preview is truncated here; the float32 output dtype and the return statement are assumptions
            dtype=tf.float32,
        )
    return input_tensor
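The export step above references a decode_and_serve signature that this preview does not show. A minimal sketch, assuming it simply chains decode with the classifier (the body and the chaining are assumptions):

@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.string),))
def decode_and_serve(image_bytes):
    # decode the base64 batch, then run the classifier on the resulting image batch
    return classifier(decode(image_bytes))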
@ClementWalter
ClementWalter / preprocessing.py
Created September 23, 2020 15:15
Simple image preprocessing
import tensorflow as tf
from tensorflow.keras import applications as keras_applications  # assumed import, not shown in the gist preview


@tf.function(input_signature=(tf.TensorSpec(shape=[None, None, 3], dtype=tf.uint8),))
def preprocessing(input_tensor):
    # cast to float, letterbox-resize to the MobileNet input size, then scale pixel values to [-1, 1]
    output_tensor = tf.cast(input_tensor, dtype=tf.float32)
    output_tensor = tf.image.resize_with_pad(output_tensor, target_height=224, target_width=224)
    output_tensor = keras_applications.mobilenet.preprocess_input(output_tensor, data_format="channels_last")
    return output_tensor
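Assuming the function above, a quick way to check the signature is to run it on a dummy uint8 image:

import tensorflow as tf

dummy_image = tf.zeros([480, 640, 3], dtype=tf.uint8)  # arbitrary HxWx3 placeholder
print(preprocessing(dummy_image).shape)  # (224, 224, 3)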