@ShawnHymel
Created July 2, 2024 18:55
tflite-runtime example

To perform object detection inference with a TensorFlow Lite model (.tflite) on a JPG image using tflite-runtime, you need to install the necessary packages, load the model, preprocess the input image, run inference, and handle the output. Here's a comprehensive guide:

1. Install tflite-runtime and Image Processing Library

You'll need to install tflite-runtime and Pillow for image processing. If you haven't installed these, you can do so using pip:

pip install tflite-runtime pillow

2. Prepare Your Model and Image

Ensure you have a trained TensorFlow Lite model file (.tflite) and a JPG image ready for detection.

3. Load the TensorFlow Lite Model

You'll need to load your TFLite model and prepare the interpreter:

import tflite_runtime.interpreter as tflite
from PIL import Image
import numpy as np

# Load TFLite model and allocate tensors.
interpreter = tflite.Interpreter(model_path='path_to_your_model.tflite')
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
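Each entry in these lists is a dict describing one tensor. A sketch of the fields this guide uses, with mock values standing in for a real model (the 300x300 shape is just an example; real entries contain additional keys):

```python
import numpy as np

# Mock of one entry from interpreter.get_input_details() -- real entries
# contain more keys, but these are the ones used in this guide
mock_input_detail = {
    'index': 0,                           # tensor index to pass to set_tensor()
    'shape': np.array([1, 300, 300, 3]),  # NHWC: [batch, height, width, channels]
    'dtype': np.float32,                  # np.uint8 for quantized models
}

# PIL's Image.resize() wants (width, height), hence shape[2] then shape[1]
target_size = (mock_input_detail['shape'][2], mock_input_detail['shape'][1])
print(target_size)  # (300, 300)
```

The NHWC layout is why the preprocessing step below indexes `shape[2]` for width and `shape[1]` for height.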

4. Prepare the Image

You need to preprocess your image to match the input requirements (size, scaling) of the model:

# Load and preprocess an image (convert to RGB in case the JPG is grayscale)
image = Image.open('path_to_your_image.jpg').convert('RGB').resize(
    (input_details[0]['shape'][2], input_details[0]['shape'][1])
)

# Convert the image to a numpy array with a batch dimension
input_data = np.expand_dims(np.array(image), axis=0)

# Float models typically expect values scaled to [0, 1]; quantized (uint8)
# models take the raw pixel values -- check your model's expected dtype
if input_details[0]['dtype'] == np.float32:
    input_data = input_data.astype(np.float32) / 255.0
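The dtype handling generalizes into a small helper. A minimal sketch, pure NumPy with a simulated image in place of a decoded JPG (the [0, 1] scaling for float models is the common convention, but some models expect other ranges, such as [-1, 1]):

```python
import numpy as np

def prepare_input(image_array, input_dtype):
    """Batch a uint8 HxWx3 image array and match the model's input dtype.

    Float models conventionally expect values in [0, 1]; quantized models
    take the raw uint8 pixels unchanged. (Convention assumed here -- check
    your model's documentation.)
    """
    batched = np.expand_dims(image_array, axis=0)
    if input_dtype == np.float32:
        return batched.astype(np.float32) / 255.0
    return batched  # quantized model: pass pixels through unchanged

# Simulated 4x4 RGB image standing in for a decoded JPG
fake_image = np.full((4, 4, 3), 255, dtype=np.uint8)

float_input = prepare_input(fake_image, np.float32)
print(float_input.shape)  # (1, 4, 4, 3)
print(float_input.max())  # 1.0

quant_input = prepare_input(fake_image, np.uint8)
print(quant_input.dtype)  # uint8
```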

5. Perform Inference

Set the processed image as the input to the model and run the interpreter:

# Set the tensor to point to the input data to be inferred
interpreter.set_tensor(input_details[0]['index'], input_data)

# Run the inference
interpreter.invoke()
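If latency matters (as it often does on edge devices), wrapping invoke() with a timer is enough for a first measurement. A sketch using a stand-in function so it runs without a model; `fake_invoke` is hypothetical and would be `interpreter.invoke()` in real code:

```python
import time

def fake_invoke():
    """Stand-in for interpreter.invoke() so this sketch runs without a model."""
    time.sleep(0.01)

start = time.perf_counter()
fake_invoke()  # replace with interpreter.invoke() in real code
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Inference took {elapsed_ms:.1f} ms")
```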

6. Handle the Output

The output details depend on your model; the number, order, and meaning of the output tensors can vary, so verify them against output_details. For standard TFLite SSD detection models, the first three outputs are boxes, classes, and scores:

# Retrieve detection results (output ordering varies between models)
boxes = interpreter.get_tensor(output_details[0]['index'])[0]  # Box coordinates, typically [ymin, xmin, ymax, xmax] normalized to [0, 1]
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # Class index of detected objects
scores = interpreter.get_tensor(output_details[2]['index'])[0]  # Confidence of detected objects

# Print detected objects with confidence higher than a threshold
threshold = 0.5
for i in range(len(scores)):
    if scores[i] > threshold:
        print(f"Detected object {int(classes[i])} at {boxes[i]} with confidence {scores[i]}")
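To draw the detections, the normalized box coordinates need to be rescaled to pixels. A standalone sketch with mock detection outputs in place of the interpreter results (the [ymin, xmin, ymax, xmax] ordering is the common convention for TFLite SSD models, but verify it for yours):

```python
import numpy as np

# Mock outputs standing in for interpreter.get_tensor() results
boxes = np.array([[0.1, 0.2, 0.5, 0.6],   # [ymin, xmin, ymax, xmax], normalized
                  [0.0, 0.0, 0.9, 0.9]])
classes = np.array([1.0, 3.0])
scores = np.array([0.85, 0.30])

img_width, img_height = 640, 480
threshold = 0.5

for box, cls, score in zip(boxes, classes, scores):
    if score > threshold:
        ymin, xmin, ymax, xmax = box
        left, top = int(xmin * img_width), int(ymin * img_height)
        right, bottom = int(xmax * img_width), int(ymax * img_height)
        print(f"class {int(cls)}: ({left}, {top}) to ({right}, {bottom}), score {score:.2f}")
        # class 1: (128, 48) to (384, 240), score 0.85
```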

Notes:

  • Model Input/Output: Make sure to check your model's expected input dimensions, data type, and output format. Adjust the image preprocessing and output processing accordingly.
  • Performance: tflite-runtime is optimized for running on various devices, including low-power edge devices.
  • Edge Devices: If deploying on an edge device, ensure your device's environment supports the necessary installations.
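One related point the guide leaves out: the class indices printed above are only meaningful with a label map. Models trained on COCO usually ship with a labelmap.txt containing one label per line, though the filename and format here are assumptions; check what your model provides. A sketch using an in-memory stand-in for the file:

```python
from io import StringIO

# Stand-in for open('labelmap.txt') so this sketch runs without a file;
# contents here are the first few COCO labels as an illustration
fake_labelmap = StringIO("person\nbicycle\ncar\nmotorcycle\n")

labels = [line.strip() for line in fake_labelmap if line.strip()]
print(labels[2])  # car
```

With a real file, `labels[int(classes[i])]` (possibly offset by one, depending on whether the model reserves index 0 for background) turns a class index into a readable name.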

This guide should enable you to perform object detection on a JPG image using a TensorFlow Lite model with tflite-runtime.

@freespace

Thank you for this! Way more useful than Google's own documentation.

@ShawnHymel (Author)

> Thank you for this! Way more useful than Google's own documentation.

Glad it helped!
