Tutorial : Use OpenVINO to generate images on Intel hardware

Have you tried to generate images, but it doesn't work because you have Intel hardware?

Are you confused with all the technical terms and just want to create images?

Well, this noob-friendly tutorial is here for you!

This tutorial will guide you through all the steps needed to generate images locally using any model for Stable Diffusion using OpenVINO GenAI.

But first of all, let's define some terms.

OpenVINO is an AI framework made by Intel to make models work on Intel hardware in the most optimized way.

OpenVINO GenAI is a library built on top of OpenVINO that does all the heavy lifting of running generative models. It's what we'll use in this tutorial.
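
To give you a rough idea of what that looks like in code (we'll build the full script together in step 6), here's a minimal sketch. It assumes you already have a model converted to OpenVINO format in a folder called ./my-model (that's exactly what steps 4 and 5 set up) and that the Pillow package is installed :

# Minimal sketch of the OpenVINO GenAI text-to-image API (the full script comes in step 6)
import openvino_genai
from PIL import Image

pipe = openvino_genai.Text2ImagePipeline("./my-model", "CPU")  # "GPU" also works on Intel graphics
image_tensor = pipe.generate("a fox in a forest", num_inference_steps=20)
Image.fromarray(image_tensor.data[0]).save("fox.bmp")  # save the result as a bitmap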

Before we start, let me also point out that the hardware you have will greatly impact the speed at which images will be generated. My laptop with an Intel Core Ultra 7 with integrated graphics takes ~30s to generate a 512x512 image. Image generation takes a lot of power so be sure to have decent hardware, or lots of patience!

Also, this is a rapidly evolving environment with lots of new versions released regularly! This guide may not be completely up-to-date, so always check for newer versions.

Finally, OpenVINO is made to work with Intel hardware. If you don't have Intel hardware (if you don't know, check whether there is an "Intel" branded sticker on your PC!), then this guide will almost certainly not work for you.

Ok, let's start now!


Step 1 : Install the required programs

For this, you'll need the following :

  • Around 100~150GB of storage space (depending on the models you use)
  • Intel hardware (an Intel CPU, GPU or NPU), with its latest drivers. Use this site to automatically check for driver updates.
  • Python (mandatory), make sure to check the box "Add to PATH" in the installer!

💡 Note : At the time of writing, Python 3.12.10 is the latest supported version, but that can change quickly. If errors like Failed to build wheel occur in the next steps, try a different Python version.

  • Git (mandatory)

Then, check if everything is installed correctly by opening a terminal (in Windows search, look for Terminal). Then paste the following commands :

python --version
git --version

If you do NOT get an error (text in red) when running these commands, then congrats, you're ready to move on to the next step!


Step 2 : Prepare your environment

Make a new folder where you would like to put everything related to AI image generation (your models, the Python scripts we'll get in the next steps, etc.).

Next, we'll need to open the terminal to this location.

In your file explorer, open your folder, right-click on an empty spot and click Open in terminal. If you don't see this option, you can also type the following command in your terminal instead :

cd "C:/Path/to/your/AI/folder"

⚠️ Make sure to put the actual path of your folder within the quotation marks!

💡 Tip : To open the terminal on Windows, right-click on the start menu and select Terminal.

Then, in the same terminal window, we'll install OpenVINO GenAI!


Step 3 : Install OpenVINO GenAI

Here's the official guide on how to install it in case mine does not work anymore.

So, in that same terminal window, run the command :

Get-ExecutionPolicy

If it returns either AllSigned, RemoteSigned, Bypass or Unrestricted, skip ahead to the python -m venv commands below. If it returns either Default, Restricted, or Undefined, follow these extra steps first to update the execution policy :

Open a new **administrator** terminal. Don't close the one you already have open; right-click on the start menu, then select `Terminal (administrator)`.

Then run the following command in this new terminal window :

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine

If there is no error, run this command again :

Get-ExecutionPolicy

You should see that it returns RemoteSigned. That's great! We just changed the execution policy!

Now CLOSE the administrator terminal (the one you just typed the commands in), and go back to your other window which is the regular terminal.

In your terminal window, run the following :

python -m venv openvino_env
openvino_env\Scripts\activate
python -m pip install --upgrade pip
python -m pip install openvino-genai
python -m pip install optimum-intel@git+https://github.com/huggingface/optimum-intel.git
echo "✅ Install finished!"

You can paste them all at once by copying the code block above and right-clicking in your terminal window. If it asks you "Do you want to run software from this untrusted publisher?", hit the R key, then Enter, to run it. Wait for it to finish; it can take a while depending on your internet connection!

What we just did is :

  • Create a virtual environment for Python (so packages are installed only inside this environment instead of changing your whole system's configuration)
  • Activate that virtual environment (indicated by the green (openvino_env) prefix in the terminal; see the quick check below)
  • Install / upgrade pip (Python's package manager) and install OpenVINO GenAI
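
If you want to double-check that the virtual environment is really active (the quick check mentioned above), this one-liner prints True when Python is running inside a venv :

python -c "import sys; print(sys.prefix != sys.base_prefix)"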

To check if everything was set up correctly, run the following command in the terminal :

python -c "from openvino import Core; print(Core().available_devices)"

You should see something like ['CPU', 'GPU']. The exact list may vary depending on your hardware, but if you get an error, then either something went wrong or this guide is outdated. In that case, please leave a comment with the error message and don't continue any further, because the next steps won't work.
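
If you're curious which physical device each entry corresponds to, you can also ask OpenVINO for the full device names. This is optional and just for information; it reads the FULL_DEVICE_NAME property of each device :

python -c "from openvino import Core; core = Core(); print({d: core.get_property(d, 'FULL_DEVICE_NAME') for d in core.available_devices})"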

So, we're ready to download the model!


Step 4 : Download a model

To generate images, we need a model. It's basically a big set of trained data that knows what everything looks like.

For most images, Stable Diffusion works great, but if you want specific styles you can always find other ones at https://huggingface.co/

For this guide, we'll use the Stable Diffusion XL model, but if you want to use another one, note its URL on Hugging Face and replace the one used in this guide with your own.

So first, we need to download the model. For that, still in your terminal window, paste in this command :

git clone "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0"

It will take a long time to download, so make sure you have a good internet connection! Sometimes it will sit at 100% but you still need to wait until it lets you type in the terminal again, which means it's done.

💡 Tip : It seems to fail after about an hour of downloading. If that happens, you can try running it with git lfs clone instead :

git lfs clone "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0"
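
💡 Alternative : if git keeps failing, you can also download the model from Python with the huggingface_hub library (it normally comes along with the packages from step 3; if not, run pip install huggingface_hub first). This is just a sketch, and the target folder name is up to you :

python -c "from huggingface_hub import snapshot_download; snapshot_download('stabilityai/stable-diffusion-xl-base-1.0', local_dir='./stable-diffusion-xl-base-1.0')"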

After this, a new folder inside your AI folder should have been created with the name of the model you have downloaded.


Step 5 : Convert and optimize the model for your hardware

We'll then convert the model to make sure it runs on your hardware!

We'll first install a few extra packages : Diffusers, which is required for the conversion, and accelerate, which makes the operation faster. Run the following command :

pip install diffusers accelerate

Then, take a look at the following command, but don't put it in the terminal yet!

optimum-cli export openvino --model "./stable-diffusion-xl-base-1.0/" --weight-format fp16 "./stable-diffusion-xl-base-1.0-openvino" --task stable-diffusion

This is a bit complex, so the main things you need to know are : "./stable-diffusion-xl-base-1.0/" should be replaced with your model's folder, and "./stable-diffusion-xl-base-1.0-openvino" is the folder the converted model will go into. Don't create that folder yourself! It will be made for you.

💡 Tip : You can start typing the name of the folder then press TAB to autocomplete and cycle through all the files and folders beginning with what you typed!

You can enter this command, or, even better, you can optimize the model using INT8 hybrid quantization (whatever that means, just know it's faster haha). If you plan to generate lots of images, you should enter the commands below instead. It takes longer, around an hour on my hardware, but it only happens once and your images will then be generated faster.

pip install nncf
optimum-cli export openvino --model "./stable-diffusion-xl-base-1.0/" --weight-format int8 --dataset conceptual_captions "./stable-diffusion-xl-base-1.0-optimized" --task stable-diffusion

So pick one of the options above and run it in the terminal!
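
By the way, if the command-line export gives you trouble, the same conversion can also be done from a small Python script using the optimum-intel package we installed in step 3. This is just a sketch of the plain (non-quantized) conversion with the same folder names as above; save it as a file like convert.py in your AI folder and run it with python convert.py :

# Sketch: convert the model to OpenVINO format from Python instead of the CLI
# (plain conversion, without the int8 optimization)
from optimum.intel import OVStableDiffusionXLPipeline

pipe = OVStableDiffusionXLPipeline.from_pretrained(
    "./stable-diffusion-xl-base-1.0",  # the folder you cloned in step 4
    export=True,                       # convert the weights to OpenVINO format on load
)
pipe.save_pretrained("./stable-diffusion-xl-base-1.0-openvino")  # write the converted model here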


Step 6 : Create your Python file to create your image

Here are the docs for this part in case this guide gets outdated

Download the files called text2image.py and requirements.txt that you can find below this guide, put them in your AI folder, and open text2image.py with a code editor such as Visual Studio Code (drag and drop the file into the window, or File > Open).

You can edit the file to your liking, mainly the following points :

  • device can be set to any of the values you got when you ran the code snippet at the end of step 3, so either "GPU" or "CPU". You want to pick GPU if available; CPU will also work but is generally slower.
  • width and height (in the pipe.generate(...) call) change the size of the image.
  • num_inference_steps is how many times the AI will refine its image. Higher means better quality but takes longer; low numbers can lead to blurry images.
  • seed is currently set to a random number. When you use the same seed and the same prompt multiple times, the output will always be the same, and a seed close to one you used before gives a similar image. You can change it to any fixed number you want, or keep it as-is to make it random every time. An example of these settings is shown right after this list.
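
For example, this is roughly what the relevant part of the script looks like with a fixed seed and slightly different settings (the values here are picked purely for illustration) :

device = 'GPU'            # or 'CPU', depending on what step 3 reported
seed = 42                 # a fixed number: same seed + same prompt = same image every time

image_tensor = pipe.generate(
    args.prompt,
    negative_prompt=args.negative_prompt,
    width=768,                # image width in pixels
    height=768,               # image height in pixels
    num_inference_steps=60,   # more steps = better quality, but slower
    num_images_per_prompt=1,
    rng_seed=seed)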

Then you're ready to go! Don't forget to save your file with ctrl+S and go on to the last step!


Step 7 : Write your prompt!

We're almost there! Now we'll go back to the previously opened terminal window and run the following command :

pip install -r "requirements.txt"

This will install the dependencies required for OpenVINO to work.

Then finally, take a look at this command :

python text2image.py "./stable-diffusion-xl-base-1.0-optimized" "fox, sitting, forest background, night sky, hand-drawn" "bad quality, low res"

In this command, there are 3 elements in quotation marks :

  • The path to your converted or optimized model; replace it with the one you got in step 5.
  • The prompt : what you want to see in the image. Build it with keywords separated by commas to tell the AI exactly what you want! Use parentheses to emphasize a term if the AI doesn't seem to take it into account, for example (night sky). An example command is shown right after this list.
  • The negative prompt : what you DON'T want to see in the image. The same rules as for the prompt apply.
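
For example, here is the same command with the night sky emphasized and an extra term in the negative prompt :

python text2image.py "./stable-diffusion-xl-base-1.0-optimized" "fox, sitting, forest background, (night sky), hand-drawn" "bad quality, low res, blurry"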

So, craft the perfect prompt, hit enter and wait for your image!


🎉 Congratulations! You have made your first image with OpenVINO!!

💡 Tip : If you want to redo the previous command, for example to change something in the prompt, press the up arrow in your terminal.

💡 Tip : If you close your terminal and want to come back to create images, run the following commands in a new terminal window :

cd "C:/Path/to/your/AI/folder"
openvino_env\Scripts\activate
python text2image.py "./stable-diffusion-xl-base-1.0-optimized" "fox, sitting, forest background, night sky, hand-drawn" "bad quality, low res"

If this guide was helpful to you, don't forget to give it a star by clicking on the star icon at the top right of this page so it gets referenced better by search engines!

Have a great day!

🦊

requirements.txt :

--extra-index-url https://download.pytorch.org/whl/cpu
--extra-index-url https://storage.openvinotoolkit.org/simple/wheels/pre-release
--extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
openvino-tokenizers~=2025.2.0.0.dev
optimum-intel @ git+https://github.com/huggingface/optimum-intel.git@main
numpy<2.0.0; sys_platform == 'darwin'
einops==0.8.1 # For Qwen
transformers_stream_generator==0.0.5 # For Qwen
diffusers==0.32.2 # For image generation pipelines
timm==1.0.15 # For exporting InternVL2
torchvision # For visual language models
transformers>=4.43 # For Whisper
hf_transfer # For faster model downloads; should be used with env var HF_HUB_ENABLE_HF_TRANSFER=1
colorama
text2image.py :

#!/usr/bin/env python3
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import openvino_genai
from PIL import Image
from colorama import Fore, Back, Style
import time
import random


def main():
    # Read the model folder, prompt and negative prompt from the command line
    parser = argparse.ArgumentParser()
    parser.add_argument('model_dir')
    parser.add_argument('prompt')
    parser.add_argument('negative_prompt')
    args = parser.parse_args()

    device = 'GPU'  # CPU can be used as well
    pipe = openvino_genai.Text2ImagePipeline(args.model_dir, device)

    # Set the seed for reproducibility
    seed = random.randint(0, 2**32 - 1)
    print(Fore.GREEN + f'Using seed: {seed}')
    print(Fore.CYAN + f'Prompt: {args.prompt}')
    print(Fore.YELLOW + f'Negative prompt: {args.negative_prompt}')
    print(Style.RESET_ALL)
    print('Generating image...')

    # Run the text-to-image pipeline
    image_tensor = pipe.generate(
        args.prompt,
        negative_prompt=args.negative_prompt,
        width=512,
        height=512,
        num_inference_steps=45,
        num_images_per_prompt=1,
        rng_seed=seed)

    # Save the generated image with a timestamped filename
    image = Image.fromarray(image_tensor.data[0])
    filename = f"image_{int(time.time())}_{int(seed)}.bmp"
    image.save(filename)
    print(Fore.GREEN + f'Image saved as {filename}')
    print(Style.RESET_ALL)


if '__main__' == __name__:
    main()