Use the latest CUDA version supported by PyTorch: https://pytorch.org/get-started/locally/
CUDA Downloads: https://developer.nvidia.com/cuda-toolkit-archive
Note: The CUDA installer may require the NVIDIA App; if you still have GeForce Experience, launch it to upgrade to the NVIDIA App. The CUDA install may also require a reboot.
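Before picking a toolkit version, you can check the highest CUDA version your installed driver supports (this assumes the NVIDIA driver is already installed):
# Reports the driver version and the maximum CUDA version the driver supports
nvidia-smi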
# Validate active CUDA version
nvcc --version
Use the latest Python version supported by PyTorch/xformers. Currently: 3.12.10 (https://www.python.org/downloads/release/python-31210/). Why this version? Because cp312 is the latest wheel tag available at https://download.pytorch.org/whl/xformers/
Note: Keep track of the folder Python installs to; you will need it later.
Default 3.12 Path: C:\Users\<USERNAME>\AppData\Local\Programs\Python\Python312
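To confirm which Python versions are installed and where they live (assumes the python.org installer added the py launcher, which it does by default):
# List installed Python interpreters with their paths
py -0p
# Confirm the 3.12 interpreter responds
py -3.12 --version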
Install MSYS2 from https://www.msys2.org/
Run these commands in the MSYS2 terminal, agreeing to defaults for both commands:
pacman -S mingw-w64-x86_64-gcc
pacman -S mingw-w64-x86_64-toolchain
Launch: "Edit environment variables for your account" - Edit "Path" Entry - Add C:\msys64\mingw64\bin - Remove any old MinGW paths if they exist
# Remove conflicting gcc/g++ binaries from old toolchains, if any exist
# Warning: this deletes the executables Get-Command finds; confirm they are not the new C:\msys64\mingw64\bin ones first
Get-Command gcc, g++ -ErrorAction SilentlyContinue | Remove-Item -Force
# Validate that gcc and g++ now resolve to C:\msys64\mingw64\bin
Get-Command gcc
Get-Command g++
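Optionally, run a quick smoke test to confirm the toolchain actually compiles (a sketch; writes a throwaway file to %TEMP%):
# Compile and run a trivial C program, then print its exit code
Set-Content -Path "$env:TEMP\hello.c" -Value 'int main(void){return 0;}'
gcc "$env:TEMP\hello.c" -o "$env:TEMP\hello.exe"
& "$env:TEMP\hello.exe"
echo "exit code: $LASTEXITCODE"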
# Clone Repo
git clone https://github.com/ostris/ai-toolkit.git
cd ai-toolkit
# Use your Python 3.12 path
C:\Users\<USERNAME>\AppData\Local\Programs\Python\Python312\python -m venv venv
# Activate the venv. Run this in every new shell that will install or load dependencies via python/pip.
# If the pip commands below complete without errors, you won't need to rerun this activate command in this shell.
.\venv\Scripts\activate
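To confirm the venv is active, python should now resolve to the interpreter inside ai-toolkit\venv:
# Both should point into the venv folder
Get-Command python | Select-Object -ExpandProperty Source
python -c "import sys; print(sys.executable)"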
# Replace the matching lines in requirements.txt with the entries below before proceeding (a scripted sketch follows the list)
# torch
# torchvision
# torchao
# transformers>=4.49.0
# pytorch-wavelets>=1.3.0
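If you'd rather script the edit than change requirements.txt by hand, here is a sketch (assumes each of these packages sits on its own line in requirements.txt; review the file afterwards before installing):
# Rewrite the matching requirement lines to the entries listed above
$entries = [ordered]@{
    'torch'            = 'torch'
    'torchvision'      = 'torchvision'
    'torchao'          = 'torchao'
    'transformers'     = 'transformers>=4.49.0'
    'pytorch-wavelets' = 'pytorch-wavelets>=1.3.0'
}
(Get-Content requirements.txt) | ForEach-Object {
    $line = $_
    foreach ($name in $entries.Keys) {
        if ($line -match "^$name\b") { $line = $entries[$name]; break }
    }
    $line
} | Set-Content requirements.txt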
pip install --no-cache-dir torch torchvision --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt
pip install --upgrade setuptools packaging
.\venv\Scripts\python.exe -m pip install --upgrade pip
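After the installs finish, it's worth sanity-checking that the CUDA build of PyTorch landed in the venv (run with the venv still active):
# Should print the torch version, True, and the CUDA version it was built against
.\venv\Scripts\python.exe -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.version.cuda)"
# If xformers is listed in requirements.txt, this shows which wheel was installed
.\venv\Scripts\python.exe -m pip show xformers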
cd ui
# Note: Requires Node.js > v18; install it from https://nodejs.org/en/download if it's not available
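To check whether Node.js is already installed and recent enough:
# Should print v18 or newer
node --version
npm --version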
npm run build_and_start
Open the ai-toolkit UI at http://localhost:8675
Tip: Set low_vram to true for most LoRA training scenarios (unless you have 24GB of free VRAM on your GPU).