This repository outlines what I did to get Stable Diffusion running on a MacBook Pro with an M2 chip.
Exact Specs:

Key | Value |
---|---|
Model | MacBook Pro |
Screen | 14" |
Year | 2023 |
Chip | Apple M2 Max |
Memory | 32 GB |
macOS | Ventura 13.2 |
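If you want to confirm the equivalent details for your own machine, the built-in macOS utilities should report them (field names vary slightly between models).

```sh
# Print hardware details (model, chip, memory) and the macOS version
system_profiler SPHardwareDataType
sw_vers
```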
I'll try to be as exhaustive as possible, but at a high level you'll want the following installed as a generic base before going much further.
- Xcode Command Line Tools
  - run
    ```sh
    xcode-select --install
    ```
- Homebrew
  - run
    ```sh
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    ```
- PyEnv
  - run
    ```sh
    brew install openssl readline sqlite3 xz zlib
    brew install pyenv
    ```
  - Add the updater plugin
  - Install the latest stable version of Python with pyenv (3.10 at the time of writing has the best balance of compatibility and performance enhancements)
    ```sh
    pyenv install 3.10.9
    ```
  - Set it as the global Python interpreter
    ```sh
    pyenv global 3.10.9
    ```
  - run `pyenv init` and follow the prompts to configure your shell properly (see the sketch after this list)
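For reference, here's a minimal sketch of the shell configuration `pyenv init` usually asks for under zsh; the exact lines vary between pyenv versions, so prefer whatever your own `pyenv init` output tells you.

```sh
# Added to ~/.zshrc so pyenv's shims are loaded in new shells
# (assumption: zsh and a recent pyenv; follow `pyenv init` output if it differs)
eval "$(pyenv init -)"
```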
We're going to install the invoke-ai/InvokeAI distribution of Stable Diffusion.
Create the directory structure.

```sh
export INVOKEAI_ROOT=~/invokeai
mkdir -p ${INVOKEAI_ROOT}
cd ${INVOKEAI_ROOT}
```
Set up a virtual environment, activate it, and prep for installation.

```sh
python -m venv .venv --prompt InvokeAI
source .venv/bin/activate
python -m pip install --upgrade pip
```
Before we go further, ensure you're in the environment and running the correct version of Python (macOS ships with an old version, 3.9.6 at the time of writing).

```
➜ python --version
Python 3.10.9
```
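It's also worth double-checking that `python` and `pip` resolve to the virtual environment rather than the pyenv global install; a quick check, assuming the venv lives at `${INVOKEAI_ROOT}/.venv`, looks like this.

```sh
# Both paths should point inside ${INVOKEAI_ROOT}/.venv/bin
which python
which pip
```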
Install InvokeAI into the activated virtual environment and then refresh it so dedicated commands become available.
```sh
pip install InvokeAI --use-pep517
deactivate && source ${INVOKEAI_ROOT}/.venv/bin/activate
```
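Assuming the install completed cleanly, the CLI entry points should now be on the venv's `PATH`; a quick way to confirm before continuing:

```sh
# Both should resolve to ${INVOKEAI_ROOT}/.venv/bin/...
which invokeai
which invokeai-configure
```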
Finally, run the `invokeai-configure` command and follow the prompts to finish the installation.

```sh
invokeai-configure
```
We can also confirm that the environment is configured to use MPS (i.e. Apple Silicon GPU acceleration) by running the following from within a Python shell. See the official Accelerated PyTorch training on Mac page on the Apple website for more details.
```python
import torch

if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    x = torch.ones(1, device=mps_device)
    print(x)
else:
    print("MPS device not found.")
```
You may see a warning around `aten::nonzero`, but the important thing to check is that the following line is printed.

```
tensor([1.], device='mps:0')
```
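If you'd rather not open an interactive shell, a one-line equivalent (my own shorthand, not part of InvokeAI) does the same check.

```sh
# Prints True if PyTorch can see the MPS backend on this machine
python -c "import torch; print(torch.backends.mps.is_available())"
```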
For bleeding edge updates, you can optionally install pytorch and the necessary libraries from the nightly builds.
```sh
pip install --upgrade --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```
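If you do pull the nightlies, you can confirm which build actually landed in the venv (nightly version strings normally carry a `.dev` suffix).

```sh
# Show the installed PyTorch version
python -c "import torch; print(torch.__version__)"
```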
Once all the models are downloaded, your directory structure inside `INVOKEAI_ROOT` should look similar to the one below. I've downloaded everything, so it's about 40 GB of models on disk.
```
➜ tree -L 3 $INVOKEAI_ROOT
├── configs
│   ├── INITIAL_MODELS.yaml
│   ├── models.yaml.example
│   └── stable-diffusion
│       ├── v1-finetune.yaml
│       ├── v1-finetune_style.yaml
│       ├── v1-inference.yaml
│       ├── v1-inpainting-inference.yaml
│       ├── v1-m1-finetune.yaml
│       └── v2-inference-v.yaml
├── embeddings
├── invokeai.init
├── models
│   └── diffusers
│       ├── models--dreamlike-art--dreamlike-diffusion-1.0
│       ├── models--dreamlike-art--dreamlike-photoreal-2.0
│       ├── models--runwayml--stable-diffusion-inpainting
│       ├── models--runwayml--stable-diffusion-v1-5
│       ├── models--stabilityai--sd-vae-ft-mse
│       └── models--stabilityai--stable-diffusion-2-1
├── text-inversion-data
└── text-inversion-training-data

14 directories, 9 files
```
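If you want to see how much space the models are actually taking on disk on your machine, something like the following should do it.

```sh
# Summarise disk usage of the downloaded models
du -sh ${INVOKEAI_ROOT}/models
```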
You can now launch InvokeAI with the web interface.

```sh
invokeai --precision=float32 --web
```
You'll then see a series of print statements including, hopefully, a reference to the use of `mps`.
```
# we can't use half precision unfortunately so specify full
➜ invokeai --precision=float32 --web
* Initializing, be patient...
>> Initialization file ~/invokeai/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0
>> InvokeAI runtime directory is "~/invokeai"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type mps
```
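Once the startup log settles you can open the web UI in a browser; InvokeAI's web server defaults to port 9090, but check the startup output for the exact address in case it differs on your install.

```sh
# Open the web UI in the default browser (assumes the default port hasn't been overridden)
open http://localhost:9090
```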
Assuming the above worked fine, we can add the environment variable `INVOKEAI_ROOT` to our shell profile so that we don't need to remember it. Note the `export` so that the variable is visible to the `invokeai` process rather than just the shell itself.

```sh
echo "export INVOKEAI_ROOT=~/invokeai" >> ~/.zprofile
```