# Note: This will only work on Navi21 GPUs (6800/6900+).
# See: https://github.com/RadeonOpenCompute/ROCm/issues/1668#issuecomment-1043994570

# Install Conda (latest from https://docs.conda.io/en/latest/miniconda.html#linux-installers)
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh
bash Miniconda3-py39_4.12.0-Linux-x86_64.sh
# follow the prompts to install it, and run `conda` to make sure it's working.

# Install git and curl, and clone the stable-diffusion repo
sudo apt install -y git curl
cd Downloads
git clone https://github.com/CompVis/stable-diffusion.git

# Install dependencies and activate environment
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm

# Download Stable Diffusion weights
curl https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media > sd-v1-4.ckpt

# Symlink the weights into place
mkdir -p models/ldm/stable-diffusion-v1/
ln -s -r sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt

# Install AMD ROCm support
wget https://repo.radeon.com/amdgpu-install/22.10/ubuntu/focal/amdgpu-install_22.10.50100-1_all.deb
sudo apt-get install ./amdgpu-install_22.10.50100-1_all.deb
sudo amdgpu-install --usecase=dkms,graphics,rocm,lrt,hip,hiplibsdk
# make sure you see your GPU by running rocm-smi

# Make AMD GPU work with ROCm
cd stable-diffusion/
conda remove cudatoolkit -y
pip3 uninstall torch torchvision -y

# Install PyTorch ROCm
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.0
pip3 install transformers==4.19.2 scann kornia==0.6.4 torchmetrics==0.6.0

# Generate an image
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
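As a quick sanity check before generating (a minimal sketch, assuming the ROCm wheel installed cleanly; the exact versions printed will differ), you can confirm that this PyTorch build was compiled against HIP and can see the card:

# The ROCm build reports a HIP version, and the GPU appears through the usual CUDA-style API.
# If the second line prints True and rocm-smi shows your card, the stack is wired up.
python -c "import torch; print(torch.version.hip); print(torch.cuda.is_available())"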
Please see the note at the top; unfortunately ROCm only seems to work on the 6800/6900 XT right now :(
Ohh you are right! The 6700xt uses Navi22.
Thanks for the answer.
@clasen The "Killed" error is most likely due to low RAM. I've seen that error often when trying to run SD on Linux with 8 GB of RAM. I had to bump up my Ubuntu VM to 12 GB of RAM to get SD to run.
The model file is a compressed 4 GB, and it initially expands to occupy a lot of RAM (during decompression), before settling back to below-8 GB usage.
That said, even if you get more RAM, it may fail anyway due to the 6800/6900 XT requirement @geerlingguy mentioned. Sorry :)
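If you want to confirm it really is the kernel's out-of-memory killer (a hedged sketch; the swap size and path here are just examples), check the kernel log, and as a stopgap add a swap file so the load/decompress phase has somewhere to spill:

# See whether the kernel killed the python process for running out of memory
sudo dmesg | grep -iE "out of memory|oom-kill"
free -h

# Stopgap: an 8 GB swap file (slow, but may be enough to get past model loading)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile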
@cmdr2 Thanks for your feedback. I think what you say makes sense; I was aware that it could be a problem.
I'm going to try to get more memory. I'm anxious to see how fast Stable Diffusion can run on a rig with 8× 6700 XTs.
Thanks,
To ensure pip doesn't reinstall the non-ROCm packages from the cache, you may need to:
pip3 cache purge
before doing:
# Install PyTorch ROCm
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.0
pip3 install transformers==4.19.2 scann kornia==0.6.4 torchmetrics==0.6.0
Also, I used the nightly with ROCm 5.2 support by doing:
pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/rocm5.2/
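To double-check that pip actually pulled a ROCm wheel rather than a cached CUDA build (a hedged check; the exact version string depends on which index you installed from), look for a +rocm suffix on the torch version:

# Should print something like 1.12.1+rocm5.1.1 (or a nightly +rocm5.2 build)
pip3 show torch | grep -i ^Version
python3 -c "import torch; print(torch.__version__)"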
@clasen If you put:
HSA_OVERRIDE_GFX_VERSION=10.3.0
in front of :
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
then it will run on a 6700 XT. If it still doesn't work, you could try making a 256 by 256 image to see if it's using too much RAM.
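Putting that together, the full command looks something like this (a sketch using the stock CompVis txt2img.py options; --H, --W, and --n_samples are its standard flags, adjust as needed):

# Pretend to be gfx1030 so ROCm accepts the Navi22 card, and render a smaller single image to save memory
HSA_OVERRIDE_GFX_VERSION=10.3.0 python scripts/txt2img.py \
  --prompt "a photograph of an astronaut riding a horse" \
  --plms --H 256 --W 256 --n_samples 1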
Hi @kcnqwe,
thanks for your tip.
I tried it and the result is still the same (Killed).
It's probably related to the fact that I only have 8 GB of RAM.
Hello! I'm trying to install this, but I'm getting this error:
Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:5 https://repo.radeon.com/amdgpu/22.10/ubuntu focal InRelease
Hit:6 https://repo.radeon.com/rocm/apt/5.1 ubuntu InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
amdgpu-lib32 : Depends: libdrm2-amdgpu:i386 but it is not installable
Depends: libdrm-amdgpu-amdgpu1:i386 but it is not installable
Depends: libllvm-amdgpu:i386 but it is not installable
Depends: libwayland-amdgpu-client0:i386 but it is not installable
Depends: libwayland-amdgpu-server0:i386 but it is not installable
Depends: libwayland-amdgpu-egl1:i386 but it is not installable
Depends: libxatracker2-amdgpu:i386 but it is not installable
Depends: libgbm1-amdgpu:i386 but it is not installable
Depends: libegl1-amdgpu-mesa:i386 but it is not installable
Depends: libegl1-amdgpu-mesa-drivers:i386 but it is not installable
Depends: libglapi-amdgpu-mesa:i386 but it is not installable
Depends: libgl1-amdgpu-mesa-glx:i386 but it is not installable
Depends: libgl1-amdgpu-mesa-dri:i386 but it is not installable
Depends: mesa-amdgpu-va-drivers:i386 but it is not installable
Depends: mesa-amdgpu-vdpau-drivers:i386 but it is not installable
openmp-extras : Depends: libstdc++-5-dev but it is not installable or
libstdc++-7-dev but it is not installable
Depends: libgcc-5-dev but it is not installable or
libgcc-7-dev but it is not installable
rocm-gdb : Depends: libpython3.8 but it is not installable
rocm-llvm : Depends: python but it is not installable
Depends: libstdc++-5-dev but it is not installable or
libstdc++-7-dev but it is not installable
Depends: libgcc-5-dev but it is not installable or
libgcc-7-dev but it is not installable
Recommends: gcc-multilib but it is not going to be installed
Recommends: g++-multilib but it is not going to be installed
xserver-xorg-amdgpu-video-amdgpu : Depends: xorg-video-abi-24 but it is not installable
E: Unable to correct problems, you have held broken packages.
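For what it's worth, the log above shows jammy (22.04) system repos alongside the focal (20.04) AMD repos, and the unmet packages (libstdc++-7-dev, libpython3.8, python) are 20.04-era names, so this installer release may simply not match the Ubuntu version. A quick check (a hedged sketch, not a guaranteed fix):

# Which Ubuntu release is this system actually running?
grep -E "VERSION_ID|VERSION_CODENAME" /etc/os-release
# ...and which release were the AMD repos added for?
grep -r "repo.radeon.com" /etc/apt/sources.list /etc/apt/sources.list.d/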
Hello there,
Is there a way to use a GUI for image generation as well, or does this workaround only work on the command line?
Perhaps this macOS-compatible script can help: https://github.com/dylancl/stable-diffusion-webui-mps/blob/master/setup_mac.sh
I found the two very similar in how they install Miniconda and PyTorch, but I don't have enough time to test this.
Hi @clasen, were you able to make it work with a 6700 XT? I have 32 GB of RAM, so I might give it a try.
Regarding @kcnqwe's tip to put HSA_OVERRIDE_GFX_VERSION=10.3.0 in front of python scripts/txt2img.py:
Could this override have some risk of ruining the GPU?
Hi, thanks for this. When I run the final python code to generate the image I get the following errors...
Traceback (most recent call last):
File "scripts/txt2img.py", line 344, in <module>
main()
File "scripts/txt2img.py", line 240, in main
model = load_model_from_config(config, f"{opt.ckpt}")
File "scripts/txt2img.py", line 50, in load_model_from_config
pl_sd = torch.load(ckpt, map_location="cpu")
File "/home/renrut/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/renrut/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'A'.
Just wondered if you have any pointers on this?
Update
This was caused by the weights link not working; switching to curl https://f004.backblazeb2.com/file/aai-blog-files/sd-v1-4.ckpt > sd-v1-4.ckpt fixes it.
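A quick way to tell whether a downloaded checkpoint is real or just a short error response from the server (a hedged check; exact sizes vary):

# A real sd-v1-4.ckpt is roughly 4 GB; a failed download is typically a few hundred bytes
ls -lh sd-v1-4.ckpt
# If the first bytes are readable text (an XML/JSON error message), the link didn't work
head -c 300 sd-v1-4.ckpt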
Please change the file extension from .txt to .sh. It's a shell script, and GitHub will add syntax highlighting based on the extension.
Hi there, thanks for this tutorial.
I have a rig and I want to try running Stable Diffusion inside it (with 8× AMD 6700 XTs).
GPU  Temp   AvgPwr  SCLK     MCLK     Fan     Perf    PwrCap  VRAM%  GPU%
0    29.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
1    30.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
2    31.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
3    28.0c  28.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
4    31.0c  29.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
5    28.0c  28.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
6    31.0c  28.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
7    30.0c  30.0W   1450Mhz  1074Mhz  77.65%  manual  211.0W  0%     0%
After following your instructions, it shows me an unexplained error that says "Killed":
(ldm) root@Adam:~/amd/stable-diffusion# python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
Global seed set to 42
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
LatentDiffusion: Running in eps-prediction mode
Killed
I have 8 GB of RAM.
thanks,
8GB of RAM? aahaahahaaa