How to use Ollama with a 9070 XT on Arch Linux
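
You will need a ROCm toolchain, CMake, Go, and git. On Arch, something like the following should cover it (the package names are my assumption; check pacman if your setup differs):

# assumed package names; verify with pacman -Ss if they differ
sudo pacman -S rocm-hip-sdk cmake go git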

Clone Ollama

git clone git@github.com:ollama/ollama.git
cd ollama
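
If you do not have SSH keys set up with GitHub, cloning over HTTPS works just as well:

git clone https://github.com/ollama/ollama.git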

Check out the Ollama release or commit whose pinned llama.cpp version supports your card. For example (the tag below is purely illustrative; substitute the release you actually want):
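
git fetch --tags
git checkout v0.9.0   # illustrative tag, not a recommendation

Then sync the vendored llama.cpp sources: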

make -f Makefile.sync checkout
make -f Makefile.sync sync

Compile and install Ollama's ROCm-specific libraries

cmake --preset "ROCm 6" -DCMAKE_INSTALL_PREFIX=/usr/local -DAMDGPU_TARGETS="gfx1201" -B build

NOTE: Use /usr/local as the install prefix because that is where the Linux Ollama installer puts its files.

cmake --build build

NOTE: I am overriding AMDGPU_TARGETS to build only for my GPU (gfx1201), which makes the build much faster.

NOTE: You do not need to specify AMDGPU_TARGETS at all; the "ROCm 6" preset from the first cmake step already includes all ROCm 6 GPUs.
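
If you are unsure which gfx target your card is, rocminfo (shipped with ROCm) reports it; grepping for "gfx" is a quick way to pull it out:

rocminfo | grep gfx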

cd build
sudo make install

NOTE: Because you used the /usr/local install prefix, the libraries are installed to the correct location, /usr/local/lib/ollama/*. On Linux, Ollama looks for its libraries at ../lib/ollama relative to the executable, which lives at /usr/local/bin/ollama.
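
To double-check that the libraries landed where Ollama expects them, list the install directory:

ls /usr/local/lib/ollama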

Build and install the latest version of Ollama

cd ..
go build .
sudo systemctl stop ollama
sudo cp ./ollama /usr/local/bin/ollama
sudo systemctl start ollama
sudo systemctl status ollama
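
If systemctl status truncates the log, journalctl shows the full output of the service (assuming the unit is named ollama, as above); grepping for amdgpu finds the detection line:

journalctl -u ollama -b --no-pager | grep amdgpu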

Result:

...
msg="amdgpu is supported" gpu=GPU-********* gpu_type=gfx1201
...
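
As a final sanity check, run a small model and confirm it is placed on the GPU; the PROCESSOR column of ollama ps should report the GPU if offload worked (the model name below is just an example):

ollama run llama3.2 "hello"   # any small model works; llama3.2 is just an example
ollama ps                     # PROCESSOR column should report the GPU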