How to use Ollama with a 9070 XT on Arch Linux

Update the Linux kernel to 6.14.*

The Linux kernel ships with a GPU kernel driver called AMDGPU, which is only updated when you upgrade the kernel. You may be able to use the non-free, closed-source drivers without updating the kernel, but I haven't tried it. According to search engine results, you need kernel 6.14 for an AMDGPU driver with proper 9070/XT support.
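
Before upgrading, you can confirm what you are currently running (standard tools; exact lspci output varies by system):

uname -r
lspci -k | grep -A 3 -i vga

The "Kernel driver in use" line should say amdgpu.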

  • Edit /etc/pacman.conf and uncomment this section:

[core-testing]
Include = /etc/pacman.d/mirrorlist

  • Save the file, then refresh the package databases:

sudo pacman -Sy

  • Update the Linux kernel to 6.14.* so that the AMDGPU driver has the latest 9070 XT support:

sudo pacman -S linux linux-headers

  • Ensure whatever boot manager you have installed detects and adds the new Linux kernel to the boot menu (an example for GRUB follows this list).
  • Reboot
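
For the boot-manager step, if you use GRUB (an assumption; systemd-boot, rEFInd, etc. have their own update steps), regenerate the menu before rebooting:

sudo grub-mkconfig -o /boot/grub/grub.cfg

After the reboot, uname -r should print 6.14.*.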

Clone Ollama

git clone git@github.com:ollama/ollama.git
cd ollama

Check out a release/commit whose vendored llama.cpp supports your card.

Update FETCH_HEAD in Makefile.sync to a commit with 9070 XT support:

FETCH_HEAD=5dec47dcd411fdf815a3708fd6194e2b13d19006
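
If you prefer to script the edit (assuming FETCH_HEAD appears once at the start of a line in Makefile.sync), a sed one-liner does it:

sed -i 's/^FETCH_HEAD=.*/FETCH_HEAD=5dec47dcd411fdf815a3708fd6194e2b13d19006/' Makefile.sync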

make -f Makefile.sync checkout
make -f Makefile.sync sync

Compile and install Ollama's ROCm-specific libraries

cmake --preset "ROCm 6" -DCMAKE_INSTALL_PREFIX=/usr/local -DAMDGPU_TARGETS="gfx1201" -B build

NOTE: Use /usr/local as the prefix because the Linux Ollama installer installs there.

cmake --build build

NOTE: I am manually overriding AMDGPU_TARGETS to build only for my GPU (gfx1201), which makes the build much faster.

NOTE: You do not need to specify AMDGPU_TARGETS at all; if you only pass the preset in the first cmake step, it will build for all ROCm 6 GPUs.
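
If you are unsure which gfx target your card reports, rocminfo from the ROCm stack lists it (assuming ROCm is installed; output format can vary between versions):

rocminfo | grep -m 1 gfx

For the 9070 XT this should report gfx1201.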

cd build
sudo make install

Note: Because you used the /usr/local install prefix, the libraries land in the correct location, /usr/local/lib/ollama/*. On Linux, Ollama looks for its libraries relative to the executable, at ../lib/ollama from /usr/local/bin/ollama.
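
A quick sanity check that the libraries landed where Ollama will look:

ls /usr/local/lib/ollama

You should see the ROCm backend libraries you just built.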

Build and install the latest version of Ollama

cd ..
go build .
sudo systemctl stop ollama
sudo cp ./ollama /usr/local/bin/ollama
sudo systemctl start ollama
sudo systemctl status ollama
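
To follow the service logs while it starts (Ollama runs as a systemd unit here, so journalctl applies):

journalctl -u ollama -f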

Result:

...
msg="amdgpu is supported" gpu=GPU-********* gpu_type=gfx1201
...
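
As a final check, load any model you have pulled and confirm it runs on the GPU (llama3.2 below is just an example model name):

ollama run llama3.2 "hello"
ollama ps

The PROCESSOR column of ollama ps should report GPU usage rather than CPU.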