- Mini PC: UM890 PRO Minisforum
- RAM: 96GB Crucial DDR5 (2x48GB) 5600MHz SODIMM - CL46 - CT2K48G56C46S5
- Storage: M.2 NVMe Samsung SSD 980 PRO 1TB (Model: 5B2QGXA7)
- GPU: PNY GeForce RTX™ 5060 Ti 16GB ARGB Overclocked Triple Fan DLSS 4
- eGPU Docking: GTBOX G-DOCK with OCuLink USB4 and Integrated 800W Huntkey Power Supply
- OS: Install Ubuntu 24.04 LTS desktop edition
- HDMI: Use the Mini PC's integrated HDMI (do not connect NVIDIA GPU HDMI output)
- Drivers: During installation, enable the option to install third-party/restricted drivers so all hardware drivers are set up
- Post-Install: Run these commands:
sudo apt update && sudo apt upgrade -y
sudo reboot
Install CUDA Toolkit 13.0 for Ubuntu 24.04
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-13-0
sudo apt-get install -y nvidia-open
For NVIDIA Grace Hopper or Blackwell platforms, only the open-source kernel modules are supported:
"For cutting-edge platforms such as NVIDIA Grace Hopper or NVIDIA Blackwell, you must use the open-source GPU kernel modules. The proprietary drivers are unsupported on these platforms."
(Source: NVIDIA driver installation documentation)
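Before moving on, it is worth confirming that both the open driver and the toolkit actually landed. A minimal check, assuming the default /usr/local/cuda-13.0 path created by the package:
sudo reboot
# after logging back in:
cat /proc/driver/nvidia/version          # should mention the NVIDIA Open Kernel Module
/usr/local/cuda-13.0/bin/nvcc --version  # should report CUDA 13.0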
curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
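The installer sets up a systemd service, so a quick sanity check is to ask systemd and the API directly (11434 is Ollama's default port):
systemctl status ollama --no-pager  # should show active (running)
curl http://localhost:11434         # should answer "Ollama is running"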
- Docker: Install Docker Engine by following the official Ubuntu Docker installation guide; a minimal sketch of the apt-based install is shown below
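A sketch of the apt-based install from Docker's official repository; the commands mirror Docker's documented procedure for Ubuntu, but verify against the current guide before running:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin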
- Power off the system
- Connect the eGPU with the OCuLink cable, then power on the dock and the mini PC
- When the login screen appears, log in and run:
nvidia-smi
- If the output lists the GPU, everything is working (see the diagnostic sketch below if it does not)
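If nvidia-smi reports no devices, it can help to confirm the eGPU is visible on the PCIe bus at all; both commands are standard diagnostics, not specific to this setup:
lspci | grep -i nvidia  # the RTX 5060 Ti should appear as a PCI device
nvidia-smi -L           # lists every detected GPU by name and UUID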
- Run Open WebUI in Docker (Ollama itself already runs as the native systemd service):
docker run -d --restart always --network=host \
-e OLLAMA_BASE_URL=http://localhost:11434 \
-v open-webui:/app/backend/data \
--name open-webui \
ghcr.io/open-webui/open-webui
Open your browser and go to:
http://localhost:8080
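If the page does not load, check that the container is actually up; with --network=host, Open WebUI listens on its default port 8080:
docker ps --filter name=open-webui  # the container should show as Up
curl -I http://localhost:8080       # should return an HTTP status line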
Install at least one model:
ollama pull mistral-small3.2
Now you can use Open WebUI and enjoy!
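To confirm the model responds outside the browser as well, send a one-off prompt through the Ollama CLI or its HTTP API (using the mistral-small3.2 model pulled above):
ollama run mistral-small3.2 "Reply with one short sentence."
curl http://localhost:11434/api/generate -d '{"model": "mistral-small3.2", "prompt": "Say hello", "stream": false}'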
- Ensure the eGPU is properly connected before powering on
- Verify that nvidia-smi output confirms GPU detection
- Use OLLAMA_BASE_URL so Open WebUI can reach the Ollama service
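One optional tweak, only needed if Open WebUI runs on another host or without --network=host: make Ollama listen beyond loopback. A sketch using a systemd override; OLLAMA_HOST is Ollama's documented listen-address variable:
sudo systemctl edit ollama
# in the override file, add:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama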