@ZEROF
Last active August 9, 2025 18:59
Setting Up an AI Home Lab with an eGPU and the Minisforum UM890 Pro

🧰 Hardware Specifications

  • Mini PC: UM890 PRO Minisforum
  • RAM: 96GB Crucial DDR5 (2x48GB) 5600MHz SODIMM - CL46 - CT2K48G56C46S5
  • Storage: M.2 NVMe Samsung SSD 980 PRO 1TB (Model: 5B2QGXA7)
  • GPU: PNY GeForce RTX™ 5060 Ti 16GB ARGB Overclocked Triple Fan DLSS 4
  • eGPU Docking: GTBOX G-DOCK with OCuLink USB4 and Integrated 800W Huntkey Power Supply

📌 Installation Tips

  • OS: Install Ubuntu 24.04 LTS desktop edition
  • HDMI: Use the Mini PC's integrated HDMI (do not connect NVIDIA GPU HDMI output)
  • Drivers: During installation, enable the option to install third-party/proprietary drivers (Ubuntu's restricted repository) so all drivers are available
  • Post-Install: Run these commands:
    sudo apt update && sudo apt upgrade -y
    sudo reboot

🧪 NVIDIA Driver Installation

🔗 Official Instructions

Install CUDA Toolkit 13.0 for Ubuntu 24.04

πŸ› οΈ Command Steps (at the time of writing, the instructions are)

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-13-0
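NVIDIA's post-installation instructions also suggest putting the toolkit on your shell's search paths. A minimal sketch, assuming the installer's default version-suffixed location under /usr/local:

```shell
# Optional post-install step: expose the CUDA 13.0 toolchain to your shell.
# /usr/local/cuda-13.0 is the installer's default path; adjust if yours differs.
export PATH=/usr/local/cuda-13.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-13.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
# nvcc --version   # should now resolve once the toolkit is installed
```

Add the two export lines to ~/.bashrc if you want them to persist across sessions.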

🚀 Install drivers

sudo apt-get install -y nvidia-open
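After installing and rebooting, it is worth confirming that the open kernel module actually loaded before moving on. A small sketch (the helper name check_nvidia_module is my own, not from NVIDIA's guide; lsmod and nvidia-smi are the standard tools):

```shell
#!/usr/bin/env bash
# check_nvidia_module: succeeds if an "nvidia" kernel module shows up in
# lsmod-style output read from stdin (helper name is illustrative).
check_nvidia_module() {
    grep -q '^nvidia'
}

# On the live system, after the reboot:
#   lsmod | check_nvidia_module && nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
```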

🧠 Open vs Proprietary Drivers

For NVIDIA Grace Hopper or Blackwell platforms (the RTX 5060 Ti used here is Blackwell-based), only the open-source kernel modules are supported:

"For cutting-edge platforms such as NVIDIA Grace Hopper or NVIDIA Blackwell, you must use the open-source GPU kernel modules. The proprietary drivers are unsupported on these platforms."
Source


🧠 Ollama Installation

curl -fsSL https://ollama.com/install.sh | sh

📦 Output

>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
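The installer starts Ollama as a systemd service listening on port 11434 by default. A quick smoke test, sketched as a tiny helper (the function name is my own; /api/version is Ollama's documented version endpoint):

```shell
#!/usr/bin/env bash
# ollama_url: build the URL of Ollama's version endpoint for a given host
# (defaults to localhost; 11434 is Ollama's default port).
ollama_url() {
    echo "http://${1:-localhost}:11434/api/version"
}

# On the live system (assumes curl is installed):
#   curl -fsS "$(ollama_url)"   # prints a small JSON blob with the version
#   ollama list                 # shows pulled models (empty on a fresh install)
```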

🐳 Docker & Portainer Setup

🐳 Install Docker

Ubuntu Docker Installation Guide

🏠 Optional: Portainer (Admin Panel)

Portainer Installation Guide
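The linked guide boils down to two Docker commands. A sketch, wrapped in a function so you can review it before running; the image tag and ports below are Portainer CE's documented defaults at the time of writing, so verify them against the guide:

```shell
#!/usr/bin/env bash
# install_portainer: create the data volume and start the Portainer CE
# container (ports/volumes per the official install guide at time of writing).
install_portainer() {
    docker volume create portainer_data
    docker run -d \
        -p 8000:8000 -p 9443:9443 \
        --name portainer \
        --restart=always \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v portainer_data:/data \
        portainer/portainer-ce:latest
}

# Run it once Docker is installed:
#   install_portainer   # then browse to https://localhost:9443
```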

📚 Open WebUI Quick Start

Documentation

🧰 NVIDIA Container Toolkit (GPU Support for Docker)

Installation Guide
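For GPU access inside containers you need the NVIDIA Container Toolkit. The commands below mirror NVIDIA's installation guide at the time of writing, wrapped in a function as a sketch; check the linked guide before running, since the repository setup occasionally changes:

```shell
#!/usr/bin/env bash
# install_nvidia_container_toolkit: add NVIDIA's apt repository, install the
# toolkit, and register the NVIDIA runtime with Docker (per NVIDIA's guide).
install_nvidia_container_toolkit() {
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
        | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
        | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
        | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
}

# Verify containers can see the GPU:
#   docker run --rm --gpus all ubuntu nvidia-smi
```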


📌 Next Steps

  1. Power off the system
  2. Connect eGPU using OCuLink cable, power on dock station and mini PC
  3. If the login screen appears:
    • Log in and run:
      nvidia-smi
      (if it lists the GPU, the driver and the eGPU link are working)
  4. Run Ollama and open WebUI in Docker

🚀 Run Open WebUI

docker run -d --restart always --network=host \
  -e OLLAMA_BASE_URL=http://localhost:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui

🌐 Access Open WebUI

Open your browser and go to:

http://localhost:8080
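The container can take a little while to come up on first start. A small polling helper, sketched under the assumption that your Open WebUI version exposes the /health endpoint described in its docs:

```shell
#!/usr/bin/env bash
# wait_for_webui: poll Open WebUI's health endpoint until it answers or we
# give up (endpoint path assumed from Open WebUI's docs; port 8080 because
# the container runs with --network=host).
wait_for_webui() {
    local url="http://localhost:8080/health" tries="${1:-30}" i
    for ((i = 0; i < tries; i++)); do
        if curl -fsS "$url" > /dev/null 2>&1; then
            echo "up"
            return 0
        fi
        sleep 1
    done
    echo "timed out"
    return 1
}

# Usage on the live system:
#   wait_for_webui && xdg-open http://localhost:8080
```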

✅ Final Step

Install at least one model:

ollama pull mistral-small3.2

Now you can use Open WebUI and enjoy! 🎉
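Once a model is pulled you can also talk to it directly over Ollama's REST API, which is handy for scripting. A sketch (the helper name is my own; /api/generate is Ollama's documented generation endpoint, and the model/prompt are just examples):

```shell
#!/usr/bin/env bash
# generate_payload: build the JSON body for Ollama's /api/generate endpoint.
# Arguments: model name, prompt text (both illustrative here).
generate_payload() {
    printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

# On the live system:
#   curl -s http://localhost:11434/api/generate \
#        -d "$(generate_payload mistral-small3.2 'Say hello')"
```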


πŸ” Notes

  • Ensure the eGPU is properly connected before powering on
  • Verify nvidia-smi output confirms GPU detection
  • Set OLLAMA_BASE_URL so Open WebUI can reach the Ollama service (with --network=host, http://localhost:11434 works)