| Property | Value |
|---|---|
| Image | rocm/pytorch:rocm6.4.4_ubuntu24.04_py3.12_pytorch_release_2.7.1 |
| Size | 69.6 GB |
| ROCm Version | 6.4.4 |
| PyTorch Version | 2.7.1 |
| Python Version | 3.12 |
| Base OS | Ubuntu 24.04 |
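The advertised versions can be cross-checked from inside a running container. A minimal sketch (not part of the original gist), assuming it is run with the image's bundled Python:

```python
import sys

import torch

# Hypothetical sanity check: confirm the versions this image advertises.
print("Python:", sys.version.split()[0])   # expected: 3.12.x
print("PyTorch:", torch.__version__)       # expected: 2.7.1+...
print("HIP/ROCm:", torch.version.hip)      # expected: a 6.4.x build string (None on non-ROCm builds)
```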

| Component | Details |
|---|---|
| GPU | AMD Radeon RX 7700 XT (Navi 32) |
| GFX Version | gfx1101 |
| Host OS | Ubuntu 25.04 |
| Host ROCm | 6.3.0 (container brings its own) |
For RDNA3 GPUs (RX 7000 series), you need to set `HSA_OVERRIDE_GFX_VERSION=11.0.1`:

```bash
docker run --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --ipc=host \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.1 \
  rocm/pytorch:rocm6.4.4_ubuntu24.04_py3.12_pytorch_release_2.7.1 \
  python3 -c "import torch; print(torch.cuda.is_available())"
```

Verification output:

```
PyTorch: 2.7.1+git99ccf24
CUDA available: True
Device count: 1
Device name: AMD Radeon RX 7700 XT
```
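The four output fields above correspond to a slightly fuller check than the one-liner in the command; a sketch of such a script (hypothetical, not taken from the gist):

```python
import torch

# Hypothetical verification script; prints the same fields as the output above.
print(f"PyTorch: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Device count: {torch.cuda.device_count()}")
if torch.cuda.is_available():
    print(f"Device name: {torch.cuda.get_device_name(0)}")
```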
GPU compute (matrix multiplication) verified working.
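A matrix-multiplication smoke test along these lines (a sketch, not the gist's exact test) exercises actual GPU compute rather than just device enumeration. On ROCm builds of PyTorch the GPU is still addressed through the `cuda` device API:

```python
import torch

# Hypothetical smoke test: run a matmul on the GPU and synchronize so the
# kernel actually executes before declaring success.
a = torch.randn(2048, 2048, device="cuda")
b = torch.randn(2048, 2048, device="cuda")
c = a @ b
torch.cuda.synchronize()
print("matmul OK:", c.shape, c.device)
```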
- The container's ROCm version (6.4.4) doesn't need to match the host ROCm version
- The host kernel driver (amdgpu) is what matters for compatibility
- Image size is typical for ROCm + PyTorch (~70 GB vs ~15-20 GB for NVIDIA equivalents)
- This image works as a base layer for AI/ML workloads (Whisper, etc.)
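As an illustration of the "base layer" point, a Whisper transcription run inside this container might look like the sketch below. It assumes `openai-whisper` has been pip-installed on top of the image (it is not preinstalled) and uses a placeholder audio path:

```python
import whisper  # assumption: pip install openai-whisper on top of the base image

# Hypothetical workload on the ROCm PyTorch base image.
# Whisper uses ffmpeg to decode audio, so ffmpeg must be on PATH.
model = whisper.load_model("base")       # downloads weights on first run; uses the GPU if available
result = model.transcribe("audio.wav")   # "audio.wav" is a placeholder path
print(result["text"])
```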
This configuration should work for other RDNA3 GPUs:
- RX 7900 XTX / 7900 XT / 7900 GRE
- RX 7800 XT
- RX 7700 XT
- RX 7600 XT / 7600
All of these are RDNA3: gfx1100 for the 7900 cards, gfx1101 for the 7800 XT/7700 XT, and gfx1102 for the 7600 series. gfx1100 is an officially supported ROCm target, so the override is usually unnecessary there; for gfx1101/gfx1102 cards, set `HSA_OVERRIDE_GFX_VERSION` to a target the ROCm build ships kernels for (11.0.1 worked here on gfx1101; 11.0.0 is the commonly cited fallback).
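A small lookup of those mappings, if you want to script the decision. The gfx assignments are standard; the suggested override values reflect the reasoning above and are an assumption, not official AMD guidance:

```python
# Hypothetical helper: map RX 7000 models to their gfx target and a
# suggested HSA_OVERRIDE_GFX_VERSION (None = no override needed).
GFX_BY_MODEL = {
    "RX 7900 XTX": "gfx1100", "RX 7900 XT": "gfx1100", "RX 7900 GRE": "gfx1100",
    "RX 7800 XT": "gfx1101", "RX 7700 XT": "gfx1101",
    "RX 7600 XT": "gfx1102", "RX 7600": "gfx1102",
}

def suggested_override(model: str) -> str | None:
    gfx = GFX_BY_MODEL[model]
    if gfx == "gfx1100":
        return None  # officially supported target, no spoofing needed
    if gfx == "gfx1101":
        return "11.0.1"  # what this gist used and verified
    return "11.0.0"      # commonly cited fallback for other RDNA3 parts; untested here

print(suggested_override("RX 7700 XT"))  # -> 11.0.1 (matches this gist's setup)
```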
Tested: December 2025
This gist was generated by Claude Code. Please verify any information before relying on it.