| Component | Version |
|---|---|
| Board | NVIDIA Jetson Orin |
| JetPack | 6.2.2 (L4T R36.5.0) |
| Kernel | 5.15.185-tegra |
| OS | Ubuntu 22.04.5 LTS (aarch64) |
| Docker | (requires the iptables-legacy patch below) |
| OpenShell | 0.0.13 |
| Node.js | v22.22.1 |
The stock NemoClaw installer (`curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash`)
hits three issues on Jetson Orin that must be fixed before running it.
The OpenShell cluster image uses iptables v1.8.10 (nf_tables) by default.
The Tegra 5.15 kernel does not have full nf_tables support, so k3s (inside
the cluster container) fails with:

```
iptables v1.8.10 (nf_tables): RULE_INSERT failed (No such file or directory)
Extension conntrack revision 0 not supported, missing kernel module?
```

Fix: Patch the cluster image to use iptables-legacy before running the installer.
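One way to decide whether a given image still needs the patch is to classify the backend reported by `iptables --version`. A minimal sketch — the helper name is mine, not part of the installer:

```shell
# Hypothetical helper: classify the iptables backend from the version string.
# The stock image reports "(nf_tables)"; after the patch it must report "(legacy)".
iptables_backend() {
  case "$1" in
    *"(nf_tables)"*) echo nf_tables ;;
    *"(legacy)"*)    echo legacy ;;
    *)               echo unknown ;;
  esac
}

iptables_backend "iptables v1.8.10 (nf_tables)"  # -> nf_tables
iptables_backend "iptables v1.8.10 (legacy)"     # -> legacy
```

In practice you would feed it the output of `docker run --rm --entrypoint iptables "$IMAGE_NAME" --version`.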
Without br_netfilter, bridged pod-to-pod traffic inside k3s bypasses iptables.
This breaks Kubernetes ClusterIP service routing, most visibly DNS resolution.
The sandbox pod crash-loops with:

```
failed to connect to OpenShell server: dns error: Temporary failure in name resolution
```

Root cause: DNAT rewrites the destination on outbound packets, but without
br_netfilter, reply packets traverse the Linux bridge at L2 and skip the
reverse NAT in conntrack. The client sees replies from the wrong source IP and
drops them.

Fix: Load br_netfilter and enable bridge-nf-call-iptables.
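Both conditions have to hold at once: the module loaded and the sysctl set. A small decision sketch, parameterized so the logic is testable — on a real host the two inputs come from `lsmod | grep br_netfilter` and `cat /proc/sys/net/bridge/bridge-nf-call-iptables` (the function name is mine):

```shell
# Hypothetical check: will bridged pod traffic actually hit iptables?
#   $1 = contents of /proc/sys/net/bridge/bridge-nf-call-iptables ("1" when enabled)
#   $2 = lsmod output filtered for br_netfilter (empty if not loaded)
bridge_nat_status() {
  sysctl_val="$1"
  lsmod_line="$2"
  if [ -n "$lsmod_line" ] && [ "$sysctl_val" = "1" ]; then
    echo ok
  else
    echo "broken: load br_netfilter and set bridge-nf-call-iptables=1"
  fi
}

bridge_nat_status 1 "br_netfilter 32768 0"  # -> ok
bridge_nat_status 0 ""                      # -> broken: load br_netfilter and set bridge-nf-call-iptables=1
```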
If using local Ollama for inference, it defaults to listening on 127.0.0.1:11434,
which is unreachable from inside the OpenShell sandbox container. Onboarding fails
at step [5/7] with:

```
containers cannot reach http://host.openshell.internal:11434
```

Fix: Configure Ollama to listen on 0.0.0.0:11434.
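The decisive question is what address Ollama binds. A sketch of the decision rule, parameterized on the `OLLAMA_HOST` value (the helper name is mine; in practice you would compare against the listen address shown by `ss -tlnp`):

```shell
# Hypothetical helper: is this OLLAMA_HOST bind reachable from containers?
# Loopback binds are host-only; the sandbox needs a wildcard bind.
ollama_bind_ok() {
  case "$1" in
    127.0.0.1:*|localhost:*) echo "no: loopback only" ;;
    0.0.0.0:*)               echo yes ;;
    *)                       echo "check manually" ;;
  esac
}

ollama_bind_ok "127.0.0.1:11434"  # -> no: loopback only
ollama_bind_ok "0.0.0.0:11434"    # -> yes
```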
```bash
sudo apt install curl
```

Remove any old Docker packages:

```bash
sudo apt-get remove docker docker-engine docker.io containerd runc
```

Add Docker's official GPG key and repository:
```bash
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

Install the NVIDIA Container Toolkit:

```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

Configure the toolkit for all runtimes:
```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo nvidia-ctk runtime configure --runtime=containerd
sudo nvidia-ctk runtime configure --runtime=crio
sudo systemctl restart docker
sudo systemctl restart containerd
sudo systemctl restart crio
```

Allow running Docker without sudo:

```bash
sudo chmod 666 /var/run/docker.sock
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
```

This is needed so that the NVCC compiler and GPU are available during
`docker build` operations. Edit /etc/docker/daemon.json:

```bash
sudo nano /etc/docker/daemon.json
```

Set the contents to:
```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```

Then reconfigure and restart:

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo nvidia-ctk runtime configure --runtime=containerd
sudo nvidia-ctk runtime configure --runtime=crio
sudo systemctl restart docker
sudo systemctl restart containerd
sudo systemctl restart crio
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```

Run these steps in order before launching the NemoClaw installer.
```bash
sudo modprobe br_netfilter
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
```

Make it persistent across reboots:

```bash
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
```

The installer pulls ghcr.io/nvidia/openshell/cluster:0.0.13 and uses it
directly. We patch it in place so the installer picks up the fixed version.
```bash
IMAGE_NAME="ghcr.io/nvidia/openshell/cluster:0.0.13"
docker run --entrypoint sh --name fix-iptables "$IMAGE_NAME" -c '
  update-alternatives --set iptables /usr/sbin/iptables-legacy
  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
  ln -sf /usr/sbin/iptables-legacy /usr/sbin/iptables
  ln -sf /usr/sbin/ip6tables-legacy /usr/sbin/ip6tables
  iptables --version
'
docker commit \
  --change 'ENTRYPOINT ["/usr/local/bin/cluster-entrypoint.sh"]' \
  fix-iptables "$IMAGE_NAME"
docker rm fix-iptables
```

You should see `iptables v1.8.10 (legacy)` in the output.
Important: If the installer re-pulls the image (e.g., on a retry after cleanup), this patch is lost. Re-run this step before each installer attempt.
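Since the patch does not survive a re-pull, one option is a small wrapper that always re-applies it immediately before invoking the installer. A dry-run sketch: `RUN=echo` only prints the commands, and `patch-cluster-image.sh` is a hypothetical script holding the `docker run`/`docker commit` steps above — neither name comes from the installer itself.

```shell
# Dry-run wrapper: re-apply the iptables-legacy patch, then launch the
# installer. With RUN=echo (the default here) commands are printed, not run;
# set RUN= (empty) to execute for real.
RUN="${RUN:-echo}"

patched_install() {
  $RUN docker pull ghcr.io/nvidia/openshell/cluster:0.0.13
  $RUN sh patch-cluster-image.sh   # hypothetical: the patch step above
  $RUN sh -c 'curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash'
}

patched_install
```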
Skip this step if using NVIDIA cloud inference (build.nvidia.com) instead of local Ollama.
```bash
sudo mkdir -p /etc/systemd/system/ollama.service.d
echo -e '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0:11434"' \
  | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

Verify:

```bash
ss -tlnp | grep 11434
# Should show 0.0.0.0:11434, NOT 127.0.0.1:11434
```

Now run the installer:

```bash
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
```

During onboarding, you will be prompted for:

- Sandbox name: accept the default `my-assistant` or choose your own
- Inference option: choose `2` for local Ollama
- Ollama model: choose from the available models (e.g., `qwen3.5:2b`)
The Docker image build at step [3/7] takes ~10 minutes on Orin (first run, no cache). Subsequent runs are faster due to layer caching.
After successful onboarding, the dashboard should be accessible at
`http://127.0.0.1:18789/`, and the OpenShell gateway runs at
`https://127.0.0.1:8080`.
Check pod status:

```bash
docker exec openshell-cluster-nemoclaw kubectl get pods -n openshell
```

Check the assistant pod's logs:

```bash
docker exec openshell-cluster-nemoclaw kubectl logs -n openshell my-assistant
```

Test in-cluster DNS:

```bash
docker exec openshell-cluster-nemoclaw kubectl run dns-test \
  --namespace=openshell \
  --image=rancher/mirrored-library-busybox:1.37.0 \
  --restart=Never \
  -- nslookup openshell.openshell.svc.cluster.local
sleep 10
docker exec openshell-cluster-nemoclaw kubectl logs -n openshell dns-test
docker exec openshell-cluster-nemoclaw kubectl delete pod -n openshell dns-test
```

Confirm the iptables patch is still in effect:

```bash
docker exec openshell-cluster-nemoclaw iptables --version
# Must say "(legacy)", NOT "(nf_tables)"
```

Confirm br_netfilter is active:

```bash
lsmod | grep br_netfilter
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# Must output: 1
```

Stop and remove everything:

```bash
source ~/.bashrc
nemoclaw destroy 2>/dev/null
docker rm -f openshell-cluster-nemoclaw 2>/dev/null
docker volume prune -f
# Then re-run from Step 2 above
```