@Fiooodooor
Last active May 19, 2025 01:43
https://x.com/i/grok/share/UZUEebJmIIqhEFdSZDUZYukDe
"Configuring HCI Harvester for VLAN 1076": https://x.com/i/grok/share/motnb7lGG6VLPnop4m9ctsxVi
"Ansible Script for PXE Boot via BMC LAN": https://x.com/i/grok/share/Z4Cxy97XDEjGbWMAnJEptkSln
"Ansible Script for PXE Boot via BMC": https://x.com/i/grok/share/8dU03Skr2MCECoGw7SJsscXsC
"Harvester Cluster Networking and Hugepages Setup": https://x.com/i/grok/share/uWufwtXjrgOg11yc3Q64qk9ke
I) I have a Harvester cluster with RKE2 up and running, accessible at the IP address 10.123.235.200/22. I also have standalone Rancher deployed, as well as the Argo stack.
Everything runs on Intel Xeon CPUs with VT-d enabled, and each node has Intel E810 network cards with SR-IOV support enabled and ready to use.
I want to deploy a 3-node ephemeral Kubernetes cluster on virtual machines provisioned on demand, using custom builds of the DPDK and ice drivers for the E810 hardware network interfaces on the host, in a way that lets me use them as virtual NICs and/or via the VF driver inside the VMs.
I need the best known method to do this in an automated way, for example using Ansible, Helm, or just Argo. The purpose is to test the Intel ice drivers, on top of which MTL and MCM from https://www.github.com/OpenVisualCloud are run.
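Below is a minimal automation sketch for part I, assuming the kubernetes.core Ansible collection is installed, KUBECONFIG points at the Harvester/RKE2 cluster, and a Jinja2 template (templates/vm-with-sriov.yaml.j2, a hypothetical file) renders a KubeVirt VirtualMachine with an SR-IOV VF interface attached. The VM names and namespace are placeholders, not taken from this gist.

```yaml
# A minimal sketch, assuming kubernetes.core is installed and KUBECONFIG
# targets the Harvester/RKE2 cluster reachable at 10.123.235.200.
# templates/vm-with-sriov.yaml.j2 is a hypothetical Jinja2 manifest that
# defines one KubeVirt VirtualMachine with an SR-IOV VF interface.
- name: Provision ephemeral worker VMs on Harvester
  hosts: localhost
  gather_facts: false
  vars:
    vm_names: [eph-node-1, eph-node-2, eph-node-3]   # placeholder names
    vm_namespace: default
  tasks:
    - name: Create one KubeVirt VirtualMachine per ephemeral node
      kubernetes.core.k8s:
        state: present
        namespace: "{{ vm_namespace }}"
        template: templates/vm-with-sriov.yaml.j2     # rendered per loop item
      loop: "{{ vm_names }}"
      loop_control:
        loop_var: vm_name                             # visible inside the template

    - name: Wait until every VirtualMachineInstance reports Running
      kubernetes.core.k8s_info:
        api_version: kubevirt.io/v1
        kind: VirtualMachineInstance
        name: "{{ item }}"
        namespace: "{{ vm_namespace }}"
      register: vmi
      until: vmi.resources | length > 0 and vmi.resources[0].status.phase == 'Running'
      retries: 30
      delay: 20
      loop: "{{ vm_names }}"
```

The same playbook could be driven from an Argo Workflow or wrapped in a Helm chart; the essential pieces are the per-node VirtualMachine manifests and a readiness gate on the VirtualMachineInstance phase before installing RKE2 into the guests.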
II) Reiterate the above, but focus on the NIC part, utilizing the Intel Ethernet Operator and/or the SR-IOV Network Device Plugin for Kubernetes.
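For part II, here is a minimal sketch of SR-IOV Network Operator node policies for the E810 ports, assuming the operator runs in the sriov-network-operator namespace and the PF interface names are ens785f0/ens785f1 (namespace, interface names, device ID, and VF counts are assumptions to adjust per node). One policy binds VFs to vfio-pci for DPDK/VM passthrough, the other keeps VFs on the kernel iavf path:

```yaml
# Sketch of two SriovNetworkNodePolicy objects; all selectors are assumptions.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: e810-vfio                 # VFs bound to vfio-pci for DPDK / VM passthrough
  namespace: sriov-network-operator
spec:
  resourceName: intel_e810_dpdk
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8
  nicSelector:
    vendor: "8086"
    deviceID: "1593"              # E810-C for SFP; adjust to the installed variant
    pfNames: ["ens785f0"]
  deviceType: vfio-pci
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: e810-netdev               # VFs left on the kernel iavf driver
  namespace: sriov-network-operator
spec:
  resourceName: intel_e810_netdevice
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8
  nicSelector:
    vendor: "8086"
    deviceID: "1593"
    pfNames: ["ens785f1"]
  deviceType: netdevice
```

A matching SriovNetwork (or the Intel Ethernet Operator's CRs, which additionally handle E810 firmware/DDP management) would then expose these resource pools through NetworkAttachmentDefinitions that the KubeVirt VMs or test pods reference.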