@ormergi
Last active October 16, 2025 08:59
KubeVirt and BGP

There are multiple solutions for connecting Kubernetes clusters over BGP; this doc focuses on using OVN-Kubernetes, FRR-k8s, and the integration between them.

OVN-Kubernetes serves as the cluster network platform. FRR-k8s deploys an FRR instance on each cluster node, running a BGP server.

OVN-Kubernetes supports exchanging routes of the cluster networks using FRR-k8s.

Default-network

1. Import provider network routes to default cluster network

Enable VMs / pods connected to the cluster default network to access services on a provider network.

  1. Create FRRConfiguration CR for importing provider network routes
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  labels:
    use-for-advertisements: default
  name: receive-filtered
  namespace: frr-k8s-system
spec:
  nodeSelector: {}
  bgp:
    routers:
    - asn: 64512 # ASN to use for the local session end
      neighbors:
      - address: 192.168.100.10 # BGP peer address
        asn: 64512 # ASN to use for the remote session end
        disableMP: true # Ensure different sessions for IPv4 and IPv6 peering
        toReceive: # prefixes to receive from the neighbor (BGP peer)
          allowed:
            mode: filtered 
            prefixes:
            - prefix: 22.100.0.0/16 # provider network prefix
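
If filtering is not required, the toReceive section also supports mode: all, which accepts every prefix the peer advertises. A minimal variant of the example above (same peer address and ASNs assumed):

```yaml
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-all
  namespace: frr-k8s-system
spec:
  nodeSelector: {}
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 192.168.100.10
        asn: 64512
        disableMP: true
        toReceive:
          allowed:
            mode: all # accept all prefixes advertised by the peer
```

Filtered mode is preferable when only specific provider prefixes should enter the node routing tables.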

2. Export cluster network routes to provider network

Expose VMs / pods outside the cluster, enabling provider network clients direct access to workloads running inside the cluster. No NAT or proxies in between, fully routed. No static route maintenance needed.

Create a RouteAdvertisements (RA) CR for advertising the cluster network routes. Building on top of the previous example, the RA selects the FRRConfiguration CR to use as a baseline for advertising routes.

---
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  nodeSelector: {} # advertise routes of all nodes
  advertisements:
  - PodNetwork
  networkSelectors:
  - networkSelectionType: DefaultNetwork
  frrConfigurationSelector: # select the CR to use as baseline for advertising routes
    matchLabels:
      kubernetes.io/metadata.name: receive-filtered
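
When the RA is applied, OVN-Kubernetes generates additional FRRConfiguration CRs based on the selected baseline, containing toAdvertise entries for the cluster network subnets. A hedged sketch of what such a generated CR may look like (the pod subnet 10.244.0.0/16 is an illustrative assumption):

```yaml
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  namespace: frr-k8s-system
  # name is generated by OVN-Kubernetes
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 192.168.100.10
        asn: 64512
        toAdvertise:
          allowed:
            mode: filtered
            prefixes:
            - 10.244.0.0/16 # cluster pod subnet (illustrative)
```

The generated CRs can be inspected with kubectl -n frr-k8s-system get frrconfigurations.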

User-defined networks (UDN)

OVN-Kubernetes features segmentation capabilities that enable segregating the default cluster network into multiple isolated overlay networks.

The feature provides multiple variations of overlay networks and different topologies.

For the KubeVirt use-case the most suitable flavor is a primary layer2 user-defined network, with persistent IPs enabled.

  • Primary - Acts as the primary network inside the VM / pod.
  • Layer2 - The network uses a single subnet for all connected VMs / pods.
  • Persistent-IPs - Ensures VMs retain the same IP address along their lifecycle, including migration and restarts.

OVN-Kubernetes supports exchanging routes of user-defined networks as well.

Creating user-defined network

A user-defined network is defined per namespace, and it can span over - or connect - multiple namespaces.

Each namespace must be labeled with k8s.ovn.org/primary-user-defined-network on creation. The label cannot be added to an existing Namespace.

---
apiVersion: v1
kind: Namespace
metadata:
  name: blue
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
    network: tlv
---
apiVersion: v1
kind: Namespace
metadata:
  name: red
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
    network: tlv

In this example we use the ClusterUserDefinedNetwork CR to create the user-defined network.

apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: tlv
spec:
  namespaceSelector:
    matchLabels:
      network: tlv
  network:
    topology: Layer2 # use single subnetwork for all connected VMs / pods
    layer2: 
      role: Primary # network interface inside the pod will act as primary interface
      subnets: [22.100.0.0/16] 
      ipam: {lifecycle: Persistent} # ensure VMs will have their IP retained on live-migration, Stop or Restart

The network will span over the blue and red namespaces; VMs / pods in these namespaces can access each other, but cannot access VMs / pods on other networks in the cluster.

Connecting VMs to user-defined network

  1. Enable KubeVirt features and register network binding for OVN-Kubernetes network-segmentation:
kubectl -n kubevirt patch kubevirt kubevirt --type=json --patch '[
  {"op":"add","path":"/spec/configuration/developerConfiguration","value":{"featureGates":[]}},
  {"op":"add","path":"/spec/configuration/developerConfiguration/featureGates/-","value":"NetworkBindingPlugins"},
  {"op":"add","path":"/spec/configuration/developerConfiguration/featureGates/-","value":"DynamicPodInterfaceNaming"},
  {"op":"add","path":"/spec/configuration/network","value":{}},
  {"op":"add","path":"/spec/configuration/network/binding","value":{"l2bridge":{"domainAttachmentType":"managedTap","migration":{}}}}
]'
  2. Specify the registered binding plugin for connecting to the pod network:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  runningStrategy: Always
  template:
    spec:
      domain:
        memory:
          guest: 512M
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          interfaces:
          - name: overlay
            binding:
              name: l2bridge  # registered binding plugin suitable for ovn-kubernetes network segmentation 
          rng: {}
      networks:
      - name: overlay 
        pod: {}
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.5.0
        name: containerdisk
      terminationGracePeriodSeconds: 0
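
The kubectl patch from step 1 can also be expressed declaratively; a sketch of the equivalent KubeVirt CR configuration (mirroring the same feature gates and binding registration):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - NetworkBindingPlugins
      - DynamicPodInterfaceNaming
    network:
      binding:
        l2bridge:
          domainAttachmentType: managedTap # attach via managed tap device
          migration: {} # allow live-migration with this binding
```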

1. Import provider network routes to default and user-defined networks

Enable VMs / pods connected to the default or a user-defined network to access services on a provider network.

Create FRRConfiguration CR for importing provider network routes

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-default-tlv
  namespace: frr-k8s-system
spec:
  nodeSelector: {}
  bgp:
    routers:
    - asn: 64512 
      neighbors:
      - address: 192.168.100.10 
        asn: 64512 
        disableMP: true 
        toReceive: 
          allowed:
            mode: filtered 
            prefixes:
            - prefix: 22.100.0.0/16
    - asn: 64512 
      vrf: tlv
      imports:
      - vrf: default

Similar to the previous example, with the addition of the following rule, which adds the imported routes to the user-defined network routing table.

spec:
  bgp:
    routers:
    - asn: 64512 
      vrf: tlv
      imports:
      - vrf: default
    ...

I.e.: Leaking the imported routes from the default VRF to the user-defined network VRF
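
In FRR terms, this renders as the standard BGP VRF route-leaking statement; a sketch of the fragment FRR-k8s is expected to generate for the example above (assuming the same ASN and VRF name):

```
router bgp 64512 vrf tlv
 address-family ipv4 unicast
  import vrf default
 exit-address-family
```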

2. Export default and user-defined networks routes to provider network

Expose VMs / pods connected to the default or user-defined networks outside the cluster, enabling provider network clients direct access.

Create a RouteAdvertisements (RA) CR for advertising the default and user-defined networks routes. Building on top of the previous example, the RA selects the FRRConfiguration CR to use as a baseline for advertising routes.

apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default-tlv
spec:
  advertisements:
  - PodNetwork
  nodeSelector: {}
  frrConfigurationSelector:
    matchLabels:
      kubernetes.io/metadata.name: receive-default-tlv
  networkSelectors:
  - networkSelectionType: DefaultNetwork
  - networkSelectionType: ClusterUserDefinedNetworks
    clusterUserDefinedNetworkSelector:
      networkSelector:
        matchLabels:
          kubernetes.io/metadata.name: tlv

3. VRF-lite: Extend user-defined networks as VPNs beyond the cluster

OVN-Kubernetes enables exporting user-defined network routes over a BGP session established over the network's VRF; this configuration is referred to as VRF-Lite.

Each user-defined network has an associated VRF. Assume we have the UDN from the previous example, tlv:

$ kubectl debug node/${node} -it --image=busybox -- ip vrf show
Name              Table
-----------------------
tlv        1074
$
$ kubectl debug node/${node} -it --image=busybox -- ip link show tlv
75: tlv: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 26:d8:c8:7c:15:db brd ff:ff:ff:ff:ff:ff

Once the user-defined network VRF controls the provider network interface, BGP sessions should be established over that interface:

$ kubectl debug node/${node} -it --image=busybox -- ip link show eth2
3: eth2@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master tlv state UP mode DEFAULT group default qlen 1000 
    link/ether aa:38:8f:a0:f6:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0

The provider network interface on each node could later be wired to a VPN tunnel, allowing access to the user-defined network from the other tunnel end. I.e.: spanning the user-defined network across multiple clusters or multiple provider networks.

  1. Set UDN VRF to control the provider network interface:
  provider_net_iface=eth2
  udn_vrf=tlv
  for node in $(kubectl get no -o custom-columns=:.metadata.name --no-headers); do
    kubectl debug node/${node} -it --image=busybox -- ip link set dev ${provider_net_iface} master ${udn_vrf}
  done

Or using Kubernetes-nmstate:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vrflite-tlv-control-eth2
spec:
  desiredState:
    interfaces:
    - name: eth2 # provider network interface
      state: up
      controller: tlv # UDN VRF name

  2. Import provider network routes to user-defined network

Enable VMs / pods connected to the user-defined network only to access services on a provider network, over the additional provider network interface (which the UDN VRF controls).

Create FRRConfiguration CR for importing provider network routes

---
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: vrflite-tlv
spec:
  nodeSelector: {}
  bgp:
    routers:
    - asn: 64512
      vrf: tlv # import routes into the UDN VRF
      neighbors:
      - address: 192.168.200.20 # BGP peer address
        asn: 64512
        disableMP: true
        toReceive:
          allowed:
            mode: filtered
            prefixes:
            - prefix: 33.100.0.0/16 # provider network subnet

Similar to the previous example, with the addition of spec.bgp.routers[].vrf - it configures routes to be imported directly into the user-defined network routing table (the UDN VRF).

  3. Export user-defined network routes to provider network

Expose VMs / pods connected to the user-defined network only outside the cluster, enabling provider network clients direct access, over the additional provider network interface (which the UDN VRF controls).

Create a RouteAdvertisements (RA) CR for advertising the user-defined network routes. Building on top of the previous example, the RA selects the FRRConfiguration CR to use as a baseline for advertising routes.

apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: vrflite-tlv
spec:
  advertisements:
  - PodNetwork
  nodeSelector: {}
  frrConfigurationSelector: # select FRRConfiguration as baseline
    matchLabels:
      kubernetes.io/metadata.name: vrflite-tlv
  networkSelectors:
  - networkSelectionType: ClusterUserDefinedNetworks # select udn for advertisement
    clusterUserDefinedNetworkSelector:
      networkSelector:
        matchLabels:
          kubernetes.io/metadata.name: tlv
  targetVRF: tlv # advertise routes over this VRF

Similar to the previous example, with these additions:

  • targetVRF - advertise routes over the specified VRF
  • clusterUserDefinedNetworkSelector - selects the user-defined network created previously