BGP Routed VIPs via Matchbox + Calico/Kubernetes Integration

Container Linux Config / Ignition

Here are the relevant bits that configure the VIPs on the loopback interface via systemd-networkd. They are the same on all my nodes, so no variables are needed.

networkd:
  units:
    - name: 00-vip-lo.network
      contents: |
        [Match]
        Name=lo
        [Network]
        Address=127.0.0.1/8
        Address=10.10.200.1/32
        Address=10.10.100.1/32
        DHCP=no
    - name: 00-calico.network
      contents: |
        [Match]
        Name=cali*
        [Link]
        Unmanaged=true

00-calico.network is there to ensure networkd doesn't touch the Calico virtual interfaces. Note that Unmanaged=true is only available as of the newly released systemd v233.
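
As a rough sanity check (just a sketch, not part of the config above), you can confirm the addresses landed on lo and that networkd leaves the cali* links alone:

ip addr show dev lo    # should include 10.10.100.1/32 and 10.10.200.1/32
networkctl list        # cali* links should report "unmanaged"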

I also have a service that adds blackhole routes for the VIPs, which allows BGP multipath to work with the same IPs duplicated on each node's loopback.

systemd:
  units:
    - name: vip-lo-blackhole.service
      enable: true
      contents: |
        [Unit]
        Description=Blackhole static route for VIPs
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/ip route add blackhole 10.10.200.1/32
        ExecStart=/usr/bin/ip route add blackhole 10.10.100.1/32

        [Install]
        WantedBy=multi-user.target
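
A quick check that the blackhole routes are in place (again, just a sketch):

ip route show type blackhole    # should list 10.10.100.1 and 10.10.200.1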

I also modify sshd to listen on a specific non-VIP address, so I can use nginx-proxy to route port 22 on the VIPs to GitLab SSH within Kubernetes:

    - name: sshd.socket
      enable: true
      dropins:
        - name: 50-bind-address.conf
          contents: |
            [Socket]
            ListenStream=
            FreeBind=
            ListenStream=10.10.2.{{.node_number}}:22
            FreeBind=true
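
To verify sshd is only listening on the node-specific address and not the VIPs, something like this works (illustrative only):

ss -tln | grep ':22'    # should show only the 10.10.2.x address, not 10.10.100.1 or 10.10.200.1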

Make Calico advertise the VIPs

First, you should have a calicoctl pod running, as described in the official Calico installation guide.

I use kubectl cp to copy the following HostEndpoint definitions to the calicoctl pod.

- apiVersion: v1
  kind: hostEndpoint
  metadata:
    labels:
      public: "true"
    name: node1-public-vip1
    node: node1.zbrbdl
  spec:
    expectedIPs:
    - 10.10.100.1
    interfaceName: lo
    profiles:
    - vip
- apiVersion: v1
  kind: hostEndpoint
  metadata:
    labels:
      public: "false"
    name: node1-vip0
    node: node1.zbrbdl
  spec:
    expectedIPs:
    - 10.10.200.1
    interfaceName: lo
    profiles:
    - vip
- apiVersion: v1
  kind: hostEndpoint
  metadata:
    labels:
      public: "true"
    name: node2-public-vip1
    node: node2.zbrbdl
  spec:
    expectedIPs:
    - 10.10.100.1
    interfaceName: lo
    profiles:
    - vip
- apiVersion: v1
  kind: hostEndpoint
  metadata:
    labels:
      public: "false"
    name: node2-vip0
    node: node2.zbrbdl
  spec:
    expectedIPs:
    - 10.10.200.1
    interfaceName: lo
    profiles:
    - vip
- apiVersion: v1
  kind: hostEndpoint
  metadata:
    labels:
      public: "true"
    name: node3-public-vip1
    node: node3.zbrbdl
  spec:
    expectedIPs:
    - 10.10.100.1
    interfaceName: lo
    profiles:
    - vip
- apiVersion: v1
  kind: hostEndpoint
  metadata:
    labels:
      public: "false"
    name: node3-vip0
    node: node3.zbrbdl
  spec:
    expectedIPs:
    - 10.10.200.1
    interfaceName: lo
    profiles:
    - vip

Then create/apply the HostEndpoint resources with commands like this:

kubectl -n kube-system exec calicoctl -- /calicoctl apply -f /path/to/node1-hep.yaml
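
For reference, the kubectl cp step mentioned earlier might look something like this (the file name and in-pod path are illustrative):

kubectl cp node1-hep.yaml kube-system/calicoctl:/node1-hep.yaml
kubectl -n kube-system exec calicoctl -- /calicoctl apply -f /node1-hep.yaml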

You can also create the vip profile referenced above with calicoctl in a similar fashion if you want custom network policy to govern ingress/egress traffic for these host endpoints.
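
For example, a minimal allow-all profile named vip could be applied like this (a sketch only; it assumes calicoctl accepts -f - for stdin, and you will likely want to tighten the rules to match your policy):

cat <<EOF | kubectl -n kube-system exec -i calicoctl -- /calicoctl apply -f -
- apiVersion: v1
  kind: profile
  metadata:
    name: vip
  spec:
    ingress:
    - action: allow
    egress:
    - action: allow
EOF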
