Kind & Minikube with MetalLB on macOS

Dev K8s Networking on macOS

Context

Unlike on Linux, on macOS Docker runs inside a VM. This causes issues when trying to access containers directly, as their IPs are not reachable from the host network. It is especially annoying when setting up a local development environment for K8s.

This short guide aims to solve that problem. The solution is: https://github.com/chipmk/docker-mac-net-connect

Assumptions:

  • Docker is Docker Desktop (version 4.34.2 at the time of writing)
  • Docker CLI version: 27.2.0
  • docker-mac-net-connect version 0.1.3
  • kubectl version: v1.31.0
  • kind version: v0.24.0
  • minikube version: v1.34.0
  • helm version: v3.16.1

How does it work?

Full details: https://github.com/chipmk/docker-mac-net-connect.

In a nutshell: it creates a simple WireGuard-based VPN between the local host network and the Linux VM's network interface.

This is best illustrated through an example.

Install Docker Mac Net Connect

# Install via Homebrew
$ brew install chipmk/tap/docker-mac-net-connect

# Run the service and register it to launch at boot
$ sudo brew services start chipmk/tap/docker-mac-net-connect
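
Once the service is running, an optional sanity check is to confirm that routes to Docker's bridge networks now go through the WireGuard utun interface it creates (exact route entries and interface numbers vary per machine):

# Routes to the default Docker bridge network (172.17.0.0/16) should point at a utun interface
$ netstat -rn | grep 172.17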

Simple test

# Start NGINX container
$ docker run --rm --name nginx -d nginx

# Get Container's IP
$ docker inspect nginx --format '{{.NetworkSettings.IPAddress}}'
172.17.0.3

# Make an HTTP request directly to its IP
$ curl -I http://172.17.0.3:80
HTTP/1.1 200 OK
Server: nginx/1.27.1
Date: Sun, 15 Sep 2024 22:01:28 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Mon, 12 Aug 2024 14:21:01 GMT
Connection: keep-alive
ETag: "66ba1a4d-267"
Accept-Ranges: bytes

This works fine thanks to the VPN created by Docker Mac Net Connect.
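
The test container can be cleaned up afterwards:

# Stop the test container (--rm removes it once it stops)
$ docker stop nginx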

Setup with Kind

Create kind cluster and install MetalLB:

# Create Kind Cluster
kind create cluster

# Install MetalLB
helm repo add metallb https://metallb.github.io/metallb && helm repo update
helm upgrade --install metallb metallb/metallb -n metallb-system --create-namespace

# Wait for MetalLB to be ready
kubectl rollout -n metallb-system status deployment metallb-controller

Gather some basic networking information

# Find IP of kind network
$ docker network inspect -f '{{.IPAM.Config}}' kind
[{...} {172.18.0.0/16  172.18.0.1 map[]}]

# Check Network's gateway
$ docker network inspect kind | gron | grep Gateway
json[0].IPAM.Config[1].Gateway = "172.18.0.1";

# Check Container's IP
$ docker inspect kind-control-plane --format '{{.NetworkSettings.Networks.kind.IPAddress}}'
172.18.0.2
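
If gron is not installed, the same subnet and gateway information can also be pulled out with a Go template (a sketch; the kind network typically carries both an IPv6 and an IPv4 entry, so all entries are printed):

# Alternative without gron: print Subnet and Gateway for every IPAM config entry
$ docker network inspect kind -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{"\n"}}{{end}}'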

From this we know that:

  • kind network is 172.18.0.0/16 (so mask is 255.255.0.0)
  • Network's Gateway is 172.18.0.1
  • Container running kind has IP 172.18.0.2

Based on this, a safe range for the IPAddressPool can be as follows:

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - local-pool
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.1-172.18.255.255
EOF
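
The created resources can be verified with kubectl (an optional check; the resource names below are the CRDs installed by the MetalLB chart):

# Confirm the address pool and L2 advertisement exist
$ kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system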

We can test this by deploying two nginx services:

# Deploy and expose nginx
$ kubectl create deployment nginx1 --image nginx
$ kubectl create deployment nginx2 --image nginx

$ kubectl expose deployment nginx1 --type=LoadBalancer --port=8080 --target-port 80
$ kubectl expose deployment nginx2 --type=LoadBalancer --port=8080 --target-port 80

# Find IPs of LoadBalancers
$ kubectl get svc | grep nginx
nginx1       LoadBalancer   10.96.158.176   172.18.255.5   8080:30821/TCP   20s
nginx2       LoadBalancer   10.96.193.116   172.18.255.6   8080:30883/TCP   20s

and testing access through the LoadBalancers:

$ curl -I 172.18.255.5:8080
HTTP/1.1 200 OK
Server: nginx/1.27.1
Date: Sun, 15 Sep 2024 22:13:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Mon, 12 Aug 2024 14:21:01 GMT
Connection: keep-alive
ETag: "66ba1a4d-267"
Accept-Ranges: bytes

$ curl -I 172.18.255.6:8080
HTTP/1.1 200 OK
Server: nginx/1.27.1
Date: Sun, 15 Sep 2024 22:14:02 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Mon, 12 Aug 2024 14:21:01 GMT
Connection: keep-alive
ETag: "66ba1a4d-267"
Accept-Ranges: bytes

Setup with Minikube

Setup for Minikube is quite similar, with a few small exceptions:

  • It's best to create the docker network beforehand, as the default network has a quite limited range: 192.168.49.0/24
  • Need to remove the node.kubernetes.io/exclude-from-external-load-balancers label for MetalLB to work

Create the network (make sure the subnet is not in use), start Minikube, and install MetalLB:

# Set up docker network (ensure the subnet is not in use)
docker network create minikube --subnet=172.42.0.0/16 --gateway=172.42.0.1

# Start minikube
minikube start --driver docker --network minikube

# Adjust labels
kubectl label nodes minikube node.kubernetes.io/exclude-from-external-load-balancers-

# Install MetalLB
helm repo add metallb https://metallb.github.io/metallb && helm repo update
helm upgrade --install metallb metallb/metallb -n metallb-system --create-namespace

# Wait for MetalLB to be ready
kubectl rollout -n metallb-system status deployment metallb-controller

The right IPAddressPool in this situation is as follows:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - local-pool
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.42.255.1-172.42.255.255
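
Since this version is not wrapped in a heredoc, it can be applied by saving it to a file and running kubectl apply (the file name here is just an example):

# Apply the MetalLB configuration (assuming it was saved as metallb-pool.yaml)
kubectl apply -f metallb-pool.yaml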

Deploying nginx as in the previous example will now yield:

$ kubectl get svc | grep nginx
nginx1       LoadBalancer   10.98.144.103   172.42.255.1   8080:32744/TCP   53m
nginx2       LoadBalancer   10.105.70.123   172.42.255.2   8080:32356/TCP   53m
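
As before, these LoadBalancer IPs should respond directly from the host (the exact addresses depend on what MetalLB assigned):

# Test access through one of the LoadBalancer IPs listed above
$ curl -I 172.42.255.1:8080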

Troubleshooting

Try restarting the service:

sudo brew services restart chipmk/tap/docker-mac-net-connect

To get more information on potential errors, try:

sudo brew services stop chipmk/tap/docker-mac-net-connect
sudo docker-mac-net-connect

One common cause of crashes is that the tool relies on the default docker socket path; this can be enabled in Docker Desktop options or symlinked easily:

sudo ln -sf ~/.docker/run/docker.sock /var/run/docker.sock
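
After creating the symlink, the socket should be visible at the default path (a quick check):

# Verify the default docker socket path now resolves
ls -l /var/run/docker.sock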

If that does not help, check https://github.com/chipmk/docker-mac-net-connect?tab=readme-ov-file#troubleshooting

