Notes on Linux, container and Kubernetes networking commands and concepts

Scenario 1: only 2 containers (red and blue) are created

First create the network namespaces that stand in for the two containers:

ip netns add red

ip netns add blue

ip link add veth-red type veth peer name veth-blue

ip link set veth-red netns red

ip link set veth-blue netns blue

ip -n red addr add 192.168.15.1/24 dev veth-red

ip -n blue addr add 192.168.15.2/24 dev veth-blue

ip -n red link set veth-red up

ip -n blue link set veth-blue up
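At this point the two namespaces can reach each other over the veth pair. A quick check (assuming the commands above succeeded):

ip netns exec red ping -c 1 192.168.15.2

ip netns exec blue ping -c 1 192.168.15.1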

This is not scalable, as you would have to connect every pair of containers one by one. E.g. with 4 containers you would need n(n-1)/2 = 4*3/2 = 6 virtual cables to allow all containers to communicate with each other.

Scenario 2: multiple containers created

ip link add v-net-0 type bridge

ip link set dev v-net-0 up

ip link add veth-red type veth peer name veth-red-br

ip link add veth-blue type veth peer name veth-blue-br

ip link set veth-red netns red

ip link set veth-red-br master v-net-0

ip link set veth-blue netns blue

ip link set veth-blue-br master v-net-0

ip -n red addr add 192.168.15.1/24 dev veth-red

ip -n blue addr add 192.168.15.2/24 dev veth-blue

ip -n red link set veth-red up

ip -n blue link set veth-blue up

ip link set veth-red-br up

ip link set veth-blue-br up

Now every time a container is spun up, you only have to create one veth pair and connect its two ends to the newly created container and to the virtual switch (v-net-0 in this case) respectively, as sketched below.
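A minimal sketch of that per-container step as a shell function (the name connect_container and the address argument are illustrative, not from the original notes; keep container names short so the veth interface names stay under the 15-character limit):

connect_container() {
  name=$1; addr=$2                            # e.g. connect_container green 192.168.15.3/24
  ip netns add "$name"                        # namespace for the new container
  ip link add "veth-$name" type veth peer name "veth-$name-br"
  ip link set "veth-$name" netns "$name"      # one end into the container
  ip link set "veth-$name-br" master v-net-0  # other end onto the bridge
  ip -n "$name" addr add "$addr" dev "veth-$name"
  ip -n "$name" link set "veth-$name" up
  ip link set "veth-$name-br" up
}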

Scenario 3: allow communication between virtual bridge and host

ip addr add 192.168.15.5/24 dev v-net-0 (from the host's point of view, the bridge is just another interface on the machine, like eth0)
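Assigning the /24 also installs a route to the container subnet on the host, so the host can now reach the containers directly (a quick check; exact route output may vary):

ip route show | grep 192.168.15

ping -c 1 192.168.15.1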

Scenario 4: allow communication between containers and the WAN (assuming the bridge IP is 192.168.15.5, the host IP is 192.168.1.2 and the remote machine IP is 192.168.1.3)

ip netns exec blue ip route add 192.168.1.0/24 via 192.168.15.5 (the blue container only knows about the bridge interface, so the bridge IP is its gateway to the host network)

ip netns exec blue ip route add default via 192.168.15.5 (default route for internet-bound traffic)

iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE (container IPs need to be translated to the host IP, because the remote network knows nothing about the private network on the host machine)

iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.15.2:80 (port forwarding: every packet destined to port 80 on the host machine is forwarded to the IP assigned to a particular container, 192.168.15.2 in this case)
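One more host-level switch is needed for any of this to work: the kernel must be allowed to forward packets between interfaces (disabled by default on many distros):

sysctl -w net.ipv4.ip_forward=1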

Scenario 5: CNI in kubernetes

CNI is nothing but a network script that performs the network tasks described above to set up container/pod communication. A simplified version of the script looks something like this:

case "$1" in
ADD)
  # Create veth pair
  ip link add ...
  # Attach veth pair
  ip link set ...
  # Assign IP address
  ip addr add ...
  # Bring up interface
  ip link set ... up
  ;;
DEL)
  # Delete veth pair
  ip link del ...
  ;;
esac

kubelet on each node, upon creating a new container, will execute this script based on the configuration below:

--cni-conf-dir=/etc/cni/net.d (CNI configuration)
--cni-bin-dir=/opt/cni/bin (script directory)
./net-script.sh add <container> <namespace>
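The configuration directory holds a JSON file per network. A minimal sketch for the standard bridge plugin, matching the setup above (the file name 10-bridge.conf and the subnet are illustrative):

cat > /etc/cni/net.d/10-bridge.conf <<EOF
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "v-net-0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.15.0/24"
  }
}
EOF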

EKS CNI

https://github.com/aws/amazon-vpc-cni-k8s

The aws-node DaemonSet installs the aws-cni binary into /opt/cni/bin. Without this CNI binary, pods cannot be created:

Warning  FailedCreatePodSandBox  13s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "8466762949e9b04fe446ac14e41c13d91959c5d82f4da94e69f75f70582ec50d" network for pod "test-66dfd4b98f-74v5k": networkPlugin cni failed to set up pod "test-66dfd4b98f-74v5k_default" network: failed to find plugin "aws-cni" in path [/opt/cni/bin], failed to clean up sandbox container "8466762949e9b04fe446ac14e41c13d91959c5d82f4da94e69f75f70582ec50d" network for pod "test-66dfd4b98f-74v5k": networkPlugin cni failed to teardown pod "test-66dfd4b98f-74v5k_default" network: failed to find plugin "aws-cni" in path [/opt/cni/bin]]
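If you see this error, check that the aws-node DaemonSet is healthy and that the binary actually exists on the affected node (standard paths assumed):

kubectl -n kube-system get daemonset aws-node

ls /opt/cni/bin | grep aws-cni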