References:
- Session affinity for same service and different ports is forwarded to different endpoints for each destination port
- demote service ClientIP affinity timeout tests from conformance
Kube-proxy in iptables mode tracks affinity per Service:ServicePort, so each ServicePort in the same Service has its own, independent affinity.
There is no state shared between nodes to keep track of Service sessions, so when using LoadBalancer type Services, externalTrafficPolicy must be set to Local, or the load balancer itself must ensure session affinity (the same tuple always hits the same Kubernetes node).
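For illustration, a minimal sketch of a LoadBalancer Service that requests ClientIP affinity on two ports with Local external traffic policy (names, ports and selector here are hypothetical; this is not the manifest used below):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: affinity-example
spec:
  type: LoadBalancer
  # keep traffic on the node that received it, so no cross-node session state is needed
  externalTrafficPolicy: Local
  # request ClientIP session affinity; kube-proxy iptables tracks it per Service:ServicePort
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  selector:
    app: test-app
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: alt
    port: 7070
    targetPort: 7070
EOF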
Create a DPv2 cluster
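One way to create such a cluster (cluster name and zone are placeholders; other gcloud defaults assumed):
gcloud container clusters create dpv2-affinity-test --enable-dataplane-v2 --zone europe-west1-b
The Cilium agents run as the anetd DaemonSet in kube-system: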
kubectl get pods -A | grep anetd
kube-system anetd-2rpvj 1/1 Running 0 26d
kube-system anetd-2vpqv 1/1 Running 0 26d
Create a Service exposing 2 ports; the example application returns the pod name on the /hostname URL path:
kubectl apply -f https://gist.githubusercontent.com/aojea/369ad6a5d4cbb6b0fbcdd3dd909d9887/raw/0770375a136361fc55b8313b00866abfe9780b60/loadbalancer.yaml
deployment.apps/test-deployment unchanged
service/lb-service created
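To confirm what affinity and traffic policy the applied Service actually requests (assuming the manifest sets them), the spec can be inspected directly:
kubectl get service lb-service -o jsonpath='{.spec.sessionAffinity}{" "}{.spec.externalTrafficPolicy}{"\n"}'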
Connect to each of the ports and check if affinity is maintained:
kubectl get service/lb-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
lb-service LoadBalancer 10.92.3.213 35.205.157.206 8080:31000/TCP,7070:31989/TCP 6m35s
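Rather than hard-coding the external IP in the curls below, it can be captured into a variable (a small convenience sketch):
LB_IP=$(kubectl get service lb-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "${LB_IP}"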
It seems Cilium behaves like kube-proxy iptables: each Service:ServicePort affinity is independent.
$ curl 35.205.157.206:7070/hostname
test-deployment-55cd4cd9dc-dw44l
$ curl 35.205.157.206:8080/hostname
test-deployment-55cd4cd9dc-fzbf5
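The datapath view can also be checked from the Cilium agent itself; the service and backend table is visible from one of the anetd pods (pod name taken from the earlier listing; availability of the cilium CLI inside anetd may vary by GKE version):
kubectl -n kube-system exec anetd-2rpvj -- cilium service list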
Let's check affinity on port 7070:
$ for i in $(seq 1 20); do { curl 35.205.157.206:7070/hostname ; echo ; } >> port7070.log ; done
$ sort port7070.log | uniq -c
20 test-deployment-55cd4cd9dc-dw44l
and for port 8080:
for i in $(seq 1 20); do { curl 35.205.157.206:8080/hostname ; echo ; } >> port8080.log ; done
sort port8080.log | uniq -c
20 test-deployment-55cd4cd9dc-fzbf5
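Putting both checks together, a small sketch that exercises both ports from the same client in one run, so the per-port affinity can be compared side by side:
for port in 7070 8080; do
  # 20 requests per port; each port should keep hitting its own backend
  for i in $(seq 1 20); do
    curl -s 35.205.157.206:${port}/hostname
    echo
  done | sort | uniq -c | sed "s/^/port ${port}: /"
done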