
Kubernetes Service Session Affinity

Kube-proxy in iptables mode considers affinity per Service:ServicePort; each ServicePort in the same Service has its own, independent affinity.
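
As a hedged illustration (assuming kube-proxy in iptables mode and shell access to a node), this shows up in the nat table: kube-proxy implements ClientIP affinity with the iptables recent match, keeping one --name list per KUBE-SEP endpoint chain, so each ServicePort tracks clients separately:

$ sudo iptables-save -t nat | grep -- '-m recent'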

There is no distributed consistency between nodes to keep track of Service sessions. When using LoadBalancer-type Services, externalTrafficPolicy must be set to Local, or the LoadBalancer must itself ensure session affinity (the same client tuple always hits the same Kubernetes node).
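
A minimal sketch of setting this on an existing Service (lb-service matches the manifest at the end of this gist):

$ kubectl patch service lb-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'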

Testing Cilium

Create a DPv2 (GKE Dataplane V2, Cilium-based) cluster.
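
A hedged sketch of creating one on GKE (cluster name and zone are placeholders):

$ gcloud container clusters create dpv2-test --enable-dataplane-v2 --zone europe-west1-b

Then check that the Cilium agent (anetd) pods are running: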

kubectl get pods -A | grep anetd
kube-system   anetd-2rpvj                                                1/1     Running   0          26d
kube-system   anetd-2vpqv                                                1/1     Running   0          26d

Create a Service exposing two ports; the example deployment returns the pod name on the URL path /hostname:

$ kubectl apply -f https://gist.githubusercontent.com/aojea/369ad6a5d4cbb6b0fbcdd3dd909d9887/raw/0770375a136361fc55b8313b00866abfe9780b60/loadbalancer.yaml
deployment.apps/test-deployment unchanged
service/lb-service created

Connect to each of the ports and check if affinity is maintained:

$ kubectl get service/lb-service
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                         AGE
lb-service   LoadBalancer   10.92.3.213   35.205.157.206   8080:31000/TCP,7070:31989/TCP   6m35s
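
Before testing, it may help to confirm the affinity settings were applied (a minimal check; the expected values come from the manifest below):

$ kubectl get service lb-service -o jsonpath='{.spec.sessionAffinity} {.spec.sessionAffinityConfig.clientIP.timeoutSeconds}'
ClientIP 10800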

It seems Cilium behaves like kube-proxy iptables: each Service:ServicePort is independent.

aojea@aojea:~/src/kubernetes$ curl 35.205.157.206:7070/hostname
test-deployment-55cd4cd9dc-dw44l
aojea@aojea:~/src/kubernetes$ curl 35.205.157.206:8080/hostname
test-deployment-55cd4cd9dc-fzbf5

Let's check affinity on port 7070:

$ for i in $(seq 1 20); do  { curl 35.205.157.206:7070/hostname ; echo ; } >> port7070.log ; done
$ sort port7070.log | uniq -c
     20 test-deployment-55cd4cd9dc-dw44l

and for port 8080:

$ for i in $(seq 1 20); do  { curl 35.205.157.206:8080/hostname ; echo ; } >> port8080.log ; done
$ sort port8080.log | uniq -c
     20 test-deployment-55cd4cd9dc-fzbf5
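
For a deeper look, Cilium's own view of the service (including affinity) can be dumped from one of the agent pods; a hedged sketch, assuming the cilium CLI is available inside the anetd container:

$ kubectl -n kube-system exec anetd-2rpvj -- cilium service list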
loadbalancer.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: MyApp
spec:
  replicas: 10
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      # two agnhost containers so the Service can expose two independent ports
      containers:
      - name: agnhost8080
        image: k8s.gcr.io/e2e-test-images/agnhost:2.41
        args:
        - netexec
        - --http-port=8080
        - --udp-port=8080
        ports:
        - containerPort: 8080
      - name: agnhost7070
        image: k8s.gcr.io/e2e-test-images/agnhost:2.41
        args:
        - netexec
        - --http-port=7070
        - --udp-port=7070
        ports:
        - containerPort: 7070
---
apiVersion: v1
kind: Service
metadata:
  name: lb-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: MyApp
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # 3 hours, the default ClientIP affinity timeout
  ports:
  - name: port8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port7070
    protocol: TCP
    port: 7070
    targetPort: 7070
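
To watch affinity expire faster while testing, the timeout can be lowered (a hedged sketch; the API accepts values up to one day, 86400 seconds):

$ kubectl patch service lb-service -p '{"spec":{"sessionAffinityConfig":{"clientIP":{"timeoutSeconds":10}}}}'

After the timeout elapses with no traffic, subsequent requests from the same client IP may land on a different backend.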