Initialize Kubernetes with a pod CIDR suitable for the flannel CNI:
kubeadm init --pod-network-cidr=10.244.0.0/16
Copy kubeconfig:
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
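A quick sanity check that kubectl now reaches the cluster (the node typically shows NotReady until a CNI plugin is installed):
kubectl get nodes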
Check that bridge-nf-call-iptables equals 1:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
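If the value is 0, it can be set like this (a sketch, assuming the br_netfilter module is available on the host):
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1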
Make sure the master can run pods (in the case of a single-node cluster):
kubectl taint nodes --all node-role.kubernetes.io/master-
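To confirm the taint was removed:
kubectl describe nodes | grep -i taints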
Install flannel CNI:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
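To verify that flannel came up, check that its DaemonSet pods reach Running (namespace and labels can vary between flannel versions, so a simple grep is used here):
kubectl get pods --all-namespaces | grep flannel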
Install weave CNI:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
If an existing subnet on the host overlaps with Weave's default range 10.32.0.0/12, the file has to be edited before applying. First, get the YAML:
wget -O original-kube-weave.yaml "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Then, edit it to add an IPALLOC_RANGE environment variable to the DaemonSet spec, set to a range that does not overlap:
containers:
  - name: weave
    command:
      - /home/weave/launch.sh
    env:
      - name: IPALLOC_RANGE
        value: 10.22.0.0/16
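Then apply the edited file (this assumes the edit was made in place in original-kube-weave.yaml from above):
kubectl apply -f original-kube-weave.yaml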
Install Genie CNI:
kubectl apply -f https://raw.githubusercontent.com/Huawei-PaaS/CNI-Genie/master/conf/1.8/genie-plugin.yaml
If no CNI annotation is added to a pod, Genie will use its "default CNI plugin" for that pod. By default, the "default CNI plugin" is weave; to change that, the above file has to be edited before applying. First, get the file:
wget -O original-genie-plugin.yaml https://raw.githubusercontent.com/Huawei-PaaS/CNI-Genie/master/conf/1.8/genie-plugin.yaml
Then, edit it to add a default_plugin value, e.g. "flannel", to the cni_genie_network_config data of the genie-config config map:
  # The CNI network configuration to install on each node.
  cni_genie_network_config: |-
    {
      "name": "k8s-pod-network",
      "type": "genie",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "default_plugin": "flannel",
      "hostname": "__KUBERNETES_NODE_NAME__",
      "policy": {
        "type": "k8s",
        "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
      },
      "kubernetes": {
        "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
        "kubeconfig": "/etc/cni/net.d/genie-kubeconfig"
      },
      "romana_root": "http://__ROMANA_SERVICE_HOST__:__ROMANA_SERVICE_PORT__",
      "segment_label_name": "romanaSegment"
    }
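Then apply the edited file (assuming the edit was made in place in original-genie-plugin.yaml from above):
kubectl apply -f original-genie-plugin.yaml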
The following pod definition will create two interfaces, on which a service could be created:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-multinet
  labels:
    app: my-web
  annotations:
    cni: "weave,flannel"
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80
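To create the pod, save the definition to a file and apply it (nginx-multinet.yaml is just an example file name):
kubectl apply -f nginx-multinet.yaml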
See that the interfaces were created, each within the subnet of its CNI:
# kubectl exec -it nginx-multinet -- ip addr
...
3: eth1@if687: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:0a:f4:00:06 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.6/24 scope global eth1
...
685: eth0@if686: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1376 qdisc noqueue state UP
link/ether ba:c4:73:d6:41:cf brd ff:ff:ff:ff:ff:ff
inet 10.22.0.5/16 brd 10.22.255.255 scope global eth0
...
Routing is also set correctly:
# kubectl exec -it nginx-multinet -- ip route
default via 10.22.0.1 dev eth0
10.22.0.0/16 dev eth0 proto kernel scope link src 10.22.0.5
10.244.0.0/24 dev eth1 proto kernel scope link src 10.244.0.6
10.244.0.0/16 via 10.244.0.1 dev eth1
Service creation:
kubectl create service clusterip my-web --tcp=8888:80
This would allow access to nginx via the weave network on port 8888, since weave is the CNI that gets eth0 (the service matches the pod through its app: my-web label).
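As a rough check (the ClusterIP below is whatever the cluster assigns), get the service address and query it from the master or from another pod:
kubectl get service my-web
curl -I http://<cluster-ip>:8888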
The following pod definition will create two interfaces, both from the flannel CNI:
apiVersion: v1
kind: Pod
metadata:
  name: cirros-flannel-2
  annotations:
    cni: "flannel,flannel"
spec:
  containers:
    - name: cirros
      image: cirros
      resources:
        limits:
          memory: "128Mi"
      command: ["sleep", "1000"]
See that the interfaces were created, each within the flannel subnet:
# kubectl exec -it cirros-flannel-2 -- ip addr
...
3: eth0@if2234: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:0a:f4:00:0b brd ff:ff:ff:ff:ff:ff
inet 10.244.0.11/24 scope global eth0
...
5: eth1@if2235: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:0a:f4:00:0c brd ff:ff:ff:ff:ff:ff
inet 10.244.0.12/24 scope global eth1
...
Routing is set as follows:
# kubectl exec -it cirros-flannel-2 -- ip route
default via 10.244.0.1 dev eth0
10.244.0.0/24 dev eth0 proto kernel scope link src 10.244.0.11
10.244.0.0/24 dev eth1 proto kernel scope link src 10.244.0.12
10.244.0.0/16 via 10.244.0.1 dev eth0
However, this does not help, as the returning packets are lost. For example:
kubectl exec cirros-flannel-2 -- /bin/sh -c "curl --interface eth0 -I www.google.com"
Works well, but:
kubectl exec cirros-flannel-2 -- /bin/sh -c "curl --interface eth1 -I www.google.com"
Times out (even though the outgoing packets are seen on the host).
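One way to narrow down where the replies get dropped is to capture on the flannel bridge on the host while running the failing curl (a sketch; cni0 is flannel's default bridge name, and 10.244.0.12 is the eth1 address from the output above):
tcpdump -ni cni0 host 10.244.0.12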