This repository covers an opinionated approach to multicluster service mesh.
Most of the content for this proof of concept will be housed in the source repositories for specific projects.
Use the right tool for the right job. Just because this is an option doesn't mean it is the right option!
Reasons you might want a Multi Cluster Service Mesh:
- It's cool!
- I want to work around my networking team
- Isolated fault domains
- Unified trust model
- Multiple networks
Reasons you might not want a Multi Cluster Service Mesh:
- Istio is too complicated
- Resources are already limited
- Latency is public enemy number one
Alternatives for connecting clusters without a mesh: Submariner, flat networks, etc. If you're not already using a service mesh, I would actually suggest trying one of these first.

Other service meshes: Consul, Linkerd, AWS App Mesh, etc.
Istio is supported by Rancher (I wanted to see if our version is compatible). Other service mesh solutions may come into play with the adoption of Service Mesh Hub.
There are two options: a shared control plane or replicated control planes. As the Istio documentation points out, these methods aren't mutually exclusive. The more robust option, which allows for failover, is the replicated control plane model; that is the model the demos below will use.
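In the replicated control plane model, a service in another cluster is reached through a `.global` hostname backed by a ServiceEntry that points at the remote cluster's ingress gateway on port 15443. A sketch, assuming a remote `httpbin` service; the hostname, address, and gateway address are placeholders:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-remote
spec:
  hosts:
  # Clients in this cluster call httpbin.bar.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 8000
    protocol: http
  resolution: DNS
  # A unique, otherwise-unused VIP per remote service (240.0.0.0/4 is a common choice)
  addresses:
  - 240.0.0.2
  endpoints:
  # The remote cluster's istio-ingressgateway, on the multicluster port
  - address: <cluster2-ingress-gateway-address>
    ports:
      http: 15443
```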
Rancher installs Istio via a Helm chart, a model the Istio community is moving away from.
The following demos will use init containers to set up the pod network. (Avoid the Istio CNI plugin unless a security mandate requires it.)
Istio 1.4.7 (the version I'll be running through Rancher) makes it easy to generate a root CA and intermediate CAs for plugging in your own certs.
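Once you have a root cert and per-cluster intermediates, each cluster's CA material goes into a `cacerts` secret in `istio-system`. The file names below are the ones Istio expects; the local paths are placeholders for wherever you generated the certs:

```shell
# Create the secret Istio reads its plugged-in CA from.
# ca-cert.pem / ca-key.pem: this cluster's intermediate cert and key
# root-cert.pem: the shared root cert
# cert-chain.pem: the intermediate + root chain
kubectl create secret generic cacerts -n istio-system \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem
```

Repeat per cluster, with each cluster getting its own intermediate signed by the shared root so the meshes share a trust domain.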
You can trick Rancher into thinking Istio is enabled (and install an unsupported version) by installing an app in the `system` project with the name `cluster-istio` in the namespace `istio-system`.
Additional Helm configuration:

```
global.podDNSSearchNamespaces[0]: global
global.podDNSSearchNamespaces[1]: "{{ valueOrDefault .DeploymentMeta.Namespace \"default\" }}.global"
global.multiCluster.enabled: true
global.controlPlaneSecurityEnabled: true
global.mtls.enabled: true
security.selfSigned: false
istiocoredns.enabled: true
gateways.istio-egressgateway.enabled: true
gateways.istio-egressgateway.env.ISTIO_META_REQUESTED_NETWORK_VIEW: "external"
```
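For reference, if you install the chart with plain Helm rather than Rancher answers, the same flat keys nest into a values file roughly like this (an untested sketch of the equivalent structure):

```yaml
global:
  podDNSSearchNamespaces:
    - global
    - '{{ valueOrDefault .DeploymentMeta.Namespace "default" }}.global'
  multiCluster:
    enabled: true
  controlPlaneSecurityEnabled: true
  mtls:
    enabled: true
security:
  selfSigned: false
istiocoredns:
  enabled: true
gateways:
  istio-egressgateway:
    enabled: true
    env:
      ISTIO_META_REQUESTED_NETWORK_VIEW: "external"
```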
The demos run on EC2 instances with the cloud provider enabled, which lets me use load balancers for the ingress gateways.
Use `kubectx` to switch between clusters. Merge your kubeconfigs with:

```shell
KUBECONFIG=~/.kube/cluster1:~/.kube/cluster2:... kubectl config view --merge --flatten > ~/.kube/config
```

Or pull them straight from Rancher:

```shell
rancher clusters ls --format '{{.Cluster.ID}}' | xargs -I "{}" -n 1 bash -c "kubectl konfig import --save <(rancher clusters kubeconfig {})"
```
The additional step of updating CoreDNS should follow the CoreDNS (>=1.4.0) tab of the Istio docs.
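That step wires the `global` stub domain to the istiocoredns service. The patch ends up looking roughly like this; the forward address is a placeholder for the istiocoredns ClusterIP (`kubectl get svc -n istio-system istiocoredns -o jsonpath='{.spec.clusterIP}'`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # ... existing plugins unchanged ...
        forward . /etc/resolv.conf
    }
    global:53 {
        errors
        cache 30
        # CoreDNS >= 1.4.0 uses forward (older versions used proxy)
        forward . 10.43.0.10   # istiocoredns ClusterIP (placeholder)
    }
```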
- Running Bookinfo with each type of application being hosted from a different cluster.
- Automating the process of spanning multiple clusters.
- Adopting other mesh types.