The following guide covers installing Istio as the service mesh layer on a newly created Kubernetes cluster. It is not intended for retrofitting Istio into a cluster that already has pods running.
The following prerequisites determine the required resources and software versions:
- PKS 1.2 or later
- NSX-T 2.3 or later
- A medium NSX-T load balancer, sized for the number of virtual servers needed
- A PKS plan with at least 1 master and 2 worker nodes
- A VMware Cloud PKS Smart Cluster, either development or production
- kubectl installed on the local host
- vke installed on the local host and authenticated to the VMware Cloud PKS cluster (download)
NOTE: It's not necessary to enable Privileged Containers within the PKS plan/profile.
For PKS, start with a fresh Kubernetes cluster using a medium-sized load balancer so there are enough virtual servers:
$ pks create-cluster pks-cluster-med-lb-istio --external-hostname k8s-cluster-med-lb-istio --plan large --num-nodes 3 --network-profile network-profile-medium
$ pks get-credentials pks-cluster-med-lb-istio
For VMware Cloud PKS, create a Smart Cluster in any region, in either development or production:
$ vke cluster auth setup my-cluster
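Whichever cluster type you created, it's worth confirming that kubectl is now pointed at the new cluster before installing anything:
$ kubectl config current-context
$ kubectl get nodes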
The following is a step-by-step guide taken from the Quick Start, Installation with Helm.
On a machine with kubectl access to the Kubernetes cluster, download the Istio package, which contains all the needed files. The export command puts the istioctl command-line tool on your PATH.
$ curl -L https://git.io/getLatestIstio | sh -
$ cd istio-[version number]
$ export PATH=$PWD/bin:$PATH
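As a quick sanity check that the download succeeded and your PATH was updated, istioctl should now resolve and report its version:
$ istioctl version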
Helm is the package manager for Kubernetes (similar to apt, yum, or Homebrew) and runs on the local machine. This process creates a Helm template from the downloaded Istio files. Using Tiller with helm install is not recommended at this time, as errors are likely to occur during the CRD installation.
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
Install Istio's Custom Resource Definitions via kubectl apply, and wait a few seconds for the CRDs to be committed to the kube-apiserver:
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
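To confirm the CRDs were committed, you can list them and filter on the istio.io group; entries such as virtualservices.networking.istio.io should appear:
$ kubectl get crds | grep 'istio.io'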
Render Istio’s core components to a Kubernetes manifest called istio.yaml:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml
The following section applies only to VMware Cloud PKS.
Update istio.yaml to comment out the nodePort entries for the http2, https, and tcp ports of the istio-ingressgateway service. These nodePorts fall outside the allowed node port range for a VMware Cloud PKS cluster.
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    chart: gateways-1.0.3
    release: istio
    heritage: Tiller
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  ports:
    - name: http2
      # nodePort: 31380
      port: 80
      targetPort: 80
    - name: https
      # nodePort: 31390
      port: 443
    - name: tcp
      # nodePort: 31400
      port: 31400
    - name: tcp-pilot-grpc-tls
      port: 15011
      targetPort: 15011
    - name: tcp-citadel-grpc-tls
      port: 8060
      targetPort: 8060
    - name: tcp-dns-tls
      port: 853
      targetPort: 853
    - name: http2-prometheus
      port: 15030
      targetPort: 15030
    - name: http2-grafana
      port: 15031
      targetPort: 15031
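Rather than hunting for the nodePort entries by hand, a quick grep (assuming the manifest was written to $HOME/istio.yaml as above) prints the line numbers to comment out:
$ grep -n 'nodePort:' $HOME/istio.yaml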
Update istio.yaml to distribute pods across zones. By default, Istio uses one replica for each of its control plane pods. To ensure the Istio control plane pods are spread across different nodes and zones, we can use pod anti-affinity. To do this, modify the Deployment manifests for all three control plane components:
- istio-citadel
- istio-pilot
- istio-galley
Change replicas to 3 and the topologyKey to kubernetes.io/zone so that the pods are distributed across zones. For istio-citadel and istio-galley, change the labelSelector to match their own labels, as in the edited manifests below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-pilot
  namespace: istio-system
..........
spec:
  replicas: 3
  template:
    metadata:
      labels:
        istio: pilot
        app: pilot
..........
..........
..........
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: istio
                operator: In
                values:
                - pilot
              - key: app
                operator: In
                values:
                - pilot
            topologyKey: kubernetes.io/zone
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-citadel
  namespace: istio-system
..........
spec:
  replicas: 3
  template:
    metadata:
      labels:
        istio: citadel
      annotations:
        sidecar.istio.io/inject: "false"
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      serviceAccountName: istio-citadel-service-account
      containers:
..........
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: istio
                operator: In
                values:
                - citadel
            topologyKey: kubernetes.io/zone
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-galley
  namespace: istio-system
..........
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        istio: galley
      annotations:
        sidecar.istio.io/inject: "false"
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      serviceAccountName: istio-galley-service-account
      containers:
      - name: validator
..........
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: istio
                operator: In
                values:
                - galley
            topologyKey: kubernetes.io/zone
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
                - ppc64le
                - s390x
..........
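Note that anti-affinity can only spread pods across zones if the nodes actually carry the label used as the topologyKey. Zone label keys can vary by provider, so before applying the manifest, confirm that your nodes expose a kubernetes.io/zone label; if they use a different key, set the topologyKey to match:
$ kubectl get nodes --show-labels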
For both PKS and VMware Cloud PKS, create a namespace and install Istio via the Helm manifest:
$ kubectl create namespace istio-system
$ kubectl apply -f $HOME/istio.yaml
Ensure the following Kubernetes services are deployed: istio-pilot, istio-ingressgateway, istio-policy, istio-telemetry, prometheus, istio-galley, and, optionally, istio-sidecar-injector.
$ kubectl get svc -n istio-system
Ensure the corresponding Kubernetes pods are deployed and all containers are up and running: istio-pilot-*, istio-ingressgateway-*, istio-egressgateway-*, istio-policy-*, istio-telemetry-*, istio-citadel-*, prometheus-*, istio-galley-*, and, optionally, istio-sidecar-injector-*.
$ kubectl get pods -n istio-system
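If some pods are still starting, you can wait on a specific control plane Deployment to finish rolling out, for example:
$ kubectl -n istio-system rollout status deployment/istio-pilot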
The Bookinfo app is a standard test application used to verify a successful deployment.
On a cluster with automatic sidecar injection enabled, label the default namespace with istio-injection=enabled.
$ kubectl label namespace default istio-injection=enabled
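To verify the label took effect, list the namespaces with the istio-injection column shown:
$ kubectl get namespace -L istio-injection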
Then deploy the services using kubectl:
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Confirm all services and pods are correctly defined and running:
$ kubectl get services
$ kubectl get pods
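With injection enabled, each Bookinfo pod should run two containers: the application container and the istio-proxy sidecar. One way to spot-check this for productpage is to print its container names, which should include istio-proxy alongside productpage:
$ kubectl get pods -l app=productpage -o jsonpath='{.items[*].spec.containers[*].name}'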
Now that the Bookinfo services are up and running, you need to make the application accessible from outside the Kubernetes cluster, e.g., from a browser. An Istio Gateway is used for this purpose. Since PKS uses NSX-T, a LoadBalancer is used instead of a NodePort.
Define the ingress gateway for the application:
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
Confirm the gateway has been created:
$ kubectl get gateway
NAME AGE
bookinfo-gateway 32s
Execute the following command to determine the NSX-T external load balancer IP and service ports:
$ kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.100.200.220 100.64.80.23,24.24.24.98 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30230/TCP,8060:30211/TCP,853:32055/TCP,15030:30556/TCP,15031:31751/TCP 1h
Set the ingress IP and ports:
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
Set GATEWAY_URL:
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
To confirm that the Bookinfo application is running, run the following curl command:
$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
200
You can also point your browser to http://$GATEWAY_URL/productpage to view the Bookinfo web page. If you refresh the page several times, you should see different versions of reviews shown in productpage, presented in a round robin style (red stars, black stars, no stars), since we haven’t yet used Istio to control the version routing.
To uninstall/delete Istio, use the following command:
$ kubectl delete -f $HOME/istio.yaml
Delete the CRDs with:
$ kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system
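As a final check, the CRD listing used earlier should now return nothing:
$ kubectl get crds | grep 'istio.io'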