Istio is an open platform to connect, manage, and secure microservices. This guide provides step-by-step instructions for running Istio 0.8.0 on Cisco Container Platform (CCP) 1.0.1. Use the official documentation to learn more about Istio.
The following prerequisites must be met before deploying Istio:
- CCP installed and a tenant cluster created according to the CCP Installation Guide
- The kubectl Kubernetes client. Follow the kubectl installation guide if needed.
- CCP tenant cluster credentials. Use the CCP Installation Guide to generate and download the cluster credentials.
Istio uses Helm to render a Kubernetes manifest, which kubectl then uses to deploy the Istio control-plane to your tenant cluster.
Download and set up the helm client binary:
export HELM_VERSION=2.9.1
export OS=darwin # Use "darwin" for Mac and "linux" for Linux-based systems
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v$HELM_VERSION-$OS-amd64.tar.gz
tar -xvf helm-v$HELM_VERSION-$OS-amd64.tar.gz
chmod +x $OS-amd64/helm
mv $OS-amd64/helm /usr/local/bin
Download and set up the Istio deployment templates and the istioctl client binary:
export ISTIO_VERSION=0.8.0
curl -L https://git.io/getLatestIstio | sh -
chmod +x istio-$ISTIO_VERSION/bin/istioctl
mv istio-$ISTIO_VERSION/bin/istioctl /usr/local/bin/
Use the helm template command to render the Kubernetes manifest. install/kubernetes/helm/istio specifies the root location of the Istio Helm charts, the --set flag modifies default values of the charts, --name specifies a release name, and > $HOME/istio.yaml redirects the rendered manifest to a file:
helm template istio-$ISTIO_VERSION/install/kubernetes/helm/istio \
--set ingressgateway.service.type=NodePort \
--name istio --namespace istio-system > $HOME/istio.yaml
Note: The ingressgateway.service.type key must be set to NodePort. This value is used by the Istio Ingress and is required because the default value, LoadBalancer, can only be used on clusters that support an external cloud load balancer.
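For reference, the rendered manifest contains a Service resembling the following sketch. The values shown are illustrative only, not a verbatim excerpt of the chart output; the nodePort is assigned by Kubernetes from the 30000-32767 range unless explicitly set:

```yaml
# Illustrative shape of a NodePort Service; values are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingress
  namespace: istio-system
spec:
  type: NodePort        # set via --set ingressgateway.service.type=NodePort
  ports:
  - name: http
    port: 80
    nodePort: 31380     # hypothetical; Kubernetes picks from 30000-32767 if unset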
Label the default namespace with istio-injection=enabled so that new pods in that namespace automatically receive an Envoy sidecar:
kubectl label namespace default istio-injection=enabled
Create the namespace used to run the Istio control-plane. This name must match the value passed to --namespace in the helm template command:
kubectl create ns istio-system
Deploy the Istio control-plane:
kubectl create -f $HOME/istio.yaml
The Istio control-plane should now be operational. Verify the status of the Istio control-plane pods. Note: It may take several minutes for the pods to achieve a Completed or Running status:
$ kubectl get po -n istio-system
NAME READY STATUS RESTARTS AGE
istio-citadel-7bdc7775c7-kmctd 1/1 Running 0 2h
istio-cleanup-old-ca-nm52m 0/1 Completed 0 2h
istio-egressgateway-795fc9b47-8glfm 1/1 Running 1 2h
istio-ingress-84659cf44c-q2dzc 1/1 Running 0 2h
istio-ingressgateway-7d89dbf85f-tfs4b 1/1 Running 1 2h
istio-mixer-post-install-pgcx2 0/1 Completed 0 2h
istio-pilot-66f4dd866c-vjv79 2/2 Running 0 2h
istio-policy-76c8896799-dqnn5 2/2 Running 0 2h
istio-sidecar-injector-645c89bc64-ksmbk 1/1 Running 0 2h
istio-statsd-prom-bridge-949999c4c-vb7cd 1/1 Running 0 2h
istio-telemetry-6554768879-sfpp4 2/2 Running 0 2h
prometheus-86cb6dd77c-m2t7z 1/1 Running 0 2h
You should see all control-plane components in a Completed or Running status.
Test Istio by deploying a sample application called bookinfo. Use the official bookinfo documentation to learn more about the application:
kubectl create -f istio-$ISTIO_VERSION/samples/bookinfo/kube/bookinfo.yaml
Verify the deployment status of the bookinfo application using the kubectl get po command. You should see 2/2 and a Running status for every pod:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
<SNIP>
details-v1-7b97668445-zphbb 2/2 Running 0 50s
productpage-v1-7bbdd59459-lggvt 2/2 Running 0 49s
ratings-v1-76dc7f6b9-c69xj 2/2 Running 0 50s
reviews-v1-64545d97b4-cltlh 2/2 Running 0 50s
reviews-v2-8cb9489c6-fmc8s 2/2 Running 0 50s
reviews-v3-6bc884b456-2j6jn 2/2 Running 0 50s
Create an Ingress resource to externally expose the productpage service:
kubectl create -f istio-$ISTIO_VERSION/samples/bookinfo/kube/bookinfo-gateway.yaml
The bookinfo application exposes the productpage service externally using a NodePort. This means the service can be accessed at $NODE_IP:$NODE_PORT/$INGRESS_PATH, where $NODE_IP is the IP address of any worker node in the tenant cluster, $NODE_PORT is the nodePort value of the istio-ingress service, and $INGRESS_PATH is the backend path of the bookinfo gateway Ingress resource.
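The URL composition can be sketched with illustrative values; on a live cluster the next step derives the real values with kubectl:

```shell
# Illustrative values only; a real cluster supplies these via kubectl.
NODE_IP=10.10.1.5         # hypothetical worker-node IP
NODE_PORT=31380           # hypothetical nodePort of the istio-ingress service
INGRESS_PATH=/productpage
URL="http://${NODE_IP}:${NODE_PORT}${INGRESS_PATH}"
echo "$URL"               # http://10.10.1.5:31380/productpage
```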
From a host with access to the cluster, set the environment variables used to construct the bookinfo product page URL:
export NODE_IP=$(kubectl get po -l istio=ingress -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export NODE_PORT=$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')
Use curl to test connectivity to the productpage Ingress:
curl -I http://$NODE_IP:$NODE_PORT/productpage
Verify that you receive a 200 response code:
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 4083
server: envoy
date: Tue, 05 Jun 2018 18:44:33 GMT
x-envoy-upstream-service-time: 6024
You have successfully deployed the bookinfo application.
Verify the Helm installation by using the helm version command. You should receive output similar to the following:
Client: &version.Version{SemVer:"v$HELM_VERSION", GitCommit:"<SNIP>", GitTreeState:"clean"}
<SNIP>
Verify the Istio client installation by using the istioctl version command. You should receive output similar to the following:
Version: $ISTIO_VERSION
GitRevision: 6f9f420f0c7119ff4fa6a1966a6f6d89b1b4db84
User: root@48d5ddfd72da
Hub: docker.io/istio
GolangVersion: go1.10.1
BuildStatus: Clean
If Istio control-plane pods do not achieve a Completed or Running status, inspect the container logs:
kubectl logs $POD_NAME -n istio-system -c $CONTAINER_NAME
Note: You can obtain $POD_NAME from the kubectl get po -n istio-system command and $CONTAINER_NAME from the kubectl get po/$POD_NAME -n istio-system -o yaml command.
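A jsonpath query is a more direct way to list the container names of a pod. The sketch below shows the kubectl form (pod name hypothetical) and then illustrates the same extraction against a saved manifest so it runs without a cluster; the container names are hypothetical:

```shell
# On a live cluster (hypothetical pod name):
#   kubectl get po istio-pilot-66f4dd866c-vjv79 -n istio-system \
#     -o jsonpath='{.spec.containers[*].name}'
# Illustrated here against a saved pod manifest:
cat > /tmp/pod.json <<'EOF'
{"spec": {"containers": [{"name": "discovery"}, {"name": "istio-proxy"}]}}
EOF
# Extract the container names (python used only to avoid a jq dependency):
python -c 'import json; print(" ".join(c["name"] for c in json.load(open("/tmp/pod.json"))["spec"]["containers"]))'
```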
Use the official Istio troubleshooting documentation for additional support.