Istio E2E Test Review

Istio E2E Test Introduction

prow/e2e-bookInfoTests.sh and prow/e2e-simpleTests.sh are automatically triggered in the "Before-Merge" stage of every PR. The full suite of E2E tests is run only in the "After-Merge" stage. Their results can be found in the Prow dashboard and the Kubernetes test grid.
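
These prow scripts can also be run by hand from the repository root, which can be useful for reproducing a CI failure locally. A rough sketch, assuming your checkout lives at the standard GOPATH location and that you have the cluster access and environment the Prow job normally provides:

cd ${GOPATH}/src/istio.io/istio
./prow/e2e-simpleTests.sh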

Running Istio E2E Tests

E2E tests can be run against an existing cluster by following these steps:

Step 1: Create a Kubernetes Cluster

E2E tests require a Kubernetes cluster. You can create one on Google Container Engine with the following commands. First, set environment variables that will be used throughout the steps:

CLUSTER_NAME=e2e
ZONE=$(gcloud config get-value compute/zone)
PROJECT_NAME=$(gcloud config get-value core/project)
MACHINE_TYPE=n1-standard-4
NUM_NODES=3
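
If compute/zone or core/project are not yet set in your gcloud configuration, the two get-value calls above return empty strings. Set them first; the values below are only examples:

gcloud config set compute/zone us-west1-a
gcloud config set core/project my-gcp-project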

Create the GKE cluster:

gcloud container clusters \
  create ${CLUSTER_NAME} \
  --zone ${ZONE} \
  --project ${PROJECT_NAME} \
  --machine-type ${MACHINE_TYPE} \
  --num-nodes ${NUM_NODES} \
  --enable-kubernetes-alpha \
  --no-enable-legacy-authorization
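
Cluster creation takes a few minutes. Optionally confirm the cluster is up before continuing:

gcloud container clusters list --zone ${ZONE} --project ${PROJECT_NAME}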

Step 2: Get Cluster Credentials

gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE} --project ${PROJECT_NAME}

Verify access to the cluster:

kubectl cluster-info
Kubernetes master is running at https://${K8S_API_IP}
GLBCDefaultBackend is running at https://${K8S_API_IP}/api/v1/namespaces/kube-system/services/default-http-backend/proxy
Heapster is running at https://${K8S_API_IP}/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://${K8S_API_IP}/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://${K8S_API_IP}/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
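
As an additional check, confirm that all nodes have registered and are Ready:

kubectl get nodes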

Step 3: Create a ClusterRoleBinding

Grant your user cluster-admin permissions so that you can create the RBAC rules Istio requires:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
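
Optionally confirm the binding exists:

kubectl get clusterrolebinding cluster-admin-binding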

Step 4: Export test script variables

Option 1: Build your own images. Create istio/.istiorc.mk containing environment variables to customize the E2E tests. For example:

pwd
/Users/me/code/go/src/istio.io/istio

cat .istiorc.mk
HUB=docker.io/danehans
TAG=e2e

Build the images with your local Docker daemon:

make docker
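
Optionally verify that the images were built with the hub/tag from your .istiorc.mk (docker.io/danehans and e2e in this example):

docker images | grep danehans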

Push the images to your Docker registry

If you use Minikube and its Docker environment, the images are already available to the cluster, so you can skip this step.

make push

The hub/tag set in your .istiorc.mk will be used by the test.
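
If you prefer not to create a .istiorc.mk, exporting the same variables directly should have the same effect (mirroring the style used in Options 2 and 3; the values shown match the example above):

export HUB=docker.io/danehans
export TAG=e2e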

Option 2: Use changes already committed to the istio/istio master branch. NOTE: The SHA used as the TAG must be one that is already committed to istio/istio; you can pick any such SHA.

export HUB="gcr.io/istio-testing"
export TAG="d0142e1afe41c18917018e2fa85ab37254f7e0ca"

Option 3: Testing local changes

If you want to test uncommitted changes to istio master:

  • Create a PR with your change.

  • This triggers istio-presubmit.sh, which at the end builds Docker images for mixer, pilot, and ca with your changes and uploads them to the container registry. Check the logs of istio-presubmit.sh; near the end there is a SHA that you need to copy and use as the TAG. Example from a log (the SHA to copy is the leading 40-character hash):

    I1207 04:42:40.881] 0077bb73e0b9d2841f8c299f15305193e42dae0d: digest: sha256:6f72528d475be56e8392bc3b833b94a815a1fbab8a70cd058b92982e61364021 size: 528

  • Then set the export variables again

export HUB="gcr.io/istio-testing"
export TAG="<sha copied from the logs of istio-presubmit.sh>"

E2E Test Details

istio/Makefile includes additional .mk files:

include tools/deb/istio.mk
include tests/istio.mk
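
tests/istio.mk is where the e2e_* make targets are defined. A quick way to list them from the repo root (just a grep, not an official interface):

grep -E '^e2e_[a-z_]*:' tests/istio.mk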

The default e2e tests use Minikube to run a k8s cluster. Set the environment variable that specifies the test environment:

export TEST_ENV=minikube

CCP Test Workflow

This workflow assumes CCP will use official Istio releases. The Istio download script allows you to download a specific release. For example, download 0.7.0 instead of the latest release:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=0.7.0 sh -

Change to the istio package directory:

cd istio-0.7.0

Add the istioctl client to your PATH and set the ISTIOCTL environment variable:

ISTIOCTL=$PWD/bin/istioctl
export PATH=$PWD/bin:$PATH
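
Optionally confirm that the expected istioctl is the one on your PATH:

which istioctl
istioctl version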

Clone the Istio repo:

git clone -b release-0.7 https://github.com/istio/istio.git && cd istio

Set the environment variables for the e2e suite:

export HUB=docker.io/istio
export TAG=0.7.0

Run the e2e test suite, passing in the appropriate E2E_ARGS flags:

make e2e_all E2E_ARGS="--auth_enable --istioctl $ISTIOCTL"

Note: Cisco WSA (Web Proxy) prevents tests from passing. Do not run tests while connected to the Cisco network.
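
When debugging a failure, it can help to run a single suite and keep the test namespace around for inspection afterward. A sketch, assuming the e2e framework's --skip_cleanup flag is available in your release (check the framework flags if unsure):

make e2e_simple E2E_ARGS="--auth_enable --istioctl $ISTIOCTL --skip_cleanup"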

$ make e2e_simple E2E_ARGS="--auth_enable --istioctl $ISTIOCTL"
bin/gobuild.sh /Users/daneyonhansen/code/go/out/darwin_amd64/release/istioctl istio.io/istio/pkg/version ./istioctl/cmd/istioctl
real 0m0.543s
user 0m0.658s
sys 0m0.592s
./install/updateVersion.sh -a istio,0.7.0 >/dev/null 2>&1
go test -v -timeout 20m ./tests/e2e/tests/simple -args --auth_enable --istioctl /Users/daneyonhansen/Desktop/istio/istio-0.7.0/bin/istioctl --istioctl /Users/daneyonhansen/code/go/out/darwin_amd64/release/istioctl --mixer_tag 0.7.0 --pilot_tag 0.7.0 --proxy_tag 0.7.0 --ca_tag 0.7.0 --mixer_hub istio --pilot_hub istio --proxy_hub istio --ca_hub istio
2018-04-19T18:07:28.941738Z info Logging initialized
2018-04-19T18:07:28.942213Z info Using temp dir /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995
2018-04-19T18:07:28.942286Z info Using release dir: /Users/daneyonhansen/code/go/src/istio.io/istio
2018-04-19T18:07:28.942315Z info Fortio hub tag -> image istio/fortio:latest_release
2018-04-19T18:07:28.942336Z info Starting Initialization
2018-04-19T18:07:28.942344Z info Setting up kubeInfo
2018-04-19T18:07:28.943982Z info Running command kubectl apply -n simple-auth-test-82ddaa0faa824395a926 -f /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/yaml/istio-one-namespace-auth.yaml
2018-04-19T18:07:38.377318Z info Command output:
namespace "simple-auth-test-82ddaa0faa824395a926" created
clusterrole "istio-pilot-simple-auth-test-82ddaa0faa824395a926" created
clusterrole "istio-sidecar-injector-simple-auth-test-82ddaa0faa824395a926" created
clusterrole "istio-mixer-simple-auth-test-82ddaa0faa824395a926" created
clusterrole "istio-mixer-validator-simple-auth-test-82ddaa0faa824395a926" created
clusterrole "istio-ca-simple-auth-test-82ddaa0faa824395a926" created
clusterrole "istio-sidecar-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-pilot-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-sidecar-injector-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-ca-role-binding-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-ingress-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-sidecar-role-binding-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-mixer-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" created
clusterrolebinding "istio-mixer-validator-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" created
configmap "istio-mixer" created
service "istio-mixer" created
serviceaccount "istio-mixer-service-account" created
deployment "istio-mixer" created
customresourcedefinition "rules.config.istio.io" configured
customresourcedefinition "attributemanifests.config.istio.io" configured
customresourcedefinition "circonuses.config.istio.io" configured
customresourcedefinition "deniers.config.istio.io" configured
customresourcedefinition "fluentds.config.istio.io" configured
customresourcedefinition "kubernetesenvs.config.istio.io" configured
customresourcedefinition "listcheckers.config.istio.io" configured
customresourcedefinition "memquotas.config.istio.io" configured
customresourcedefinition "noops.config.istio.io" configured
customresourcedefinition "opas.config.istio.io" configured
customresourcedefinition "prometheuses.config.istio.io" configured
customresourcedefinition "rbacs.config.istio.io" configured
customresourcedefinition "servicecontrols.config.istio.io" configured
customresourcedefinition "solarwindses.config.istio.io" configured
customresourcedefinition "stackdrivers.config.istio.io" configured
customresourcedefinition "statsds.config.istio.io" configured
customresourcedefinition "stdios.config.istio.io" configured
customresourcedefinition "apikeys.config.istio.io" configured
customresourcedefinition "authorizations.config.istio.io" configured
customresourcedefinition "checknothings.config.istio.io" configured
customresourcedefinition "kuberneteses.config.istio.io" configured
customresourcedefinition "listentries.config.istio.io" configured
customresourcedefinition "logentries.config.istio.io" configured
customresourcedefinition "metrics.config.istio.io" configured
customresourcedefinition "quotas.config.istio.io" configured
customresourcedefinition "reportnothings.config.istio.io" configured
customresourcedefinition "servicecontrolreports.config.istio.io" configured
customresourcedefinition "tracespans.config.istio.io" configured
customresourcedefinition "serviceroles.config.istio.io" configured
customresourcedefinition "servicerolebindings.config.istio.io" configured
attributemanifest "istioproxy" created
attributemanifest "kubernetes" created
stdio "handler" created
logentry "accesslog" created
rule "stdio" created
metric "requestcount" created
metric "requestduration" created
metric "requestsize" created
metric "responsesize" created
metric "tcpbytesent" created
metric "tcpbytereceived" created
prometheus "handler" created
rule "promhttp" created
rule "promtcp" created
kubernetesenv "handler" created
rule "kubeattrgenrulerule" created
rule "tcpkubeattrgenrulerule" created
kubernetes "attributes" created
configmap "istio" created
customresourcedefinition "destinationpolicies.config.istio.io" configured
customresourcedefinition "egressrules.config.istio.io" configured
customresourcedefinition "routerules.config.istio.io" configured
customresourcedefinition "virtualservices.networking.istio.io" configured
customresourcedefinition "destinationrules.networking.istio.io" configured
customresourcedefinition "externalservices.networking.istio.io" configured
service "istio-pilot" created
serviceaccount "istio-pilot-service-account" created
deployment "istio-pilot" created
service "istio-ingress" created
serviceaccount "istio-ingress-service-account" created
deployment "istio-ingress" created
serviceaccount "istio-ca-service-account" created
deployment "istio-ca" created
2018-04-19T18:07:38.377415Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get deployment -o name
2018-04-19T18:07:38.716637Z info Command output:
deployments/istio-ca
deployments/istio-ingress
deployments/istio-mixer
deployments/istio-pilot
2018-04-19T18:07:38.716807Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-pilot
2018-04-19T18:07:38.716908Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-mixer
2018-04-19T18:07:38.716919Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-ca
2018-04-19T18:07:38.716940Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-ingress
2018-04-19T18:07:39.546379Z info Running command kubectl apply -n simple-auth-test-82ddaa0faa824395a926 -f /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/yaml/prometheus.yaml
2018-04-19T18:07:40.857176Z info Command output:
configmap "prometheus" created
service "prometheus" created
deployment "prometheus" created
serviceaccount "prometheus" created
clusterrole "prometheus" configured
clusterrolebinding "prometheus" configured
2018-04-19T18:07:40.857606Z info Running command kubectl apply -n simple-auth-test-82ddaa0faa824395a926 -f /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/yaml/zipkin.yaml
2018-04-19T18:07:41.518061Z info Command output:
deployment "zipkin" created
service "zipkin" created
2018-04-19T18:07:41.518125Z info Setting up istioctl
2018-04-19T18:07:41.518152Z info Setting up apps
2018-04-19T18:07:41.518209Z info Setup &{/Users/daneyonhansen/code/go/src/istio.io/istio/tests/e2e/tests/simple/servicesToBeInjected.yaml true 0xc420568100}
2018-04-19T18:07:41.520978Z info Created /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/servicesToBeInjected.yaml303209429.yaml from template /Users/daneyonhansen/code/go/src/istio.io/istio/tests/e2e/tests/simple/servicesToBeInjected.yaml
2018-04-19T18:07:41.521332Z info Running command /Users/daneyonhansen/code/go/out/darwin_amd64/release/istioctl kube-inject -f /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/servicesToBeInjected.yaml303209429.yaml -o /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/KubeInject376724784.yaml --hub istio --tag 0.7.0 -n simple-auth-test-82ddaa0faa824395a926 -i simple-auth-test-82ddaa0faa824395a926 --meshConfigMapName=istio
2018-04-19T18:07:41.781962Z info Running command kubectl apply -n simple-auth-test-82ddaa0faa824395a926 -f /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/KubeInject376724784.yaml
2018-04-19T18:07:42.763319Z info Command output:
service "echosrv" created
ingress "istio-ingress" created
routerule "fortio-redir" created
deployment "echosrv-deployment" created
2018-04-19T18:07:42.763414Z info Setup &{/Users/daneyonhansen/code/go/src/istio.io/istio/tests/e2e/tests/simple/servicesNotInjected.yaml false 0xc420568110}
2018-04-19T18:07:42.764384Z info Created /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/servicesNotInjected.yaml523343055.yaml from template /Users/daneyonhansen/code/go/src/istio.io/istio/tests/e2e/tests/simple/servicesNotInjected.yaml
2018-04-19T18:07:42.764422Z info Running command kubectl apply -n simple-auth-test-82ddaa0faa824395a926 -f /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/servicesNotInjected.yaml523343055.yaml
2018-04-19T18:07:43.374277Z info Command output:
service "fortio-noistio" created
deployment "raw-cli-deployement" created
2018-04-19T18:07:43.374357Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get deployment -o name
2018-04-19T18:07:43.723532Z info Command output:
deployments/echosrv-deployment
deployments/istio-ca
deployments/istio-ingress
deployments/istio-mixer
deployments/istio-pilot
deployments/prometheus
deployments/raw-cli-deployement
deployments/zipkin
2018-04-19T18:07:43.723672Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/echosrv-deployment
2018-04-19T18:07:43.723696Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-ingress
2018-04-19T18:07:43.723731Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/raw-cli-deployement
2018-04-19T18:07:43.723753Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-pilot
2018-04-19T18:07:43.723836Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/prometheus
2018-04-19T18:07:43.723857Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-mixer
2018-04-19T18:07:43.723705Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/zipkin
2018-04-19T18:07:43.723686Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 rollout status deployments/istio-ca
2018-04-19T18:07:47.776685Z info Initialization complete
2018-04-19T18:07:47.776783Z info Running test
=== RUN TestSimpleIngress
2018-04-19T18:07:47.777057Z info Waiting for istio-ingress to get external IP
2018-04-19T18:08:32.369392Z info Istio ingress: 35.227.153.70
2018-04-19T18:08:32.369471Z info Sanity checking http://35.227.153.70
2018-04-19T18:09:07.131254Z info Response 404 "404 Not Found" received from http://35.227.153.70
2018-04-19T18:09:07.131315Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods -o jsonpath='{.items[*].metadata.name}'
2018-04-19T18:09:07.523792Z info Command output:
echosrv-deployment-55dc578cdc-5j96b echosrv-deployment-55dc578cdc-xqdb7 istio-ca-59d49f6d8c-97xsj istio-ingress-569574579c-4dg8t istio-mixer-f4d47b46b-4q2jq istio-pilot-5dc7bd5d6-t8mct prometheus-7c6d778564-s4j7f raw-cli-deployement-644cfd5b77-2ds9n zipkin-55ccd7c684-sh7hb
2018-04-19T18:09:07.523862Z info Existing pods: [echosrv-deployment-55dc578cdc-5j96b echosrv-deployment-55dc578cdc-xqdb7 istio-ca-59d49f6d8c-97xsj istio-ingress-569574579c-4dg8t istio-mixer-f4d47b46b-4q2jq istio-pilot-5dc7bd5d6-t8mct prometheus-7c6d778564-s4j7f raw-cli-deployement-644cfd5b77-2ds9n zipkin-55ccd7c684-sh7hb]
2018-04-19T18:09:07.523875Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods echosrv-deployment-55dc578cdc-5j96b --no-headers
2018-04-19T18:09:07.833522Z info Command output:
echosrv-deployment-55dc578cdc-5j96b 2/2 Running 0 1m
2018-04-19T18:09:07.833596Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods echosrv-deployment-55dc578cdc-xqdb7 --no-headers
2018-04-19T18:09:08.129715Z info Command output:
echosrv-deployment-55dc578cdc-xqdb7 2/2 Running 0 1m
2018-04-19T18:09:08.129795Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods istio-ca-59d49f6d8c-97xsj --no-headers
2018-04-19T18:09:08.440601Z info Command output:
istio-ca-59d49f6d8c-97xsj 1/1 Running 0 1m
2018-04-19T18:09:08.440683Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods istio-ingress-569574579c-4dg8t --no-headers
2018-04-19T18:09:08.736658Z info Command output:
istio-ingress-569574579c-4dg8t 1/1 Running 0 1m
2018-04-19T18:09:08.736736Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods istio-mixer-f4d47b46b-4q2jq --no-headers
2018-04-19T18:09:09.043296Z info Command output:
istio-mixer-f4d47b46b-4q2jq 3/3 Running 0 1m
2018-04-19T18:09:09.043370Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods istio-pilot-5dc7bd5d6-t8mct --no-headers
2018-04-19T18:09:09.378847Z info Command output:
istio-pilot-5dc7bd5d6-t8mct 2/2 Running 0 1m
2018-04-19T18:09:09.378924Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods prometheus-7c6d778564-s4j7f --no-headers
2018-04-19T18:09:09.685148Z info Command output:
prometheus-7c6d778564-s4j7f 1/1 Running 0 1m
2018-04-19T18:09:09.685257Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods raw-cli-deployement-644cfd5b77-2ds9n --no-headers
2018-04-19T18:09:09.977903Z info Command output:
raw-cli-deployement-644cfd5b77-2ds9n 1/1 Running 0 1m
2018-04-19T18:09:09.977977Z info Running command kubectl -n simple-auth-test-82ddaa0faa824395a926 get pods zipkin-55ccd7c684-sh7hb --no-headers
2018-04-19T18:09:10.282019Z info Command output:
zipkin-55ccd7c684-sh7hb 1/1 Running 0 1m
2018-04-19T18:09:10.282091Z info Get all pods running!
2018-04-19T18:09:10.282117Z info Fetching 'http://35.227.153.70/fortio/debug'
2018-04-19T18:09:10.395391Z info Iter 0 : ingress->Svc is up! Found 788: HTTP/1.1 200 OK\r\ncontent-type: text/plain; charset=UTF-8\r\ndate: Thu, 19 Apr 2018 18:09:10 GMT\r\ncontent-length: 620\r\nx-envoy-upstream-service-time: 23\r\nserver: envoy\r\n\r\nΦορτίο version 0.9.0 2018-04-06 06:57 e50943e8e525197f36f9b4f81464615c063a0a65 go1...0e6fd89319fec7\nX-Envoy-Internal: true\nX-Envoy-Original-Path: /fortio/debug\nX-Forwarded-For: 10.32.1.1\nX-Forwarded-Proto: http\nX-Ot-Span-Context: 850e6fd89319fec7;850e6fd89319fec7;0000000000000000\nX-Request-Id: 90a94cb5-e2ab-9804-8784-53c1ec364bd4\n\nbody:\n\n\n
--- PASS: TestSimpleIngress (82.62s)
=== RUN TestSvc2Svc
2018-04-19T18:09:10.395543Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 -l app=echosrv -o jsonpath={.items[*].metadata.name}
2018-04-19T18:09:10.748363Z info Command output:
echosrv-deployment-55dc578cdc-5j96b echosrv-deployment-55dc578cdc-xqdb7
2018-04-19T18:09:10.748515Z info Configuration readiness pre-check from [echosrv-deployment-55dc578cdc-5j96b echosrv-deployment-55dc578cdc-xqdb7] to http://echosrv:8080/echo
2018-04-19T18:09:10.748553Z info Running command kubectl exec -n simple-auth-test-82ddaa0faa824395a926 echosrv-deployment-55dc578cdc-5j96b -c echosrv -- /usr/local/bin/fortio curl http://echosrv:8080/echo
2018-04-19T18:09:11.418365Z info Command output:
HTTP/1.1 200 OK
content-length: 0
date: Thu, 19 Apr 2018 18:09:11 GMT
x-envoy-upstream-service-time: 10
server: envoy
2018-04-19T18:09:11.418456Z info Running command kubectl exec -n simple-auth-test-82ddaa0faa824395a926 echosrv-deployment-55dc578cdc-xqdb7 -c echosrv -- /usr/local/bin/fortio curl http://echosrv:8080/echo
2018-04-19T18:09:12.108256Z info Command output:
HTTP/1.1 200 OK
content-length: 0
date: Thu, 19 Apr 2018 18:09:11 GMT
x-envoy-upstream-service-time: 8
server: envoy
2018-04-19T18:09:12.108319Z info All 2 pods ready
2018-04-19T18:09:12.108335Z info From pod "echosrv-deployment-55dc578cdc-5j96b"
2018-04-19T18:09:12.108370Z info Running command kubectl exec -n simple-auth-test-82ddaa0faa824395a926 echosrv-deployment-55dc578cdc-5j96b -c echosrv -- /usr/local/bin/fortio load -qps 0 -t 10s http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo
2018-04-19T18:09:22.911586Z info Command output:
Fortio 0.9.0 running at 0 queries per second, 4->4 procs, for 10s: http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo
18:09:12 I httprunner.go:82> Starting http test for http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo with 4 threads at -1.0 qps
Starting at max qps with 4 thread(s) [gomax 4] for 10s
18:09:22 I periodic.go:533> T002 ended after 10.000191149s : 4256 calls. qps=425.59186485406246
18:09:22 I periodic.go:533> T003 ended after 10.000977384s : 5031 calls. qps=503.0508326165014
18:09:22 I periodic.go:533> T001 ended after 10.001049246s : 4309 calls. qps=430.8547927332144
18:09:22 I periodic.go:533> T000 ended after 10.001593537s : 4340 calls. qps=433.9308515132672
Ended after 10.001822536s : 17936 calls. qps=1793.3
Aggregated Function Time : count 17936 avg 0.0022299357 +/- 0.002325 min 0.000652274 max 0.104993145 sum 39.9961265
# range, mid point, percentile, count
>= 0.000652274 <= 0.001 , 0.000826137 , 5.11, 916
> 0.001 <= 0.002 , 0.0015 , 60.16, 9874
> 0.002 <= 0.003 , 0.0025 , 85.99, 4634
> 0.003 <= 0.004 , 0.0035 , 92.99, 1255
> 0.004 <= 0.005 , 0.0045 , 95.41, 434
> 0.005 <= 0.006 , 0.0055 , 97.08, 300
> 0.006 <= 0.007 , 0.0065 , 98.26, 211
> 0.007 <= 0.008 , 0.0075 , 98.74, 86
> 0.008 <= 0.009 , 0.0085 , 99.04, 54
> 0.009 <= 0.01 , 0.0095 , 99.28, 42
> 0.01 <= 0.011 , 0.0105 , 99.40, 23
> 0.011 <= 0.012 , 0.0115 , 99.46, 11
> 0.012 <= 0.014 , 0.013 , 99.59, 22
> 0.014 <= 0.016 , 0.015 , 99.70, 20
> 0.016 <= 0.018 , 0.017 , 99.75, 10
> 0.018 <= 0.02 , 0.019 , 99.80, 9
> 0.02 <= 0.025 , 0.0225 , 99.89, 16
> 0.025 <= 0.03 , 0.0275 , 99.93, 7
> 0.03 <= 0.035 , 0.0325 , 99.97, 7
> 0.045 <= 0.05 , 0.0475 , 99.98, 1
> 0.09 <= 0.1 , 0.095 , 99.98, 1
> 0.1 <= 0.104993 , 0.102497 , 100.00, 3
# target 50% 0.00181547
# target 75% 0.00257445
# target 90% 0.00357243
# target 99% 0.0088637
# target 99.9% 0.02576
Sockets used: 4 (for perfect keepalive, would be 4)
Code 200 : 17936 (100.0 %)
Response Header Sizes : count 17936 avg 124.00385 +/- 0.0619 min 124 max 125 sum 2224133
Response Body/Total Sizes : count 17936 avg 124.00385 +/- 0.0619 min 124 max 125 sum 2224133
All done 17936 calls (plus 4 warmup) 2.230 ms avg, 1793.3 qps
2018-04-19T18:09:22.911679Z info From pod "echosrv-deployment-55dc578cdc-xqdb7"
2018-04-19T18:09:22.911734Z info Running command kubectl exec -n simple-auth-test-82ddaa0faa824395a926 echosrv-deployment-55dc578cdc-xqdb7 -c echosrv -- /usr/local/bin/fortio load -qps 0 -t 10s http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo
2018-04-19T18:09:33.617900Z info Command output:
Fortio 0.9.0 running at 0 queries per second, 4->4 procs, for 10s: http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo
18:09:23 I httprunner.go:82> Starting http test for http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo with 4 threads at -1.0 qps
Starting at max qps with 4 thread(s) [gomax 4] for 10s
18:09:33 I periodic.go:533> T001 ended after 10.000625408s : 5770 calls. qps=576.9639162151088
18:09:33 I periodic.go:533> T003 ended after 10.000708808s : 5573 calls. qps=557.2605009298857
18:09:33 I periodic.go:533> T000 ended after 10.001177046s : 5799 calls. qps=579.8317511356653
18:09:33 I periodic.go:533> T002 ended after 10.00151579s : 5535 calls. qps=553.4161137388957
Ended after 10.001562497s : 22677 calls. qps=2267.3
Aggregated Function Time : count 22677 avg 0.0017636064 +/- 0.002172 min 0.000605877 max 0.14538776 sum 39.9933033
# range, mid point, percentile, count
>= 0.000605877 <= 0.001 , 0.000802938 , 10.80, 2449
> 0.001 <= 0.002 , 0.0015 , 80.07, 15709
> 0.002 <= 0.003 , 0.0025 , 93.77, 3107
> 0.003 <= 0.004 , 0.0035 , 96.94, 717
> 0.004 <= 0.005 , 0.0045 , 98.11, 266
> 0.005 <= 0.006 , 0.0055 , 98.88, 174
> 0.006 <= 0.007 , 0.0065 , 99.22, 77
> 0.007 <= 0.008 , 0.0075 , 99.35, 31
> 0.008 <= 0.009 , 0.0085 , 99.48, 29
> 0.009 <= 0.01 , 0.0095 , 99.55, 16
> 0.01 <= 0.011 , 0.0105 , 99.59, 10
> 0.011 <= 0.012 , 0.0115 , 99.66, 14
> 0.012 <= 0.014 , 0.013 , 99.71, 13
> 0.014 <= 0.016 , 0.015 , 99.75, 8
> 0.016 <= 0.018 , 0.017 , 99.78, 7
> 0.018 <= 0.02 , 0.019 , 99.79, 3
> 0.02 <= 0.025 , 0.0225 , 99.88, 19
> 0.025 <= 0.03 , 0.0275 , 99.92, 10
> 0.03 <= 0.035 , 0.0325 , 99.94, 5
> 0.035 <= 0.04 , 0.0375 , 99.98, 8
> 0.045 <= 0.05 , 0.0475 , 99.98, 1
> 0.06 <= 0.07 , 0.065 , 99.99, 1
> 0.07 <= 0.08 , 0.075 , 99.99, 1
> 0.14 <= 0.145388 , 0.142694 , 100.00, 2
# target 50% 0.00156589
# target 75% 0.00192678
# target 90% 0.00272459
# target 99% 0.00636662
# target 99.9% 0.0276615
Sockets used: 4 (for perfect keepalive, would be 4)
Code 200 : 22677 (100.0 %)
Response Header Sizes : count 22677 avg 124.00397 +/- 0.06426 min 124 max 126 sum 2812038
Response Body/Total Sizes : count 22677 avg 124.00397 +/- 0.06426 min 124 max 126 sum 2812038
All done 22677 calls (plus 4 warmup) 1.764 ms avg, 2267.3 qps
--- PASS: TestSvc2Svc (23.22s)
=== RUN TestAuth
2018-04-19T18:09:33.618173Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 -l app=fortio-noistio -o jsonpath={.items[*].metadata.name}
2018-04-19T18:09:33.910491Z info Command output:
raw-cli-deployement-644cfd5b77-2ds9n
2018-04-19T18:09:33.910570Z info From client, non istio injected pod "raw-cli-deployement-644cfd5b77-2ds9n"
2018-04-19T18:09:33.910599Z info Running command kubectl exec -n simple-auth-test-82ddaa0faa824395a926 raw-cli-deployement-644cfd5b77-2ds9n -- /usr/local/bin/fortio load -qps 5 -t 1s http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo
2018-04-19T18:09:34.560702Z info Command output:
Fortio 0.9.0 running at 5 queries per second, 4->4 procs, for 1s: http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo
18:09:34 I httprunner.go:82> Starting http test for http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo with 4 threads at 5.0 qps
18:09:34 E http_client.go:566> Read error &{{0xc4202cc300}} {10.35.253.167 8080 } 0 : read tcp 10.32.1.18:52060->10.35.253.167:8080: read: connection reset by peer
Aborting because error -1 for http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo: ""
2018-04-19T18:09:34.560833Z info Command error: exit status 1
2018-04-19T18:09:34.560930Z info Got expected error with auth on and non istio->istio connection: command failed: "Fortio 0.9.0 running at 5 queries per second, 4->4 procs, for 1s: http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo\n18:09:34 I httprunner.go:82> Starting http test for http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo with 4 threads at 5.0 qps\n18:09:34 E http_client.go:566> Read error &{{0xc4202cc300}} {10.35.253.167 8080 } 0 : read tcp 10.32.1.18:52060->10.35.253.167:8080: read: connection reset by peer\nAborting because error -1 for http://echosrv.simple-auth-test-82ddaa0faa824395a926:8080/echo: \"\"\n" exit status 1
--- PASS: TestAuth (0.94s)
PASS
2018-04-19T18:09:34.561086Z info Saving logs
2018-04-19T18:09:34.561118Z info Creating status file
2018-04-19T18:09:34.561769Z info Created Status file /var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/istio.e2e.861285995/simple_auth_test.json
2018-04-19T18:09:34.561792Z info Running command kubectl get ingress --all-namespaces
2018-04-19T18:09:34.855807Z info Command output:
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
simple-auth-test-82ddaa0faa824395a926 istio-ingress * 35.227.153.70 80 1m
2018-04-19T18:09:34.855877Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926
2018-04-19T18:09:35.232015Z info Command output:
NAME READY STATUS RESTARTS AGE
echosrv-deployment-55dc578cdc-5j96b 2/2 Running 0 1m
echosrv-deployment-55dc578cdc-xqdb7 2/2 Running 0 1m
istio-ca-59d49f6d8c-97xsj 1/1 Running 0 1m
istio-ingress-569574579c-4dg8t 1/1 Running 0 1m
istio-mixer-f4d47b46b-4q2jq 3/3 Running 0 2m
istio-pilot-5dc7bd5d6-t8mct 2/2 Running 0 1m
prometheus-7c6d778564-s4j7f 1/1 Running 0 1m
raw-cli-deployement-644cfd5b77-2ds9n 1/1 Running 0 1m
zipkin-55ccd7c684-sh7hb 1/1 Running 0 1m
2018-04-19T18:09:35.232158Z info Fetching logs on echosrv-deployment-55dc578cdc-5j96b
2018-04-19T18:09:35.232185Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 echosrv-deployment-55dc578cdc-5j96b -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:35.593895Z info Command output:
echosrv istio-proxy
2018-04-19T18:09:35.594235Z info Running command kubectl logs echosrv-deployment-55dc578cdc-5j96b -n simple-auth-test-82ddaa0faa824395a926 -c echosrv
2018-04-19T18:09:35.944161Z info Running command kubectl logs echosrv-deployment-55dc578cdc-5j96b -n simple-auth-test-82ddaa0faa824395a926 -c istio-proxy
2018-04-19T18:09:36.565948Z info Fetching logs on echosrv-deployment-55dc578cdc-xqdb7
2018-04-19T18:09:36.566036Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 echosrv-deployment-55dc578cdc-xqdb7 -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:36.853205Z info Command output:
echosrv istio-proxy
2018-04-19T18:09:36.853523Z info Running command kubectl logs echosrv-deployment-55dc578cdc-xqdb7 -n simple-auth-test-82ddaa0faa824395a926 -c echosrv
2018-04-19T18:09:37.176741Z info Running command kubectl logs echosrv-deployment-55dc578cdc-xqdb7 -n simple-auth-test-82ddaa0faa824395a926 -c istio-proxy
2018-04-19T18:09:37.929909Z info Fetching logs on istio-ca-59d49f6d8c-97xsj
2018-04-19T18:09:37.929996Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 istio-ca-59d49f6d8c-97xsj -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:38.221014Z info Command output:
istio-ca
2018-04-19T18:09:38.221285Z info Running command kubectl logs istio-ca-59d49f6d8c-97xsj -n simple-auth-test-82ddaa0faa824395a926 -c istio-ca
2018-04-19T18:09:38.578677Z info Fetching logs on istio-ingress-569574579c-4dg8t
2018-04-19T18:09:38.578821Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 istio-ingress-569574579c-4dg8t -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:38.887108Z info Command output:
istio-ingress
2018-04-19T18:09:38.887836Z info Running command kubectl logs istio-ingress-569574579c-4dg8t -n simple-auth-test-82ddaa0faa824395a926 -c istio-ingress
2018-04-19T18:09:39.293681Z info Fetching logs on istio-mixer-f4d47b46b-4q2jq
2018-04-19T18:09:39.293750Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 istio-mixer-f4d47b46b-4q2jq -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:39.591690Z info Command output:
statsd-to-prometheus mixer istio-proxy
2018-04-19T18:09:39.592051Z info Running command kubectl logs istio-mixer-f4d47b46b-4q2jq -n simple-auth-test-82ddaa0faa824395a926 -c statsd-to-prometheus
2018-04-19T18:09:39.945293Z info Running command kubectl logs istio-mixer-f4d47b46b-4q2jq -n simple-auth-test-82ddaa0faa824395a926 -c mixer
2018-04-19T18:09:40.551473Z info Running command kubectl logs istio-mixer-f4d47b46b-4q2jq -n simple-auth-test-82ddaa0faa824395a926 -c istio-proxy
2018-04-19T18:09:40.894994Z info Fetching logs on istio-pilot-5dc7bd5d6-t8mct
2018-04-19T18:09:40.895102Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 istio-pilot-5dc7bd5d6-t8mct -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:41.194717Z info Command output:
discovery istio-proxy
2018-04-19T18:09:41.195019Z info Running command kubectl logs istio-pilot-5dc7bd5d6-t8mct -n simple-auth-test-82ddaa0faa824395a926 -c discovery
2018-04-19T18:09:41.640155Z info Running command kubectl logs istio-pilot-5dc7bd5d6-t8mct -n simple-auth-test-82ddaa0faa824395a926 -c istio-proxy
2018-04-19T18:09:41.998437Z info Fetching logs on prometheus-7c6d778564-s4j7f
2018-04-19T18:09:41.998497Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 prometheus-7c6d778564-s4j7f -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:42.289454Z info Command output:
prometheus
2018-04-19T18:09:42.289708Z info Running command kubectl logs prometheus-7c6d778564-s4j7f -n simple-auth-test-82ddaa0faa824395a926 -c prometheus
2018-04-19T18:09:42.639151Z info Fetching logs on raw-cli-deployement-644cfd5b77-2ds9n
2018-04-19T18:09:42.639218Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 raw-cli-deployement-644cfd5b77-2ds9n -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:42.945355Z info Command output:
fortio-noistio
2018-04-19T18:09:42.945638Z info Running command kubectl logs raw-cli-deployement-644cfd5b77-2ds9n -n simple-auth-test-82ddaa0faa824395a926 -c fortio-noistio
2018-04-19T18:09:43.270569Z info Fetching logs on zipkin-55ccd7c684-sh7hb
2018-04-19T18:09:43.270634Z info Running command kubectl get pods -n simple-auth-test-82ddaa0faa824395a926 zipkin-55ccd7c684-sh7hb -o jsonpath={.spec.containers[*].name}
2018-04-19T18:09:43.564395Z info Command output:
zipkin
2018-04-19T18:09:43.564760Z info Running command kubectl logs zipkin-55ccd7c684-sh7hb -n simple-auth-test-82ddaa0faa824395a926 -c zipkin
2018-04-19T18:09:43.951833Z info Fetching deployment info on pod
2018-04-19T18:09:43.951900Z info Running command kubectl get pod -n simple-auth-test-82ddaa0faa824395a926 -o yaml
2018-04-19T18:09:44.345741Z info Fetching deployment info on service
2018-04-19T18:09:44.345810Z info Running command kubectl get service -n simple-auth-test-82ddaa0faa824395a926 -o yaml
2018-04-19T18:09:44.635411Z info Fetching deployment info on ingress
2018-04-19T18:09:44.635469Z info Running command kubectl get ingress -n simple-auth-test-82ddaa0faa824395a926 -o yaml
2018-04-19T18:09:44.921394Z info Starting Cleanup
2018-04-19T18:09:44.921444Z info Cleaning up istioctl
2018-04-19T18:09:44.921456Z info Cleaning up kubeInfo
2018-04-19T18:09:44.921481Z info Running command kubectl delete namespace simple-auth-test-82ddaa0faa824395a926
2018-04-19T18:09:45.232815Z info Command output:
namespace "simple-auth-test-82ddaa0faa824395a926" deleted
2018-04-19T18:09:45.232916Z info Running command kubectl get clusterrolebinding -o jsonpath={.items[*].metadata.name}|xargs -n 1|fgrep simple-auth-test-82ddaa0faa824395a926|xargs kubectl delete clusterrolebinding
2018-04-19T18:09:46.337026Z info Command output:
clusterrolebinding "istio-ca-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrolebinding "istio-ingress-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrolebinding "istio-mixer-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrolebinding "istio-mixer-validator-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrolebinding "istio-pilot-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrolebinding "istio-sidecar-injector-admin-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrolebinding "istio-sidecar-role-binding-simple-auth-test-82ddaa0faa824395a926" deleted
2018-04-19T18:09:46.337113Z info Running command kubectl get clusterrole -o jsonpath={.items[*].metadata.name}|xargs -n 1|fgrep simple-auth-test-82ddaa0faa824395a926|xargs kubectl delete clusterrole
2018-04-19T18:09:47.402496Z info Command output:
clusterrole "istio-ca-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrole "istio-mixer-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrole "istio-mixer-validator-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrole "istio-pilot-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrole "istio-sidecar-injector-simple-auth-test-82ddaa0faa824395a926" deleted
clusterrole "istio-sidecar-simple-auth-test-82ddaa0faa824395a926" deleted
2018-04-19T18:09:47.402572Z info Deleting namespace simple-auth-test-82ddaa0faa824395a926
2018-04-19T18:10:32.290599Z info Namespace simple-auth-test-82ddaa0faa824395a926 deletion status: true
2018-04-19T18:10:32.290664Z info Cleanup complete
ok istio.io/istio/tests/e2e/tests/simple 183.424s
A second run of the same target, this time on a Linux host, fails during Istio deployment (note the ImagePullBackOff and CrashLoopBackOff pods in the output below):

# make e2e_simple E2E_ARGS="--auth_enable --istioctl /root/istio-0.7.0/bin/istioctl"
bin/gobuild.sh /root/go/out/linux_amd64/release/istioctl istio.io/istio/pkg/version ./istioctl/cmd/istioctl
real 0m45.613s
user 1m13.408s
sys 0m6.788s
./install/updateVersion.sh -a docker.io/istio,0.7.0
/tmp/templates ~/go/src/istio.io/istio
~/go/src/istio.io/istio
/tmp/templates/addons ~/go/src/istio.io/istio
~/go/src/istio.io/istio
/tmp/templates ~/go/src/istio.io/istio
~/go/src/istio.io/istio
/tmp/templates ~/go/src/istio.io/istio
~/go/src/istio.io/istio
-a docker.io/istio,0.7.0
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio.yaml
/root/go/out/linux_amd64/release/helm template --set global.tag=0.7.0 \
--namespace=istio-system \
--set global.hub=docker.io/istio \
--values install/kubernetes/helm/istio/values-istio.yaml \
install/kubernetes/helm/istio >> install/kubernetes/istio.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio-auth.yaml
/root/go/out/linux_amd64/release/helm template --set global.tag=0.7.0 \
--namespace=istio-system \
--set global.hub=docker.io/istio \
--values install/kubernetes/helm/istio/values-istio-auth.yaml \
install/kubernetes/helm/istio >> install/kubernetes/istio-auth.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio-one-namespace.yaml
/root/go/out/linux_amd64/release/helm template --set global.tag=0.7.0 \
--namespace=istio-system \
--set global.hub=docker.io/istio \
--values install/kubernetes/helm/istio/values-istio-one-namespace.yaml \
install/kubernetes/helm/istio >> install/kubernetes/istio-one-namespace.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio-one-namespace-auth.yaml
/root/go/out/linux_amd64/release/helm template --set global.tag=0.7.0 \
--namespace=istio-system \
--set global.hub=docker.io/istio \
--values install/kubernetes/helm/istio/values-istio-one-namespace-auth.yaml \
install/kubernetes/helm/istio >> install/kubernetes/istio-one-namespace-auth.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio-multicluster.yaml
/root/go/out/linux_amd64/release/helm template --set global.tag=0.7.0 \
--namespace=istio-system \
--set global.hub=docker.io/istio \
--values install/kubernetes/helm/istio/values-istio-multicluster.yaml \
install/kubernetes/helm/istio >> install/kubernetes/istio-multicluster.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio-auth-multicluster.yaml
/root/go/out/linux_amd64/release/helm template --set global.tag=0.7.0 \
--namespace=istio-system \
--set global.hub=docker.io/istio \
--values install/kubernetes/helm/istio/values-istio-auth-multicluster.yaml \
install/kubernetes/helm/istio >> install/kubernetes/istio-auth-multicluster.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
make[1]: Entering directory '/root/go/src/istio.io/istio'
cat install/kubernetes/templates/namespace.yaml > install/kubernetes/istio-remote.yaml
/root/go/out/linux_amd64/release/helm template --namespace=istio-system \
--set global.pilotEndpoint="pilotIpReplace" \
--set global.policyEndpoint="mixerIpReplace" \
install/kubernetes/helm/istio-remote >> install/kubernetes/istio-remote.yaml
make[1]: Leaving directory '/root/go/src/istio.io/istio'
go test -v -timeout 20m ./tests/e2e/tests/simple -args --auth_enable --istioctl /root/istio-0.7.0/bin/istioctl --istioctl=/root/go/out/linux_amd64/release/istioctl --mixer_tag=0.7.0 --pilot_tag=0.7.0 --proxy_tag=0.7.0 --ca_tag=0.7.0 --galley_tag=0.7.0 --mixer_hub=docker.io/istio --pilot_hub=docker.io/istio --proxy_hub=docker.io/istio --ca_hub=docker.io/istio --galley_hub=docker.io/istio
2018-04-19T22:46:14.915033Z info Logging initialized
2018-04-19T22:46:14.915102Z info Using temp dir /tmp/istio.e2e.336982973
2018-04-19T22:46:14.915267Z info Using release dir: /root/go/src/istio.io/istio
2018-04-19T22:46:14.915328Z info Fortio hub tag -> image istio/fortio:latest_release
2018-04-19T22:46:14.915335Z info Starting Initialization
2018-04-19T22:46:14.915340Z info Setting up kubeInfo setupSkip=false
2018-04-19T22:46:14.918248Z info Running command kubectl create namespace simple-auth-test-6cc4c1b0ea004dc987e1 --kubeconfig=
2018-04-19T22:46:16.236657Z info namespace simple-auth-test-6cc4c1b0ea004dc987e1 created
2018-04-19T22:46:16.236692Z info Running command kubectl apply -n simple-auth-test-6cc4c1b0ea004dc987e1 -f /tmp/istio.e2e.336982973/yaml/istio-one-namespace-auth.yaml --kubeconfig=
2018-04-19T22:46:22.326384Z info Command output:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace "simple-auth-test-6cc4c1b0ea004dc987e1" configured
configmap "istio-statsd-prom-bridge" created
configmap "istio-mixer-custom-resources" created
configmap "prometheus" created
configmap "istio" created
serviceaccount "istio-ingress-service-account" created
serviceaccount "istio-mixer-service-account" created
serviceaccount "istio-pilot-service-account" created
serviceaccount "prometheus" created
serviceaccount "istio-citadel-service-account" created
customresourcedefinition "rules.config.istio.io" created
customresourcedefinition "attributemanifests.config.istio.io" created
customresourcedefinition "circonuses.config.istio.io" created
customresourcedefinition "deniers.config.istio.io" created
customresourcedefinition "fluentds.config.istio.io" created
customresourcedefinition "kubernetesenvs.config.istio.io" created
customresourcedefinition "listcheckers.config.istio.io" created
customresourcedefinition "memquotas.config.istio.io" created
customresourcedefinition "noops.config.istio.io" created
customresourcedefinition "opas.config.istio.io" created
customresourcedefinition "prometheuses.config.istio.io" created
customresourcedefinition "rbacs.config.istio.io" created
customresourcedefinition "servicecontrols.config.istio.io" created
customresourcedefinition "solarwindses.config.istio.io" created
customresourcedefinition "stackdrivers.config.istio.io" created
customresourcedefinition "statsds.config.istio.io" created
customresourcedefinition "stdios.config.istio.io" created
customresourcedefinition "apikeys.config.istio.io" created
customresourcedefinition "authorizations.config.istio.io" created
customresourcedefinition "checknothings.config.istio.io" created
customresourcedefinition "kuberneteses.config.istio.io" created
customresourcedefinition "listentries.config.istio.io" created
customresourcedefinition "logentries.config.istio.io" created
customresourcedefinition "metrics.config.istio.io" created
customresourcedefinition "quotas.config.istio.io" created
customresourcedefinition "reportnothings.config.istio.io" created
customresourcedefinition "servicecontrolreports.config.istio.io" created
customresourcedefinition "tracespans.config.istio.io" created
customresourcedefinition "serviceroles.config.istio.io" created
customresourcedefinition "servicerolebindings.config.istio.io" created
customresourcedefinition "destinationpolicies.config.istio.io" created
customresourcedefinition "egressrules.config.istio.io" created
customresourcedefinition "routerules.config.istio.io" created
customresourcedefinition "virtualservices.networking.istio.io" created
customresourcedefinition "destinationrules.networking.istio.io" created
customresourcedefinition "externalservices.networking.istio.io" created
clusterrole "istio-ingress-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrole "istio-mixer-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrole "istio-pilot-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrole "prometheus-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrolebinding "prometheus-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrole "istio-citadel-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrolebinding "istio-ingress-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrolebinding "istio-mixer-admin-role-binding-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrolebinding "istio-pilot-simple-auth-test-6cc4c1b0ea004dc987e1" created
clusterrolebinding "istio-citadel-simple-auth-test-6cc4c1b0ea004dc987e1" created
service "istio-ingress" created
service "istio-policy" created
service "istio-telemetry" created
service "istio-statsd-prom-bridge" created
deployment "istio-statsd-prom-bridge" created
service "istio-pilot" created
service "prometheus" created
service "istio-citadel" created
deployment "istio-ingress" created
deployment "istio-policy" created
deployment "istio-telemetry" created
deployment "istio-pilot" created
deployment "prometheus" created
deployment "istio-citadel" created
job "istio-mixer-create-cr" created
horizontalpodautoscaler "istio-ingress" created
2018-04-19T22:46:22.326443Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 get deployment -o name --kubeconfig=
2018-04-19T22:46:22.507710Z info Command output:
deployments/istio-citadel
deployments/istio-ingress
deployments/istio-pilot
deployments/istio-policy
deployments/istio-statsd-prom-bridge
deployments/istio-telemetry
deployments/prometheus
2018-04-19T22:46:22.507821Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/prometheus --kubeconfig=
2018-04-19T22:46:22.508054Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/istio-pilot --kubeconfig=
2018-04-19T22:46:22.508159Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/istio-citadel --kubeconfig=
2018-04-19T22:46:22.508250Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/istio-ingress --kubeconfig=
2018-04-19T22:46:22.508329Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/istio-statsd-prom-bridge --kubeconfig=
2018-04-19T22:46:22.508411Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/istio-policy --kubeconfig=
2018-04-19T22:46:22.508479Z info Running command kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 rollout status deployments/istio-telemetry --kubeconfig=
2018-04-19T22:50:22.507887Z error Failed to deploy Istio.
2018-04-19T22:50:22.507972Z error Failed to complete Init. Error context deadline exceeded
2018-04-19T22:50:22.507985Z info Saving logs
2018-04-19T22:50:22.507993Z info Creating status file
2018-04-19T22:50:22.508219Z info Created Status file /tmp/istio.e2e.336982973/simple_auth_test.json
2018-04-19T22:50:22.508230Z info Running command kubectl get ingress --all-namespaces --kubeconfig=
2018-04-19T22:50:22.642102Z info Command output:
No resources found.
2018-04-19T22:50:22.642140Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 --kubeconfig=
2018-04-19T22:50:23.214759Z info Command output:
NAME READY STATUS RESTARTS AGE
istio-citadel-5f4655c487-gcqgx 0/1 ImagePullBackOff 0 4m
istio-ingress-68fc59b496-pvt9t 1/1 Running 0 4m
istio-pilot-c9c69f9d-tz2ch 0/2 ContainerCreating 0 4m
istio-policy-69d989bcfc-s4vzw 1/2 CrashLoopBackOff 5 4m
istio-statsd-prom-bridge-6dbb7dcc7f-k5kvj 1/1 Running 0 4m
istio-telemetry-66c58b8d68-5zlgx 1/2 CrashLoopBackOff 4 4m
prometheus-586d95b8d9-r2cdw 1/1 Running 0 4m
2018-04-19T22:50:23.214797Z info Fetching logs on istio-citadel-5f4655c487-gcqgx
2018-04-19T22:50:23.214807Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 istio-citadel-5f4655c487-gcqgx -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:23.342327Z info Command output:
citadel
2018-04-19T22:50:23.342420Z info Running command kubectl logs istio-citadel-5f4655c487-gcqgx -n simple-auth-test-6cc4c1b0ea004dc987e1 -c citadel --kubeconfig=
2018-04-19T22:50:23.515602Z info Command error: exit status 1
2018-04-19T22:50:23.515677Z info Fetching logs on istio-ingress-68fc59b496-pvt9t
2018-04-19T22:50:23.515686Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 istio-ingress-68fc59b496-pvt9t -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:23.684038Z info Command output:
ingress
2018-04-19T22:50:23.684210Z info Running command kubectl logs istio-ingress-68fc59b496-pvt9t -n simple-auth-test-6cc4c1b0ea004dc987e1 -c ingress --kubeconfig=
2018-04-19T22:50:23.892630Z info Running command kubectl logs istio-ingress-68fc59b496-pvt9t -n simple-auth-test-6cc4c1b0ea004dc987e1 -c ingress -p --kubeconfig=
2018-04-19T22:50:24.079246Z info Command error: exit status 1
2018-04-19T22:50:24.079306Z info No previous log command failed: "Error from server (BadRequest): previous terminated container \"ingress\" in pod \"istio-ingress-68fc59b496-pvt9t\" not found\n" exit status 1
2018-04-19T22:50:24.079325Z info Fetching logs on istio-pilot-c9c69f9d-tz2ch
2018-04-19T22:50:24.079335Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 istio-pilot-c9c69f9d-tz2ch -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:24.188383Z info Command output:
discovery istio-proxy
2018-04-19T22:50:24.188460Z info Running command kubectl logs istio-pilot-c9c69f9d-tz2ch -n simple-auth-test-6cc4c1b0ea004dc987e1 -c discovery --kubeconfig=
2018-04-19T22:50:24.364757Z info Command error: exit status 1
2018-04-19T22:50:24.364802Z info Fetching logs on istio-policy-69d989bcfc-s4vzw
2018-04-19T22:50:24.364810Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 istio-policy-69d989bcfc-s4vzw -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:24.472405Z info Command output:
mixer istio-proxy
2018-04-19T22:50:24.472473Z info Running command kubectl logs istio-policy-69d989bcfc-s4vzw -n simple-auth-test-6cc4c1b0ea004dc987e1 -c mixer --kubeconfig=
2018-04-19T22:50:24.655268Z info Running command kubectl logs istio-policy-69d989bcfc-s4vzw -n simple-auth-test-6cc4c1b0ea004dc987e1 -c mixer -p --kubeconfig=
2018-04-19T22:50:24.812379Z info Command error: exit status 1
2018-04-19T22:50:24.812444Z info No previous log command failed: "Error from server (BadRequest): previous terminated container \"mixer\" in pod \"istio-policy-69d989bcfc-s4vzw\" not found\n" exit status 1
2018-04-19T22:50:24.812489Z info Running command kubectl logs istio-policy-69d989bcfc-s4vzw -n simple-auth-test-6cc4c1b0ea004dc987e1 -c istio-proxy --kubeconfig=
2018-04-19T22:50:24.974552Z info Running command kubectl logs istio-policy-69d989bcfc-s4vzw -n simple-auth-test-6cc4c1b0ea004dc987e1 -c istio-proxy -p --kubeconfig=
2018-04-19T22:50:25.138738Z info Fetching logs on istio-statsd-prom-bridge-6dbb7dcc7f-k5kvj
2018-04-19T22:50:25.138761Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 istio-statsd-prom-bridge-6dbb7dcc7f-k5kvj -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:25.249027Z info Command output:
statsd-prom-bridge
2018-04-19T22:50:25.249095Z info Running command kubectl logs istio-statsd-prom-bridge-6dbb7dcc7f-k5kvj -n simple-auth-test-6cc4c1b0ea004dc987e1 -c statsd-prom-bridge --kubeconfig=
2018-04-19T22:50:25.426466Z info Running command kubectl logs istio-statsd-prom-bridge-6dbb7dcc7f-k5kvj -n simple-auth-test-6cc4c1b0ea004dc987e1 -c statsd-prom-bridge -p --kubeconfig=
2018-04-19T22:50:25.633974Z info Command error: exit status 1
2018-04-19T22:50:25.634023Z info No previous log command failed: "Error from server (BadRequest): previous terminated container \"statsd-prom-bridge\" in pod \"istio-statsd-prom-bridge-6dbb7dcc7f-k5kvj\" not found\n" exit status 1
2018-04-19T22:50:25.634035Z info Fetching logs on istio-telemetry-66c58b8d68-5zlgx
2018-04-19T22:50:25.634053Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 istio-telemetry-66c58b8d68-5zlgx -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:25.777687Z info Command output:
mixer istio-proxy
2018-04-19T22:50:25.777758Z info Running command kubectl logs istio-telemetry-66c58b8d68-5zlgx -n simple-auth-test-6cc4c1b0ea004dc987e1 -c mixer --kubeconfig=
2018-04-19T22:50:25.988987Z info Running command kubectl logs istio-telemetry-66c58b8d68-5zlgx -n simple-auth-test-6cc4c1b0ea004dc987e1 -c mixer -p --kubeconfig=
2018-04-19T22:50:26.167675Z info Command error: exit status 1
2018-04-19T22:50:26.167873Z info No previous log command failed: "Error from server (BadRequest): previous terminated container \"mixer\" in pod \"istio-telemetry-66c58b8d68-5zlgx\" not found\n" exit status 1
2018-04-19T22:50:26.168072Z info Running command kubectl logs istio-telemetry-66c58b8d68-5zlgx -n simple-auth-test-6cc4c1b0ea004dc987e1 -c istio-proxy --kubeconfig=
2018-04-19T22:50:26.363584Z info Running command kubectl logs istio-telemetry-66c58b8d68-5zlgx -n simple-auth-test-6cc4c1b0ea004dc987e1 -c istio-proxy -p --kubeconfig=
2018-04-19T22:50:26.533788Z info Fetching logs on prometheus-586d95b8d9-r2cdw
2018-04-19T22:50:26.533814Z info Running command kubectl get pods -n simple-auth-test-6cc4c1b0ea004dc987e1 prometheus-586d95b8d9-r2cdw -o jsonpath={.spec.containers[*].name} --kubeconfig=
2018-04-19T22:50:26.641684Z info Command output:
prometheus
2018-04-19T22:50:26.641872Z info Running command kubectl logs prometheus-586d95b8d9-r2cdw -n simple-auth-test-6cc4c1b0ea004dc987e1 -c prometheus --kubeconfig=
2018-04-19T22:50:26.843336Z info Running command kubectl logs prometheus-586d95b8d9-r2cdw -n simple-auth-test-6cc4c1b0ea004dc987e1 -c prometheus -p --kubeconfig=
2018-04-19T22:50:26.999188Z info Command error: exit status 1
2018-04-19T22:50:26.999229Z info No previous log command failed: "Error from server (BadRequest): previous terminated container \"prometheus\" in pod \"prometheus-586d95b8d9-r2cdw\" not found\n" exit status 1
2018-04-19T22:50:26.999241Z info Fetching deployment info on pod
2018-04-19T22:50:26.999249Z info Running command kubectl get pod -n simple-auth-test-6cc4c1b0ea004dc987e1 -o yaml --kubeconfig=
2018-04-19T22:50:27.133956Z info Fetching deployment info on service
2018-04-19T22:50:27.133984Z info Running command kubectl get service -n simple-auth-test-6cc4c1b0ea004dc987e1 -o yaml --kubeconfig=
2018-04-19T22:50:27.245866Z info Fetching deployment info on ingress
2018-04-19T22:50:27.245894Z info Running command kubectl get ingress -n simple-auth-test-6cc4c1b0ea004dc987e1 -o yaml --kubeconfig=
2018-04-19T22:50:27.375567Z warn Log saving incomplete: 2 errors occurred:
* command failed: "Error from server (BadRequest): container \"citadel\" in pod \"istio-citadel-5f4655c487-gcqgx\" is waiting to start: trying and failing to pull image\n" exit status 1
* command failed: "Error from server (BadRequest): container \"discovery\" in pod \"istio-pilot-c9c69f9d-tz2ch\" is waiting to start: ContainerCreating\n" exit status 1
2018-04-19T22:50:27.375590Z info Starting Cleanup
2018-04-19T22:50:27.375596Z info Cleaning up kubeInfo
2018-04-19T22:50:27.375602Z info Running command kubectl delete namespace simple-auth-test-6cc4c1b0ea004dc987e1 --kubeconfig=
2018-04-19T22:50:27.506679Z info Command output:
namespace "simple-auth-test-6cc4c1b0ea004dc987e1" deleted
2018-04-19T22:50:27.506711Z info Running command kubectl get --kubeconfig= clusterrolebinding -o jsonpath={.items[*].metadata.name}|xargs -n 1|fgrep simple-auth-test-6cc4c1b0ea004dc987e1|xargs kubectl delete --kubeconfig= clusterrolebinding
2018-04-19T22:50:27.823096Z info Command output:
clusterrolebinding "istio-citadel-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrolebinding "istio-ingress-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrolebinding "istio-mixer-admin-role-binding-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrolebinding "istio-pilot-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrolebinding "prometheus-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
2018-04-19T22:50:27.823135Z info Running command kubectl get --kubeconfig= clusterrole -o jsonpath={.items[*].metadata.name}|xargs -n 1|fgrep simple-auth-test-6cc4c1b0ea004dc987e1|xargs kubectl delete --kubeconfig= clusterrole
2018-04-19T22:50:28.150378Z info Command output:
clusterrole "istio-citadel-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrole "istio-ingress-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrole "istio-mixer-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrole "istio-pilot-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
clusterrole "prometheus-simple-auth-test-6cc4c1b0ea004dc987e1" deleted
2018-04-19T22:50:28.150429Z info Deleting namespace simple-auth-test-6cc4c1b0ea004dc987e1
2018-04-19T22:50:34.278161Z info Command error: exit status 1
2018-04-19T22:50:34.328564Z info Command error: exit status 1
2018-04-19T22:50:34.443877Z info Command error: exit status 1
2018-04-19T22:51:08.852562Z info Namespace simple-auth-test-6cc4c1b0ea004dc987e1 deletion status: true
2018-04-19T22:51:08.852608Z info Cleanup complete
FAIL istio.io/istio/tests/e2e/tests/simple 293.996s
tests/istio.mk:89: recipe for target 'e2e_simple_run' failed
make: *** [e2e_simple_run] Error 1
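
The two warnings above explain why log collection was incomplete: the citadel image could not be pulled and the pilot discovery container was still in ContainerCreating. A hedged way to inspect failures like these while the test namespace still exists (the namespace and pod names below are the ones from this run; yours will differ) is to describe the failing pods and review their events:

# kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 get po
# kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 describe po istio-citadel-5f4655c487-gcqgx
# kubectl -n simple-auth-test-6cc4c1b0ea004dc987e1 describe po istio-pilot-c9c69f9d-tz2ch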

View the NGINX Ingress Controller (IC) services:

# kubectl get svc
NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
<SNIP>
ccp-addons-nginx-ingress-controller        ClusterIP   10.111.217.252   <none>        80/TCP,443/TCP   2d
ccp-addons-nginx-ingress-default-backend   ClusterIP   10.108.208.148   <none>        80/TCP           2d

Check the details of each NGINX IC service:

root@istio-dev-ma618174f14:~# kubectl get svc ccp-addons-nginx-ingress-controller -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-04-17T21:13:41Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: controller
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-controller
  namespace: default
  resourceVersion: "2225"
  selfLink: /api/v1/namespaces/default/services/ccp-addons-nginx-ingress-controller
  uid: 3435c185-4284-11e8-86c2-005056bcada2
spec:
  clusterIP: 10.111.217.252
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx-ingress
    component: controller
    release: ccp-addons
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

root@istio-dev-ma618174f14:~# kubectl get svc ccp-addons-nginx-ingress-default-backend -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-04-17T21:13:41Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: default-backend
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-default-backend
  namespace: default
  resourceVersion: "2228"
  selfLink: /api/v1/namespaces/default/services/ccp-addons-nginx-ingress-default-backend
  uid: 346143e6-4284-11e8-86c2-005056bcada2
spec:
  clusterIP: 10.108.208.148
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx-ingress
    component: default-backend
    release: ccp-addons
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
root@istio-dev-ma618174f14:~# 

The details of the NGINX IC default backend Deployment:

# kubectl get deploy ccp-addons-nginx-ingress-default-backend -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-04-17T21:13:42Z
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: default-backend
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-default-backend
  namespace: default
  resourceVersion: "2531"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/ccp-addons-nginx-ingress-default-backend
  uid: 34eabd54-4284-11e8-86c2-005056bcada2
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: default-backend
      release: ccp-addons
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: default-backend
        release: ccp-addons
    spec:
      containers:
      - image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/defaultbackend:1.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-default-backend
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-04-17T21:13:43Z
    lastUpdateTime: 2018-04-17T21:13:43Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

The NGINX IC ConfigMaps:

# kubectl get cm ccp-addons-nginx-ingress-controller -o yaml
apiVersion: v1
data:
  enable-vts-status: "false"
  proxy-connect-timeout: "60"
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
kind: ConfigMap
metadata:
  creationTimestamp: 2018-04-17T21:13:37Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: controller
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-controller
  namespace: default
  resourceVersion: "2152"
  selfLink: /api/v1/namespaces/default/configmaps/ccp-addons-nginx-ingress-controller
  uid: 31df6407-4284-11e8-86c2-005056bcada2
root@istio-dev-ma618174f14:~# kubectl get cm ingress-controller-leader-nginx -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"ccp-addons-nginx-ingress-controller-qq2kj","leaseDurationSeconds":30,"acquireTime":"2018-04-17T21:13:58Z","renewTime":"2018-04-20T20:28:12Z","leaderTransitions":0}'
  creationTimestamp: 2018-04-17T21:13:58Z
  name: ingress-controller-leader-nginx
  namespace: default
  resourceVersion: "374212"
  selfLink: /api/v1/namespaces/default/configmaps/ingress-controller-leader-nginx
  uid: 3e3dbff8-4284-11e8-86c2-005056bcada2

Check the NGINX IC pod details:

# kubectl get po
NAME                                                        READY     STATUS    RESTARTS   AGE
ccp-addons-nginx-ingress-controller-fl795                   1/1       Running   0          2d
ccp-addons-nginx-ingress-controller-qq2kj                   1/1       Running   0          2d
ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79   1/1       Running   0          2d
<SNIP>

# kubectl get po ccp-addons-nginx-ingress-controller-qq2kj -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: 2aa57cac86dc8ec633d5d2138002ee7831fc79ba00f35c29b09324e6a503d2ab
    cni.projectcalico.org/podIP: 10.51.1.3/32
  creationTimestamp: 2018-04-17T21:13:42Z
  generateName: ccp-addons-nginx-ingress-controller-
  labels:
    app: nginx-ingress
    component: controller
    controller-revision-hash: "2789669043"
    pod-template-generation: "1"
    release: ccp-addons
  name: ccp-addons-nginx-ingress-controller-qq2kj
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: ccp-addons-nginx-ingress-controller
    uid: 34a00cf0-4284-11e8-86c2-005056bcada2
  resourceVersion: "2532"
  selfLink: /api/v1/namespaces/default/pods/ccp-addons-nginx-ingress-controller-qq2kj
  uid: 34ec5035-4284-11e8-86c2-005056bcada2
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --default-backend-service=default/ccp-addons-nginx-ingress-default-backend
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --configmap=default/ccp-addons-nginx-ingress-controller
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/nginx-ingress-controller:0.9.0-beta.15
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: nginx-ingress-controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: ccp-addons-nginx-ingress-token-zv6ch
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: istio-dev-waa1fbf3da6
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: ccp-addons-nginx-ingress
  serviceAccountName: ccp-addons-nginx-ingress
  terminationGracePeriodSeconds: 60
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - name: ccp-addons-nginx-ingress-token-zv6ch
    secret:
      defaultMode: 420
      secretName: ccp-addons-nginx-ingress-token-zv6ch
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-04-17T21:13:42Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-04-17T21:14:04Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-04-17T21:13:50Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://ad077a530206f45b37f193e685494574aa048b2107bf442aa7042057152d3253
    image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/nginx-ingress-controller:0.9.0-beta.15
    imageID: docker-pullable://registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/nginx-ingress-controller@sha256:884968ca9ea71eb566b769ef3773a9869da44feb1dd006693e31f27d785cf2f1
    lastState: {}
    name: nginx-ingress-controller
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-04-17T21:13:50Z
  hostIP: 10.1.1.114
  phase: Running
  podIP: 10.51.1.3
  qosClass: BestEffort
  startTime: 2018-04-17T21:13:42Z

Check the NGINX default backend pod details:

# kubectl get po
NAME                                                        READY     STATUS    RESTARTS   AGE
ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79   1/1       Running   0          2d
<SNIP>

# kubectl get po ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.51.2.5/32
  creationTimestamp: 2018-04-17T21:13:42Z
  generateName: ccp-addons-nginx-ingress-default-backend-64975648dd-
  labels:
    app: nginx-ingress
    component: default-backend
    pod-template-hash: "2053120488"
    release: ccp-addons
  name: ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ccp-addons-nginx-ingress-default-backend-64975648dd
    uid: 34f46c49-4284-11e8-86c2-005056bcada2
  resourceVersion: "2518"
  selfLink: /api/v1/namespaces/default/pods/ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79
  uid: 3507d5a8-4284-11e8-86c2-005056bcada2
spec:
  containers:
  - image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/defaultbackend:1.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: nginx-ingress-default-backend
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kknjg
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: istio-dev-wfe429a0861
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 60
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-kknjg
    secret:
      defaultMode: 420
      secretName: default-token-kknjg
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-04-17T21:13:43Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-04-17T21:14:01Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-04-17T21:13:43Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://ff9795ce7e0ad54e743167aa0c02da3b6944732fcfa2c16867dccaa92b5a9109
    image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/defaultbackend:1.3
    imageID: docker-pullable://registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/defaultbackend@sha256:5e635236018ac20db4a64f4b4a89b0ceea7867fea372984c01477b1522deea8b
    lastState: {}
    name: nginx-ingress-default-backend
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-04-17T21:13:58Z
  hostIP: 10.1.1.115
  phase: Running
  podIP: 10.51.2.5
  qosClass: BestEffort
  startTime: 2018-04-17T21:13:43Z

My CCP NGINX Ingress Controller and Default Backend status:

# kubectl get deploy
NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ccp-addons-nginx-ingress-default-backend   1         1         1            1           3d
<SNIP>

# kubectl get svc
NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
ccp-addons-nginx-ingress-controller        ClusterIP   10.111.217.252   <none>        80/TCP,443/TCP   3d
ccp-addons-nginx-ingress-default-backend   ClusterIP   10.108.208.148   <none>        80/TCP           3d
<SNIP>

# kubectl get po
NAME                                                        READY     STATUS    RESTARTS   AGE
ccp-addons-nginx-ingress-controller-fl795                   1/1       Running   0          3d
ccp-addons-nginx-ingress-controller-qq2kj                   1/1       Running   0          3d
ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79   1/1       Running   0          3d
<SNIP>

I can curl the NGINX Ingress Controller service ClusterIP; /healthz returns 200 and unmatched paths return a 404 from the default backend:

# curl -I 10.111.217.252:80/healthz
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Fri, 20 Apr 2018 21:45:33 GMT
Content-Type: text/html
Content-Length: 0
Connection: keep-alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;

# curl -I 10.111.217.252:80
HTTP/1.1 404 Not Found
Server: nginx/1.13.5
Date: Fri, 20 Apr 2018 21:45:41 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 21
Connection: keep-alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;

My httpbin manifest, which creates the httpbin Service and Deployment:

# cat istio-0.7.0/samples/httpbin/httpbin.yaml 
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
  selector:
    app: httpbin
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/citizenstig/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 8000

My httpbin Ingress spec:

root@istio-dev-ma618174f14:~# cat httpbin-ing.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /status/.*
        backend:
          serviceName: httpbin
          servicePort: 8000
      - path: /delay/.*
        backend:
          serviceName: httpbin
          servicePort: 8000
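
The Deployment/Service and the Ingress above can be applied with kubectl (assuming the file paths shown):

# kubectl apply -f istio-0.7.0/samples/httpbin/httpbin.yaml
# kubectl apply -f httpbin-ing.yaml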

The status of my httpbin service, pod and ingress:

$ kubectl get svc
NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
<SNIP>
httpbin                                    ClusterIP   10.106.46.199    <none>        8000/TCP         29m

$ kubectl get po
NAME                                                        READY     STATUS    RESTARTS   AGE
<SNIP>
httpbin-99bf57b99-kwvq8                                     1/1       Running   0          30m

$ kubectl get ing -o wide
NAME             HOSTS     ADDRESS                 PORTS     AGE
simple-ingress   *         10.1.1.114,10.1.1.115   80        28m

Note: The Ingress addresses (10.1.1.114, 10.1.1.115) are the node IPs of the 2 CCP worker nodes.

I can curl the httpbin endpoints from my CCP Master Node within the cluster using the ClusterIP:

$ curl -I 10.106.46.199:8000/status/404
HTTP/1.1 404 NOT FOUND
Server: gunicorn/19.6.0
Date: Fri, 20 Apr 2018 21:49:23 GMT
Connection: close
Content-Type: text/html; charset=utf-8
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Content-Length: 0

ccpuser@istio-dev-ma618174f14:~$ curl -I 10.106.46.199:8000/headers
HTTP/1.1 200 OK
Server: gunicorn/19.6.0
Date: Fri, 20 Apr 2018 21:49:30 GMT
Connection: close
Content-Type: application/json
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Content-Length: 112

I exit the CCP master node back to my jumphost. From there I can ping the Ingress addresses (i.e. the Kubernetes worker node ExternalIPs), but I cannot curl the httpbin service through the Ingress:

root@ssh-jump:~# ping 10.1.1.114
PING 10.1.1.114 (10.1.1.114) 56(84) bytes of data.
64 bytes from 10.1.1.114: icmp_seq=1 ttl=64 time=0.259 ms
64 bytes from 10.1.1.114: icmp_seq=2 ttl=64 time=0.342 ms
^C
--- 10.1.1.114 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.259/0.300/0.342/0.044 ms

root@ssh-jump:~# ping 10.1.1.115
PING 10.1.1.115 (10.1.1.115) 56(84) bytes of data.
64 bytes from 10.1.1.115: icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from 10.1.1.115: icmp_seq=2 ttl=64 time=0.301 ms
^C
--- 10.1.1.115 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.228/0.264/0.301/0.039 ms

root@ssh-jump:~# curl -I 10.1.1.114:80/status/200
curl: (7) Failed to connect to 10.1.1.114 port 80: Connection refused
root@ssh-jump:~# curl -I 10.1.1.114:80/headers
curl: (7) Failed to connect to 10.1.1.114 port 80: Connection refused
root@ssh-jump:~# curl -I 10.1.1.115:80/status/200
curl: (7) Failed to connect to 10.1.1.115 port 80: Connection refused
root@ssh-jump:~# curl -I 10.1.1.115:80/headers
curl: (7) Failed to connect to 10.1.1.115 port 80: Connection refused

I removed the httpbin Deployment and Ingress, then updated the httpbin Service to use type NodePort:

<SNIP>
spec:
  type: NodePort
  ports:
  - name: http
    port: 8000
    nodePort: 32000
<SNIP>
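
For completeness, a sketch of the full httpbin Service manifest with type NodePort; the port numbers match the snippet above and the rest mirrors the original sample:

apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  type: NodePort
  ports:
  - name: http
    port: 8000
    nodePort: 32000
  selector:
    app: httpbin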

The following guide provides instructions for testing an Ingress on CCP.

CCP NGINX Ingress Controller DaemonSet Spec:

# kubectl get ds ccp-addons-nginx-ingress-controller -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"app":"nginx-ingress","chart":"nginx-ingress-0.8.26","component":"controller","heritage":"Tiller","release":"ccp-addons"},"name":"ccp-addons-nginx-ingress-controller","namespace":"default"},"spec":{"selector":{"matchLabels":{"app":"nginx-ingress","component":"controller","release":"ccp-addons"}},"template":{"metadata":{"labels":{"app":"nginx-ingress","component":"controller","release":"ccp-addons"}},"spec":{"containers":[{"args":["/nginx-ingress-controller","--default-backend-service=default/ccp-addons-nginx-ingress-default-backend","--election-id=ingress-controller-leader","--ingress-class=nginx","--configmap=default/ccp-addons-nginx-ingress-controller"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}}],"image":"registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/nginx-ingress-controller:0.9.0-beta.15","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"nginx-ingress-controller","ports":[{"containerPort":80,"hostPort":80,"name":"http","protocol":"TCP"},{"containerPort":443,"hostPort":443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1}}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","serviceAccount":"ccp-addons-nginx-ingress","serviceAccountName":"ccp-addons-nginx-ingress","terminationGracePeriodSeconds":60}},"updateStrategy":{"type":"OnDelete"}}}
  creationTimestamp: 2018-04-23T16:52:04Z
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: controller
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-controller
  namespace: default
  resourceVersion: "727375"
  selfLink: /apis/extensions/v1beta1/namespaces/default/daemonsets/ccp-addons-nginx-ingress-controller
  uid: a684166e-4716-11e8-86c2-005056bcada2
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: ccp-addons
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: controller
        release: ccp-addons
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=default/ccp-addons-nginx-ingress-default-backend
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=default/ccp-addons-nginx-ingress-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/nginx-ingress-controller:0.9.0-beta.15
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ccp-addons-nginx-ingress
      serviceAccountName: ccp-addons-nginx-ingress
      terminationGracePeriodSeconds: 60
  templateGeneration: 1
  updateStrategy:
    type: OnDelete
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberAvailable: 2
  numberMisscheduled: 0
  numberReady: 2
  observedGeneration: 1
  updatedNumberScheduled: 2

hostPort is needed to expose the NGINX Ingress Controller on each host in the DaemonSet. Without hostPort, the following error is observed when trying to curl an Ingress from outside the cluster:

# curl http://echo.example.com 
curl: (7) Failed to connect to echo.example.com port 80: Connection refused

Add hostPort: 80 under containerPort: 80 in the default CCP NGINX Ingress Controller DaemonSet spec for Ingress to work. I expect hostPort: 443 is similarly needed under containerPort: 443 for TLS Ingresses:

<SNIP>
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
          hostPort: 80
        - containerPort: 443
          name: https
          protocol: TCP
          hostPort: 443
<SNIP>
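
One way to apply this change is a JSON patch against the DaemonSet (a sketch; the container and port indexes assume the spec shown above). Because the DaemonSet uses updateStrategy: OnDelete, the existing controller pods must be deleted before the new template with hostPort takes effect:

# kubectl patch ds ccp-addons-nginx-ingress-controller --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/ports/0/hostPort","value":80},
       {"op":"add","path":"/spec/template/spec/containers/0/ports/1/hostPort","value":443}]'
# kubectl delete po -l app=nginx-ingress,component=controller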

NGINX Ingress Controller Default Backend Deployment spec:

# kubectl get deploy ccp-addons-nginx-ingress-default-backend -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-04-17T21:13:42Z
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: default-backend
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-default-backend
  namespace: default
  resourceVersion: "2531"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/ccp-addons-nginx-ingress-default-backend
  uid: 34eabd54-4284-11e8-86c2-005056bcada2
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: default-backend
      release: ccp-addons
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: default-backend
        release: ccp-addons
    spec:
      containers:
      - image: registry.ci.ciscolabs.com/cpsg_ccp-charts/k8s.gcr.io/defaultbackend:1.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-default-backend
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-04-17T21:13:43Z
    lastUpdateTime: 2018-04-17T21:13:43Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Running pods related to the NGINX Ingress Controller/Default Backend:

# kubectl get po
NAME                                                        READY     STATUS    RESTARTS   AGE
ccp-addons-nginx-ingress-controller-npgft                   1/1       Running   0          2h
ccp-addons-nginx-ingress-controller-pvhtl                   1/1       Running   0          2h
ccp-addons-nginx-ingress-default-backend-64975648dd-h6v79   1/1       Running   0          5d
<SNIP>

NGINX Ingress Controller Default Backend Svc spec:

# kubectl get svc ccp-addons-nginx-ingress-default-backend -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-04-17T21:13:41Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.8.26
    component: default-backend
    heritage: Tiller
    release: ccp-addons
  name: ccp-addons-nginx-ingress-default-backend
  namespace: default
  resourceVersion: "2228"
  selfLink: /api/v1/namespaces/default/services/ccp-addons-nginx-ingress-default-backend
  uid: 346143e6-4284-11e8-86c2-005056bcada2
spec:
  clusterIP: 10.108.208.148
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx-ingress
    component: default-backend
    release: ccp-addons
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Echoheaders app Deployment spec:

# kubectl get deploy echoheaders -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-04-23T17:59:51Z
  generation: 1
  labels:
    run: echoheaders
  name: echoheaders
  namespace: default
  resourceVersion: "733455"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/echoheaders
  uid: 1e7d8b4c-4720-11e8-86c2-005056bcada2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: echoheaders
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: echoheaders
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.4
        imagePullPolicy: IfNotPresent
        name: echoheaders
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-04-23T17:59:51Z
    lastUpdateTime: 2018-04-23T17:59:51Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Test App (Echoheaders) Svc spec:

# kubectl get svc echoheaders -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-04-23T18:00:17Z
  labels:
    run: echoheaders
  name: echoheaders
  namespace: default
  resourceVersion: "733490"
  selfLink: /api/v1/namespaces/default/services/echoheaders
  uid: 2e308453-4720-11e8-86c2-005056bcada2
spec:
  clusterIP: 10.102.178.200
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: echoheaders
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
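
For reference, the run: echoheaders labels suggest these objects were created with kubectl run / kubectl expose. A hedged sketch of equivalent commands (an assumption, not taken from the original steps):

# kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --port=8080
# kubectl expose deployment echoheaders --port=80 --target-port=8080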

The Echoheaders Ingress spec. Note: A host must be specified in the spec for the Ingress to work. Otherwise, the NGINX Ingress Controller will route traffic to the default backend svc:

kubectl get ing -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"echoheaders","namespace":"default"},"spec":{"rules":[{"host":"echo.example.com","http":{"paths":[{"backend":{"serviceName":"echoheaders","servicePort":80},"path":"/"}]}}]}}
    creationTimestamp: 2018-04-23T19:14:23Z
    generation: 1
    name: echoheaders
    namespace: default
    resourceVersion: "740013"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/echoheaders
    uid: 88626d9a-472a-11e8-86c2-005056bcada2
  spec:
    rules:
    - host: echo.example.com
      http:
        paths:
        - backend:
            serviceName: echoheaders
            servicePort: 80
          path: /
  status:
    loadBalancer:
      ingress:
      - ip: 10.1.1.114
      - ip: 10.1.1.115
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

It can take several seconds for Kubernetes to map the addresses to the Ingress. During this time, the ADDRESS column will be empty and the Ingress will be unusable:

# kubectl get ing
NAME          HOSTS                 ADDRESS   PORTS     AGE
echoheaders   echo.example.com                80        29s

Check the Ingress again after waiting and you should see the ADDRESS column populated with the node addresses where the NGINX Ingress Controller is running:

# kubectl get ing -o wide
NAME          HOSTS                 ADDRESS                 PORTS     AGE
echoheaders   echo.example.com   10.1.1.114,10.1.1.115      80        2m

Describe the Ingress to see the mapping of Ingress address <-> backend host <-> backend (i.e. the Kubernetes Service):

# kubectl describe ing/echoheaders
Name:             echoheaders
Namespace:        default
Address:          10.1.1.114,10.1.1.115
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  echo.example.com  
                       /status/.*   echoheaders:80 (<none>)
                       /delay/.*    echoheaders:80 (<none>)
Annotations:
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  CREATE  5m    ingress-controller  Ingress default/echoheaders
  Normal  CREATE  5m    ingress-controller  Ingress default/echoheaders
  Normal  UPDATE  4m    ingress-controller  Ingress default/echoheaders
  Normal  UPDATE  4m    ingress-controller  Ingress default/echoheaders

Create a name-to-IP mapping in /etc/hosts (e.g. 10.1.1.114 echo.example.com) for the test Ingress host echo.example.com, or set the Host header in your curl command:

# curl http://10.1.1.114 -H 'Host: echo.example.com'
CLIENT VALUES:
client_address=10.51.1.37
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://echo.example.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
connection=close
host=echo.example.com
user-agent=curl/7.47.0
x-forwarded-for=10.1.1.113
x-forwarded-host=echo.example.com
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=10.1.1.113
x-scheme=http
BODY:
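
Alternatively, instead of passing the Host header, map the hostname on the client (a sketch, run on the jumphost or wherever curl is issued):

# echo "10.1.1.114 echo.example.com" >> /etc/hosts
# curl -I http://echo.example.com/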

Note: The Echoheaders app listens on the root HTTP path (i.e. /). If your app listens on another path, you will need to set the ingress.kubernetes.io/rewrite-target annotation. Here is an example using the Istio httpbin app, which exposes the /status path:

# kubectl get ing -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.kubernetes.io/rewrite-target: /status
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/status"},"name":"httpbin","namespace":"default"},"spec":{"rules":[{"host":"node114.example.com","http":{"paths":[{"backend":{"serviceName":"httpbin","servicePort":8000},"path":"/status/.*"}]}}]}}
    creationTimestamp: 2018-04-23T20:55:19Z
    generation: 1
    name: httpbin
    namespace: default
    resourceVersion: "748781"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/httpbin
    uid: a1cb54ce-4738-11e8-86c2-005056bcada2
  spec:
    rules:
    - host: echo.example.com
      http:
        paths:
        - backend:
            serviceName: httpbin
            servicePort: 8000
          path: /status/.*
  status:
    loadBalancer:
      ingress:
      - ip: 10.1.1.114
      - ip: 10.1.1.115
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Note: The httpbin app needs to be deployed according to this doc to follow along.

Verify that the expected http path is accessible through the Ingress:

# curl -I http://10.1.1.114/status/200 -H "Host: echo.example.com"
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Mon, 23 Apr 2018 20:56:57 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

This is an example of specifying multiple hosts with different paths in a single Ingress:

# kubectl get ing -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.kubernetes.io/rewrite-target: /delay
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/delay"},"name":"httpbin","namespace":"default"},"spec":{"rules":[{"host":"node114.example.com","http":{"paths":[{"backend":{"serviceName":"httpbin","servicePort":8000},"path":"/status/.*"}]}},{"host":"node115.example.com","http":{"paths":[{"backend":{"serviceName":"httpbin","servicePort":8000},"path":"/delay/.*"}]}}]}}
    creationTimestamp: 2018-04-23T21:21:18Z
    generation: 1
    name: httpbin
    namespace: default
    resourceVersion: "751084"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/httpbin
    uid: 4350f0a1-473c-11e8-86c2-005056bcada2
  spec:
    rules:
    - host: echo.example.com
      http:
        paths:
        - backend:
            serviceName: httpbin
            servicePort: 8000
          path: /status/.*
    - host: echo2.example.com
      http:
        paths:
        - backend:
            serviceName: httpbin
            servicePort: 8000
          path: /delay/.*
  status:
    loadBalancer:
      ingress:
      - ip: 10.1.1.114
      - ip: 10.1.1.115
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Verify that the expected http paths are accessible through the Ingress:

# curl -I http://10.1.1.114/status/200 -H "Host: echo.example.com"
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Mon, 23 Apr 2018 21:23:01 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

root@istio-dev-ma618174f14:~# curl -I http://10.1.1.115/delay/2 -H "Host: echo2.example.com"
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Mon, 23 Apr 2018 21:23:18 GMT
Content-Type: application/json
Content-Length: 385
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

Kubernetes supports running multiple Ingress Controllers. The kubernetes.io/ingress.class annotation is used within an Ingress spec to specify which Ingress Controller to use: nginx for the NGINX Ingress Controller and istio for the Istio Ingress. The Istio sample bookinfo application defines this annotation by default.
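
For example, an Ingress pinned to the NGINX controller carries the annotation in its metadata. A sketch based on the httpbin Ingress above; only the kubernetes.io/ingress.class annotation is new:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: httpbin
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /status
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /status/.*
        backend:
          serviceName: httpbin
          servicePort: 8000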

Update the existing echoserver Ingress with the kubernetes.io/ingress.class: nginx annotation and redeploy the Ingress. Then test access to both the Istio bookinfo Ingress and the NGINX httpbin Ingress:

# curl -I http://10.1.1.114/status/200 -H "Host: node114.example.com"
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Tue, 24 Apr 2018 01:09:28 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

# curl -I http://10.1.1.115/delay/2 -H "Host: node115.example.com"
HTTP/1.1 200 OK
Server: nginx/1.13.5
Date: Tue, 24 Apr 2018 01:09:33 GMT
Content-Type: application/json
Content-Length: 385
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

# curl -I http://10.1.1.114:32000/productpage
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 5723
server: envoy
date: Tue, 24 Apr 2018 01:09:48 GMT
x-envoy-upstream-service-time: 1347

# curl -I http://10.1.1.115:32000/productpage
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 5719
server: envoy
date: Tue, 24 Apr 2018 01:09:59 GMT
x-envoy-upstream-service-time: 72
@vhosakot

Hi Daneyon,

I saw the same errors below in istio master when I ran make e2e_simple E2E_ARGS="--auth_enable --use_local_cluster --istioctl /usr/local/bin/istioctl":

error	Failed to deploy Istio.
error	Failed to complete Init. Error context deadline exceeded

Using the release-0.8 git branch and 0.8.0 as the TAG resolved it for me:

git clone https://github.com/istio/istio.git
git checkout release-0.8
export TAG=0.8.0
make e2e_simple E2E_ARGS="--auth_enable --use_local_cluster --istioctl /usr/local/bin/istioctl"
