Successful curl from the busybox-curl pod:
[2019-01-08T18:00:46.389Z] "GET /html HTTP/1.1" 200 - 0 3741 15 15 "-" "curl/7.30.0" "357135f7-82a5-4c96-bb6e-cfa7a029de01" "httpbin.default.global:8000" "198.51.101.2:15443" outbound|8000||httpbin.default.global - 127.255.0.2:8000 127.0.0.1:41540 -
Failed curl from the sleep pod:
[2019-01-08T17:59:01.849Z] "GET /html HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.60.0" "d38eb3dc-c7ae-4768-92ff-31058ec376a3" "httpbin.default.global:8000" "-" - - 127.255.0.2:8000 127.0.0.1:39018 -
RDS is not SYNCED for the sleep proxy:
$ istioctl proxy-status
NAME CDS LDS EDS RDS PILOT VERSION
busybox-curl-78cf4d5bcc-mxbf8.default SYNCED SYNCED SYNCED (50%) SYNCED istio-pilot-68658b6cd4-lg6nv 1.1.0
<SNIP>
sleep-57f9d6fd6b-gm4bw.default SYNCED SYNCED SYNCED (50%) NOT SENT istio-pilot-68658b6cd4-lg6nv 1.1.0
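To compare what each sidecar actually received for routes, the route configuration can be dumped directly; a diagnostic sketch using the pod names above (same plural command style as the proxy-config calls later in this post):
$ istioctl proxy-config routes sleep-57f9d6fd6b-gm4bw.default
$ istioctl proxy-config routes busybox-curl-78cf4d5bcc-mxbf8.default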
I am attempting to curl the httpbin app in cluster2 from the sleep app in cluster1, but I receive the following error message:
$ kubectl exec -it $SLEEP_POD1 -c sleep curl http://httpbin.default.global:8000/html
curl: (52) Empty reply from server
command terminated with exit code 52
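For reference, the access-log lines quoted at the top come from each pod's istio-proxy container; assuming accessLogFile is set to /dev/stdout (as in the demo profiles), they can be tailed with:
$ kubectl logs $SLEEP_POD1 -c istio-proxy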
httpbin.default.global resolves in cluster1:
$ kubectl exec -ti busybox -- nslookup httpbin.default.global
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: httpbin.default.global
Address 1: 127.255.0.2
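As a sanity check, istiocoredns can also be queried directly, bypassing kube-dns; here $ISTIOCOREDNS_IP is a placeholder for the ClusterIP of the istiocoredns service in cluster1:
$ kubectl exec -ti busybox -- nslookup httpbin.default.global $ISTIOCOREDNS_IP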
istio-coredns logs in cluster1:
$ kubectl logs istiocoredns-656f4ccc7f-rbxnf -c istio-coredns-plugin -n istio-system
<SNIP>
2019-01-03T23:19:04.542306Z info Have 1 service entries
2019-01-03T23:19:04.542372Z info adding DNS mapping: httpbin.default.global.->[127.255.0.2]
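For completeness, .global resolution was wired into kube-dns with a stub domain per the multicluster gateway setup; a minimal sketch of that ConfigMap (the istiocoredns ClusterIP again shown as a placeholder):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$ISTIOCOREDNS_IP"]}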
I see the same citadel pod error messages in cluster1 and cluster2:
<SNIP>
2019-01-03T22:11:50.093846Z error istio.io/istio/security/pkg/k8s/controller/workloadsecret.go:186: Failed to list *v1.ServiceAccount: Get https://10.96.0.1:443/api/v1/serviceaccounts?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
2019-01-03T22:11:50.093977Z error istio.io/istio/security/pkg/k8s/controller/workloadsecret.go:180: Failed to list *v1.Secret: Get https://10.96.0.1:443/api/v1/secrets?fieldSelector=type%3Distio.io%2Fkey-and-cert&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
2019-01-03T22:32:13.649767Z error istio.io/istio/security/pkg/registry/kube/service.go:76: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
2019-01-03T22:32:23.926494Z error istio.io/istio/security/pkg/k8s/controller/workloadsecret.go:186: Failed to list *v1.ServiceAccount: Get https://10.96.0.1:443/api/v1/serviceaccounts?limit=500&resourceVersion=0: net/http: TLS handshake timeout
2019-01-03T22:32:23.926690Z error istio.io/istio/security/pkg/registry/kube/serviceaccount.go:74: Failed to list *v1.ServiceAccount: Get https://10.96.0.1:443/api/v1/serviceaccounts?limit=500&resourceVersion=0: net/http: TLS handshake timeout
2019-01-03T22:32:23.975584Z error istio.io/istio/security/pkg/k8s/controller/workloadsecret.go:180: Failed to list *v1.Secret: Get https://10.96.0.1:443/api/v1/secrets?fieldSelector=type%3Distio.io%2Fkey-and-cert&limit=500&resourceVersion=0: net/http: TLS handshake timeout
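These look like apiserver connectivity failures from inside the pod network. A rough reachability check toward the same 10.96.0.1:443 endpoint (the request will likely return 401 without a token, but a completed TLS handshake is the interesting part):
$ kubectl exec -it $SLEEP_POD1 -c sleep -- curl -sk https://10.96.0.1:443/healthz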
Citadel in each cluster uses the sample root certs, plugged in via the cacerts secret:
$ kubectl get secret/cacerts -n istio-system
NAME TYPE DATA AGE
cacerts Opaque 4 19h
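To confirm the two clusters really share the same root, the root cert in each cacerts secret can be decoded and compared (run once per kubectl context):
$ kubectl get secret cacerts -n istio-system -o jsonpath='{.data.root-cert\.pem}' | base64 --decode | openssl x509 -noout -subject -enddate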
$ kubectl get po/istio-citadel-66bdf86f59-l6x4b -n istio-system -o yaml
<SNIP>
  containers:
  - args:
    - --append-dns-names=true
    - --grpc-port=8060
    - --grpc-hostname=citadel
    - --citadel-storage-namespace=istio-system
    - --custom-dns-names=istio-pilot-service-account.istio-system:istio-pilot.istio-system
    - --self-signed-ca=false
    - --signing-cert=/etc/cacerts/ca-cert.pem
    - --signing-key=/etc/cacerts/ca-key.pem
    - --root-cert=/etc/cacerts/root-cert.pem
    - --cert-chain=/etc/cacerts/cert-chain.pem
    - --trust-domain=cluster.local
    image: docker.io/istio/citadel:1.1.0-snapshot.4
    imagePullPolicy: IfNotPresent
    name: citadel
    resources:
      requests:
        cpu: 10m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/cacerts
      name: cacerts
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: istio-citadel-service-account-token-sbcwn
      readOnly: true
  dnsPolicy: ClusterFirst
<SNIP>
  volumes:
  - name: cacerts
    secret:
      defaultMode: 420
      optional: true
      secretName: cacerts
  - name: istio-citadel-service-account-token-sbcwn
    secret:
      defaultMode: 420
      secretName: istio-citadel-service-account-token-sbcwn
<SNIP>
The httpbin.default.global clusters exist on the sleep proxy pod of cluster1:
$ istioctl proxy-config cluster sleep-57f9d6fd6b-2jv74.default | grep httpbin
httpbin.default.global 8000 - outbound STRICT_DNS
httpbin.default.global 9999 - outbound STRICT_DNS
<SNIP>
The httpbin.default.global endpoints exist on the sleep proxy pod of cluster1:
$ istioctl proxy-config endpoints sleep-57f9d6fd6b-2jv74 | grep httpbin
172.17.0.3:15443 HEALTHY outbound|8000||httpbin.default.global
172.17.0.3:15443 HEALTHY outbound|9999||httpbin.default.global
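Since the endpoint resolves to the cluster2 node at 172.17.0.3:15443, raw TCP reachability from the sleep pod can be probed with curl's telnet scheme (a sketch; output varies by curl build):
$ kubectl exec -it $SLEEP_POD1 -c sleep -- curl -v telnet://172.17.0.3:15443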
The cluster2 ingress-gateway does not see the httpbin.default.global cluster:
$ istioctl proxy-config cluster istio-ingressgateway-6c69b488db-9nj8h -n istio-system | grep httpbin
httpbin.default.svc.cluster.local 8000 - outbound EDS
outbound_.8000_._.httpbin.default.svc.cluster.local - - - EDS
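Whether that matters may depend on how port 15443 is served; the gateway's 15443 listener can be inspected directly (a diagnostic sketch, run against cluster2):
$ istioctl proxy-config listeners istio-ingressgateway-6c69b488db-9nj8h -n istio-system --port 15443 -o json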
The cluster1 ServiceEntry for httpbin running in cluster2 (127.255.0.2 is an otherwise-unused address and 172.17.0.3 is the node IP of cluster2):
$ kubectl get serviceentries -o yaml
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1alpha3
  kind: ServiceEntry
  metadata:
    creationTimestamp: 2019-01-03T19:57:15Z
    generation: 1
    name: httpbin-default
    namespace: default
    resourceVersion: "22978"
    selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/serviceentries/httpbin-default
    uid: c4895b3c-0f91-11e9-832b-02425af88c99
  spec:
    addresses:
    - 127.255.0.2
    endpoints:
    - address: 172.17.0.3
      ports:
        http1: 15443
        tcp2: 15443
    hosts:
    - httpbin.default.global
    location: MESH_INTERNAL
    ports:
    - name: http1
      number: 8000
      protocol: http
    - name: tcp2
      number: 9999
      protocol: tcp
    resolution: DNS
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
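To confirm Pilot translated this ServiceEntry into the proxy config shown earlier, the generated cluster can be filtered by FQDN (a sketch):
$ istioctl proxy-config cluster sleep-57f9d6fd6b-2jv74.default --fqdn httpbin.default.global -o json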
cluster2 node:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-cluster2-control-plane Ready master 20h v1.12.3 172.17.0.3 <none> Ubuntu 18.04.1 LTS 4.9.93-linuxkit-aufs docker://18.6.1
cluster2 Istio control-plane pods and services. Note: kind does not support cloud load balancers, so the ingress-gateway is accessed through the node's IP:
$ kubectl get po,svc -n istio-system
NAME READY STATUS RESTARTS AGE
pod/istio-citadel-66bdf86f59-qqjkl 1/1 Running 0 6h9m
pod/istio-cleanup-secrets-1.1.0-snapshot.4-9kwhh 0/1 Completed 0 6h10m
pod/istio-egressgateway-647d7c76c7-sld8q 1/1 Running 0 6h10m
pod/istio-galley-57b575bb97-6r7lb 1/1 Running 0 6h10m
pod/istio-ingressgateway-6c69b488db-9nj8h 1/1 Running 0 6h10m
pod/istio-pilot-68658b6cd4-l728j 2/2 Running 0 6h9m
pod/istio-policy-67d8cbf8df-jmxjt 2/2 Running 4 6h9m
pod/istio-security-post-install-1.1.0-snapshot.4-96v7w 0/1 Completed 0 6h10m
pod/istio-sidecar-injector-799c898f8d-zlppw 1/1 Running 0 6h9m
pod/istio-telemetry-57b9fbbb76-n48hs 2/2 Running 4 6h9m
pod/istiocoredns-656f4ccc7f-7pjj6 2/2 Running 0 6h10m
pod/prometheus-7fc796f469-wvkwc 1/1 Running 0 6h9m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/istio-citadel ClusterIP 10.98.115.221 <none> 8060/TCP,9093/TCP 6h10m
service/istio-egressgateway ClusterIP 10.100.36.179 <none> 80/TCP,443/TCP,15443/TCP 6h10m
service/istio-galley ClusterIP 10.109.136.160 <none> 443/TCP,9093/TCP,9901/TCP 6h10m
service/istio-ingressgateway LoadBalancer 10.102.175.111 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31661/TCP,15030:30685/TCP,15031:32334/TCP,15032:31324/TCP,15443:30730/TCP 6h10m
service/istio-pilot ClusterIP 10.100.249.246 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 6h10m
service/istio-policy ClusterIP 10.111.90.82 <none> 9091/TCP,15004/TCP,9093/TCP 6h10m
service/istio-sidecar-injector ClusterIP 10.111.169.106 <none> 443/TCP 6h10m
service/istio-telemetry ClusterIP 10.104.201.204 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 6h10m
service/istiocoredns ClusterIP 10.98.180.29 <none> 53/UDP,53/TCP 6h10m
service/prometheus ClusterIP 10.102.30.195 <none> 9090/TCP 6h10m
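With EXTERNAL-IP stuck at <pending>, one way to derive a gateway address is node IP plus the nodePort mapped to 15443 (30730 above); the port can be extracted with:
$ kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'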
httpbin app details:
$ kubectl get svc,po,ep | grep httpbin
service/httpbin ClusterIP 10.111.161.218 <none> 8000/TCP 3h52m
pod/httpbin-5fc7cf895d-5ld2r 2/2 Running 0 3h52m
endpoints/httpbin 10.32.0.14:80 3h52m
sleep app details:
$ kubectl get svc,po,ep | grep sleep
service/sleep ClusterIP 10.100.49.21 <none> 80/TCP 5h3m
pod/sleep-57f9d6fd6b-2jv74 2/2 Running 0 5h3m
endpoints/sleep 10.32.0.14:80 5h3m