CT1631: SEC-509-VALIDATE-2: Validate X.509 certificates

Introduction

Istio consists of several services that communicate with one another to form the service-mesh control plane. The Istio data plane consists of a proxy mesh, where a proxy is deployed as a sidecar alongside each end-user application service. Mutual TLS (mTLS) can be used both to secure end-user services running within the mesh and to secure Istio control-plane communication.

This document provides instructions for testing the X.509 certificate validation performed by Istio proxies. The httpbin and sleep sample applications are used to perform the certificate validation tests.

Prerequisites

  1. Access to a v1.9 or newer Kubernetes cluster for running Istio.
  2. A host with openssl, kubectl, and helm installed, plus a kubeconfig credential file for accessing the Kubernetes cluster where Istio will run.
  3. Install Istio v1.0 using Helm (a combined install command covering steps 3-5 is sketched after this list).
  4. Customize the installation to enable control-plane mTLS by setting the following Helm parameter:
    --set global.controlPlaneSecurityEnabled=true
    
  5. Customize the installation to enable data-plane mTLS by setting the following Helm parameter:
    --set global.mtls.enabled=true
    
  6. Deploy the sample applications used for testing:
    $ kubectl apply -f samples/sleep/sleep.yaml
    $ kubectl apply -f samples/httpbin/httpbin.yaml
    
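For reference, steps 3-5 can be combined into a single Helm command. This is only a sketch; it assumes an Istio 1.0 release directory layout and a Helm/Tiller installation, so adjust the chart path and values for your environment:

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set global.controlPlaneSecurityEnabled=true \
    --set global.mtls.enabled=true

If automatic sidecar injection is not enabled for the namespace used in step 6, the sample applications can be injected manually, for example:

$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)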

You can verify mTLS is enabled by viewing the istio configmap details:

$ kubectl get cm/istio -n istio-system -o yaml | grep MUTUAL_TLS
    authPolicy: MUTUAL_TLS
      controlPlaneAuthPolicy: MUTUAL_TLS
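
You can also inspect the mesh-wide authentication policy, if one was created. This check assumes the Istio 1.0 chart creates a MeshPolicy named default when global.mtls.enabled=true:

$ kubectl get meshpolicy default -o yaml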

Istio X.509 Certificate Validation

To verify that proxy connections fail when certificates have expired, update the default Citadel deployment to include the following flags:

--workload-cert-ttl=3m
--max-workload-cert-ttl=3m

Update the Citadel deployment so that the Citadel pod restarts with the new --workload-cert-ttl and --max-workload-cert-ttl flags.
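
One way to do this is to edit the deployment in place. The fragment below is a sketch showing only the two flags being added or changed under the Citadel container's args; leave the deployment's other args as-is:

$ kubectl -n istio-system edit deploy/istio-citadel

        args:
        - --workload-cert-ttl=3m
        - --max-workload-cert-ttl=3m

Saving the edit triggers a rollout, so the Citadel pod restarts with the new TTLs.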

Delete the existing secret containing the certificates used by application pods:

$ kubectl delete secret/istio.default 
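
After a few seconds, the secret should be listed again with a fresh age (an optional sanity check):

$ kubectl get secret istio.default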

Citadel will create a new secret containing a certificate chain with a 3-minute validity period (the default is 90 days). You can verify this with the following:

$ kubectl exec httpbin-77647f7b59-ttbx7 -c istio-proxy curl http://127.0.0.1:15000/certs
{
  "ca_cert": "Certificate Path: /etc/certs/root-cert.pem, Serial Number: 426f4b1a053839cbf9f7ef7409591a86, Days until Expiration: 363",
  "cert_chain": "Certificate Path: /etc/certs/cert-chain.pem, Serial Number: 3ff8dae76f834f2d1f4c746ede1a0a5a, Days until Expiration: 0"
}

Now that the proxies are using a short-lived certificate, remove the Citadel deployment so the proxies are unable to receive a new certificate. This causes the proxies to present an expired certificate when performing a TLS handshake. (If you want to restore Citadel later without re-running Helm, export its deployment manifest first; a sketch of that step follows the delete command.) Delete the deployment:

$ kubectl delete deploy/istio-citadel -n istio-system
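
The export step, run before the delete above if you want an easy restore path, might look like this (a sketch; citadel.yaml is an arbitrary filename, and --export strips cluster-specific fields so the manifest can be cleanly re-applied later):

$ kubectl get deploy/istio-citadel -n istio-system -o yaml --export > citadel.yaml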

Verify that no connection attempts have been made to the httpbin proxy:

$ kubectl exec httpbin-77647f7b59-ttbx7 -c istio-proxy curl http://127.0.0.1:15000/stats
<SNIP>
http.192.168.1.186_8000.downstream_cx_active: 0
http.192.168.1.186_8000.downstream_cx_destroy: 0
http.192.168.1.186_8000.downstream_cx_destroy_active_rq: 0
http.192.168.1.186_8000.downstream_cx_destroy_local: 0
http.192.168.1.186_8000.downstream_cx_destroy_local_active_rq: 0
http.192.168.1.186_8000.downstream_cx_destroy_remote: 0
http.192.168.1.186_8000.downstream_cx_destroy_remote_active_rq: 0
http.192.168.1.186_8000.downstream_cx_drain_close: 0
http.192.168.1.186_8000.downstream_cx_http1_active: 0
http.192.168.1.186_8000.downstream_cx_http1_total: 0
http.192.168.1.186_8000.downstream_cx_http2_active: 0
http.192.168.1.186_8000.downstream_cx_http2_total: 0
http.192.168.1.186_8000.downstream_cx_idle_timeout: 0
http.192.168.1.186_8000.downstream_cx_protocol_error: 0
http.192.168.1.186_8000.downstream_cx_rx_bytes_buffered: 0
http.192.168.1.186_8000.downstream_cx_rx_bytes_total: 0
http.192.168.1.186_8000.downstream_cx_ssl_active: 0
http.192.168.1.186_8000.downstream_cx_ssl_total: 0
http.192.168.1.186_8000.downstream_cx_total: 0
http.192.168.1.186_8000.downstream_cx_tx_bytes_buffered: 0
http.192.168.1.186_8000.downstream_cx_tx_bytes_total: 0
http.192.168.1.186_8000.downstream_cx_websocket_active: 0
http.192.168.1.186_8000.downstream_cx_websocket_total: 0
http.192.168.1.186_8000.downstream_flow_control_paused_reading_total: 0
http.192.168.1.186_8000.downstream_flow_control_resumed_reading_total: 0
http.192.168.1.186_8000.downstream_rq_1xx: 0
http.192.168.1.186_8000.downstream_rq_2xx: 0
http.192.168.1.186_8000.downstream_rq_3xx: 0
http.192.168.1.186_8000.downstream_rq_4xx: 0
http.192.168.1.186_8000.downstream_rq_5xx: 0
http.192.168.1.186_8000.downstream_rq_active: 0
http.192.168.1.186_8000.downstream_rq_http1_total: 0
http.192.168.1.186_8000.downstream_rq_http2_total: 0
http.192.168.1.186_8000.downstream_rq_idle_timeout: 0
http.192.168.1.186_8000.downstream_rq_non_relative_path: 0
http.192.168.1.186_8000.downstream_rq_response_before_rq_complete: 0
http.192.168.1.186_8000.downstream_rq_rx_reset: 0
http.192.168.1.186_8000.downstream_rq_too_large: 0
http.192.168.1.186_8000.downstream_rq_total: 0
http.192.168.1.186_8000.downstream_rq_tx_reset: 0
http.192.168.1.186_8000.downstream_rq_ws_on_non_ws_route: 0
http.192.168.1.186_8000.fault.aborts_injected: 0
http.192.168.1.186_8000.fault.delays_injected: 0
http.192.168.1.186_8000.no_cluster: 0
http.192.168.1.186_8000.no_route: 0
http.192.168.1.186_8000.rq_direct_response: 0
http.192.168.1.186_8000.rq_redirect: 0
http.192.168.1.186_8000.rq_total: 0
http.192.168.1.186_8000.rs_too_large: 0
http.192.168.1.186_8000.tracing.client_enabled: 0
http.192.168.1.186_8000.tracing.health_check: 0
http.192.168.1.186_8000.tracing.not_traceable: 0
http.192.168.1.186_8000.tracing.random_sampling: 0
http.192.168.1.186_8000.tracing.service_forced: 0
<SNIP>
listener.192.168.1.186_8000.downstream_cx_active: 0
listener.192.168.1.186_8000.downstream_cx_destroy: 0
listener.192.168.1.186_8000.downstream_cx_total: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_1xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_2xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_3xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_4xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_5xx: 0
listener.192.168.1.186_8000.no_filter_chain_match: 0
listener.192.168.1.186_8000.ssl.connection_error: 0
listener.192.168.1.186_8000.ssl.fail_verify_cert_hash: 0
listener.192.168.1.186_8000.ssl.fail_verify_error: 0
listener.192.168.1.186_8000.ssl.fail_verify_no_cert: 0
listener.192.168.1.186_8000.ssl.fail_verify_san: 0
listener.192.168.1.186_8000.ssl.handshake: 0
listener.192.168.1.186_8000.ssl.no_certificate: 0
listener.192.168.1.186_8000.ssl.session_reused: 0
<SNIP>
http.192.168.1.186_8000.downstream_cx_length_ms: No recorded values
http.192.168.1.186_8000.downstream_rq_time: No recorded values
<SNIP>
listener.192.168.1.186_8000.downstream_cx_length_ms: No recorded values
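
Rather than scanning the full stats output, you can filter for just the counters of interest, for example:

$ kubectl exec httpbin-77647f7b59-ttbx7 -c istio-proxy -- curl -s http://127.0.0.1:15000/stats | grep -E '8000.downstream_cx_total|8000.ssl'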

Wait for the --max-workload-cert-ttl period (3 minutes) to elapse and verify that the certificate has expired:

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout  | grep Validity -A 2
        Validity
            Not Before: Aug  9 16:16:12 2018 GMT
            Not After : Aug  9 16:19:12 2018 GMT

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- date
Thu Aug  9 16:25:46 UTC 2018
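
For a scripted check, openssl can report expiry directly; with a window of 0 seconds it exits non-zero and prints a warning when the certificate has already expired:

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -noout -checkend 0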

Since the certificate has expired, attempt a connection; you should receive an HTTP 503 error:

$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl httpbin:8000/headers -o /dev/null -s -w '%{http_code}\n'
503

Recheck the httpbin proxy stats. You should see no active connections (cx), even though a connection attempt was made:

$ kubectl exec httpbin-77647f7b59-ttbx7 -c istio-proxy curl http://127.0.0.1:15000/stats
<SNIP>
http.192.168.1.186_8000.downstream_cx_destroy: 1
http.192.168.1.186_8000.downstream_cx_destroy_active_rq: 0
http.192.168.1.186_8000.downstream_cx_destroy_local: 0
http.192.168.1.186_8000.downstream_cx_destroy_local_active_rq: 0
http.192.168.1.186_8000.downstream_cx_destroy_remote: 1
http.192.168.1.186_8000.downstream_cx_destroy_remote_active_rq: 0
http.192.168.1.186_8000.downstream_cx_drain_close: 0
http.192.168.1.186_8000.downstream_cx_http1_active: 0
http.192.168.1.186_8000.downstream_cx_http1_total: 0
http.192.168.1.186_8000.downstream_cx_http2_active: 0
http.192.168.1.186_8000.downstream_cx_http2_total: 0
http.192.168.1.186_8000.downstream_cx_idle_timeout: 0
http.192.168.1.186_8000.downstream_cx_protocol_error: 0
http.192.168.1.186_8000.downstream_cx_rx_bytes_buffered: 0
http.192.168.1.186_8000.downstream_cx_rx_bytes_total: 0
http.192.168.1.186_8000.downstream_cx_ssl_active: 0
http.192.168.1.186_8000.downstream_cx_ssl_total: 1
http.192.168.1.186_8000.downstream_cx_total: 1
<SNIP>
listener.192.168.1.186_8000.downstream_cx_active: 0
listener.192.168.1.186_8000.downstream_cx_destroy: 1
listener.192.168.1.186_8000.downstream_cx_total: 1
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_1xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_2xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_3xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_4xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_5xx: 0
listener.192.168.1.186_8000.no_filter_chain_match: 0
listener.192.168.1.186_8000.ssl.connection_error: 1
<SNIP>
http.192.168.1.186_8000.downstream_cx_length_ms: P0(nan,3) P25(nan,3.025) P50(nan,3.05) P75(nan,3.075) P90(nan,3.09) P95(nan,3.095) P99(nan,3.099) P99.9(nan,3.0999) P100(nan,3.1)
http.192.168.1.186_8000.downstream_rq_time: No recorded values
<SNIP>

Re-enable Citadel, delete the secret containing the expired certificates, and verify that the proxy received a new, valid certificate. The verification commands and output follow the restore sketch below.
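
A minimal sketch of the restore, assuming you exported the Citadel deployment to citadel.yaml before deleting it (since --export stripped the namespace, specify it on apply):

$ kubectl apply -n istio-system -f citadel.yaml
$ kubectl delete secret/istio.default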

$ kubectl exec httpbin-77647f7b59-ttbx7 -c istio-proxy curl http://127.0.0.1:15000/certs
{
  "ca_cert": "Certificate Path: /etc/certs/root-cert.pem, Serial Number: 426f4b1a053839cbf9f7ef7409591a86, Days until Expiration: 363",
  "cert_chain": "Certificate Path: /etc/certs/cert-chain.pem, Serial Number: e01b9bb848de46e9ecbfa99612f21623, Days until Expiration: 0"
}

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout  | grep Validity -A 2
        Validity
            Not Before: Aug  9 16:52:39 2018 GMT
            Not After : Aug  9 16:55:39 2018 GMT

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- date
Thu Aug  9 16:52:46 UTC 2018

Now that the sleep and httpbin proxies are using valid certificates, test connectivity again. This time you should receive an HTTP 200 status code:

$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl httpbin:8000/headers -o /dev/null -s -w '%{http_code}\n'
200

Recheck the httpbin proxy stats. You should now observe an active connection (cx), along with sent and received byte counts indicating that data passed over the connection:

$ kubectl exec httpbin-77647f7b59-ttbx7 -c istio-proxy curl http://127.0.0.1:15000/stats
<SNIP>
http.192.168.1.186_8000.downstream_cx_active: 1
http.192.168.1.186_8000.downstream_cx_destroy: 1
http.192.168.1.186_8000.downstream_cx_destroy_active_rq: 0
http.192.168.1.186_8000.downstream_cx_destroy_local: 0
http.192.168.1.186_8000.downstream_cx_destroy_local_active_rq: 0
http.192.168.1.186_8000.downstream_cx_destroy_remote: 1
http.192.168.1.186_8000.downstream_cx_destroy_remote_active_rq: 0
http.192.168.1.186_8000.downstream_cx_drain_close: 0
http.192.168.1.186_8000.downstream_cx_http1_active: 1
http.192.168.1.186_8000.downstream_cx_http1_total: 1
http.192.168.1.186_8000.downstream_cx_http2_active: 0
http.192.168.1.186_8000.downstream_cx_http2_total: 0
http.192.168.1.186_8000.downstream_cx_idle_timeout: 0
http.192.168.1.186_8000.downstream_cx_protocol_error: 0
http.192.168.1.186_8000.downstream_cx_rx_bytes_buffered: 796
http.192.168.1.186_8000.downstream_cx_rx_bytes_total: 796
http.192.168.1.186_8000.downstream_cx_ssl_active: 1
http.192.168.1.186_8000.downstream_cx_ssl_total: 2
http.192.168.1.186_8000.downstream_cx_total: 2
http.192.168.1.186_8000.downstream_cx_tx_bytes_buffered: 0
http.192.168.1.186_8000.downstream_cx_tx_bytes_total: 533
http.192.168.1.186_8000.downstream_cx_websocket_active: 0
http.192.168.1.186_8000.downstream_cx_websocket_total: 0
http.192.168.1.186_8000.downstream_flow_control_paused_reading_total: 0
http.192.168.1.186_8000.downstream_flow_control_resumed_reading_total: 0
http.192.168.1.186_8000.downstream_rq_1xx: 0
http.192.168.1.186_8000.downstream_rq_2xx: 1
http.192.168.1.186_8000.downstream_rq_3xx: 0
http.192.168.1.186_8000.downstream_rq_4xx: 0
http.192.168.1.186_8000.downstream_rq_5xx: 0
http.192.168.1.186_8000.downstream_rq_active: 0
http.192.168.1.186_8000.downstream_rq_http1_total: 1
http.192.168.1.186_8000.downstream_rq_http2_total: 0
http.192.168.1.186_8000.downstream_rq_idle_timeout: 0
http.192.168.1.186_8000.downstream_rq_non_relative_path: 0
http.192.168.1.186_8000.downstream_rq_response_before_rq_complete: 0
http.192.168.1.186_8000.downstream_rq_rx_reset: 0
http.192.168.1.186_8000.downstream_rq_too_large: 0
http.192.168.1.186_8000.downstream_rq_total: 1
http.192.168.1.186_8000.downstream_rq_tx_reset: 0
http.192.168.1.186_8000.downstream_rq_ws_on_non_ws_route: 0
http.192.168.1.186_8000.fault.aborts_injected: 0
http.192.168.1.186_8000.fault.delays_injected: 0
http.192.168.1.186_8000.no_cluster: 0
http.192.168.1.186_8000.no_route: 0
http.192.168.1.186_8000.rq_direct_response: 0
http.192.168.1.186_8000.rq_redirect: 0
http.192.168.1.186_8000.rq_total: 1
http.192.168.1.186_8000.rs_too_large: 0
http.192.168.1.186_8000.tracing.client_enabled: 0
http.192.168.1.186_8000.tracing.health_check: 0
http.192.168.1.186_8000.tracing.not_traceable: 0
http.192.168.1.186_8000.tracing.random_sampling: 1
http.192.168.1.186_8000.tracing.service_forced: 0
<SNIP>
listener.192.168.1.186_8000.downstream_cx_active: 1
listener.192.168.1.186_8000.downstream_cx_destroy: 1
listener.192.168.1.186_8000.downstream_cx_total: 2
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_1xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_2xx: 1
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_3xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_4xx: 0
listener.192.168.1.186_8000.http.192.168.1.186_8000.downstream_rq_5xx: 0
listener.192.168.1.186_8000.no_filter_chain_match: 0
listener.192.168.1.186_8000.ssl.connection_error: 1
listener.192.168.1.186_8000.ssl.fail_verify_cert_hash: 0
listener.192.168.1.186_8000.ssl.fail_verify_error: 0
listener.192.168.1.186_8000.ssl.fail_verify_no_cert: 0
listener.192.168.1.186_8000.ssl.fail_verify_san: 0
listener.192.168.1.186_8000.ssl.handshake: 1
listener.192.168.1.186_8000.ssl.no_certificate: 0
listener.192.168.1.186_8000.ssl.session_reused: 0
<SNIP>
http.192.168.1.186_8000.downstream_cx_length_ms: No recorded values
http.192.168.1.186_8000.downstream_rq_time: No recorded values
<SNIP>