Istio Workshop
==============
Lawrence @ Solo.io
Jad Sadek @ Solo.io
Hiring at Solo.io
At Solo we're building the Gloo platform
Gloo mesh, edge, portal, extensions
Tests will be sent out toward the end of the workshop
Requires >=80% correct.
Retake is allowed (with a new email). Expect badge within a few weeks.
Challenges with microservices
How do you route traffic intelligently
Service mesh is a programmable interface
Netflix in the early days open sourced their software stack that was used to solve these problems inherent to microservices.
Was written in Java, which is great only if you're an all-Java shop.
Service discovery
Resiliency
Kubernetes has become the orchestration default in many organizations.
Service mesh is meant to solve things like
Service discovery
Secure service-to-service communication
Traffic control / shaping
API / programmable
...
Proxy in a pod gets its configuration from the K8s control plane.
Service mesh ecosystem has a lot of players now, including
Kong
Envoy proxy (open source)
Linkerd
Istio (open source as well)
... many others
Istio has a very large open source community.
https://github.com/istio
Istio arch
Sidecar proxy is injected into a pod
Mesh traffic travels between proxies, which are loaded as sidecars to actual service containers.
istiod
Lives in the control plane. Consolidates what used to be separate components:
Pilot
Galley
Citadel - the certificate piece of the control plane.
Sidecar-injector
As you define configuration via CRDs, istiod translates it into Envoy configuration for the proxies.
Istio also has a gateway config.
You can have an Envoy proxy that acts as a gateway for traffic entering and leaving the mesh of pods (and the proxies therein).
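As a sketch, a minimal HTTP Gateway resource looks like the following (resource name is hypothetical; the host is borrowed from the labs below, and Lab 2 shows the workshop's HTTPS version):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway        # hypothetical name
  namespace: istioinaction
spec:
  selector:
    istio: ingressgateway      # selects the default ingress gateway deployment's pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "istioinaction.io"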
Enterprise Istio Production Support (Gloo Mesh)
Istio common adoption patterns - layout of this talk
Gateway Only (Lab 2)
Mesh Observability (Lab 3)
Secure Mesh (Lab 4)
Control Traffic (Lab 5)
We can route our ingress like you would with an Ingress Controller, but in this case we'll be using Istio's ingress controller.
Using Instruqt to deliver the environments
Provisioning a bunch of VMs on their infrastructure.
We'll be given a virtual terminal to a K8s cluster.
Pretty cool! I used this with
Lab 1:
======
Pre-check
Methods for installing Istio.
Istio operator
Helm
Istioctl
K8s token for dashboard: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9zem1iQjZ6VzhrMm51M3VfUFVjdzVuZU1RQlgyZFlJNGZfTXBuSzBrLTAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLW5zODZ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYmRhY2ZkZi01NzZlLTQyMDMtOTVkZi1mZDZlMjJlZWFmMzAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.OhR4SRSLp7zU2yW3PtRHTy44H2xjDKcIrWsFAR_eqjyHai43yyZdGFfMSGTjQy729U01jEHqLR8L1oVUiY5EM7NAG0E-dX1U2CSDaWr0nJEaIIXzn-7eKMJhLqPOKNaIf-nb0kU_MvGzDd0sAgs5L4oTJ1BPS9burOTN-KPs8oYmXWfENnFv8fONMbxYb5-u3oQ_Kr5t1-19c1xzyvZfxgvgZUdXOZAQF91CwtsvXfa0NrXvBYqfL73wRFVnOgRtAj_gDZBMua1FMnqmDYfC_Rf-Tsj2qePsjVqA5Wwi8EqPOve6H3aUYMAT66TeKpMg3VhEIohF8nfJ9BoUVRaS2A
root@kubernetes:~/istio-1.12.1# istioctl version
no running Istio pods in "istio-system"
1.12.1
root@kubernetes:~/istio-1.12.1# ln -s $(pwd)/bin/istioctl /usr/bin/istioctl
root@kubernetes:~/istio-1.12.1# cd ../
root@kubernetes:~# istioctl
Istio configuration command line utility for service operators to
...
There are different default profiles you can install on a cluster -
root@kubernetes:~# istioctl profile list
Istio configuration profiles:
    default
    demo
    empty
    external
    minimal
    openshift
    preview
    remote
So for example, if you use OpenShift, there's a default profile for it.
We're using the demo profile.
# istioctl install --set profile=demo
This will install the Istio 1.12.1 demo profile with ["Istio core" "Istiod" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
...
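The same install can also be written declaratively as an IstioOperator resource and applied with istioctl install -f <file>; a minimal sketch, assuming the demo profile used above (resource name hypothetical):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: demo-install           # hypothetical name
  namespace: istio-system
spec:
  profile: demo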
# kubectl get all,cm,secrets,envoyfilters -n istio-system
The command above will show you all the items related to Istio that were installed in this new namespace.
The main way to interact with or configure Istio is via CRDs -
# kubectl get crds -n istio-system
NAME                                       CREATED AT
addons.k3s.cattle.io                       2022-01-25T18:22:35Z
helmcharts.helm.cattle.io                  2022-01-25T18:22:36Z
helmchartconfigs.helm.cattle.io            2022-01-25T18:22:36Z
authorizationpolicies.security.istio.io    2022-01-25T18:29:47Z
destinationrules.networking.istio.io       2022-01-25T18:29:47Z
envoyfilters.networking.istio.io           2022-01-25T18:29:47Z
gateways.networking.istio.io               2022-01-25T18:29:48Z
istiooperators.install.istio.io            2022-01-25T18:29:48Z
peerauthentications.security.istio.io      2022-01-25T18:29:48Z
requestauthentications.security.istio.io   2022-01-25T18:29:48Z
serviceentries.networking.istio.io         2022-01-25T18:29:48Z
sidecars.networking.istio.io               2022-01-25T18:29:48Z
telemetries.telemetry.istio.io             2022-01-25T18:29:48Z
virtualservices.networking.istio.io        2022-01-25T18:29:48Z
wasmplugins.extensions.istio.io            2022-01-25T18:29:48Z
workloadentries.networking.istio.io        2022-01-25T18:29:48Z
workloadgroups.networking.istio.io         2022-01-25T18:29:48Z
This allows us to use the K8s API to create and modify Istio virtual services.
Kiali is the Istio-specific dashboard/monitor you can use.
Jaeger is distributed tracing.
Istio will
Lab 2:
======
We'll be talking about gateways in general.
Ingress and egress gateways.
No sidecar proxies (no mesh)
Control inbound and outbound traffic.
On vanilla K8s, you can use NodePort, LoadBalancer, etc.
A flow could look like
Istio Ingress -> Service 1 -> Service 2 -> Istio Egress
Istio network resources include the Gateway itself, a virtual service, destination rule, service entry, and sidecar.
root@kubernetes:~/istio-workshops/istio-basics# istioctl proxy-config routes deploy/istio-ingressgateway.istio-system
NAME          DOMAINS            MATCH                  VIRTUAL SERVICE
http.8080     istioinaction.io   /*                     web-api-gw-vs.istioinaction
              *                  /stats/prometheus*
              *                  /healthz/ready*
So basically what we've done is given the Gateway kind/object the ability to route paths to a service inside the cluster.
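The web-api-gw-vs virtual service referenced above presumably looks something like this sketch (destination host and port inferred from the route output; the gateway name matches the Gateway shown further below):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api-gw-vs
  namespace: istioinaction
spec:
  hosts:
  - "istioinaction.io"
  gateways:
  - web-api-gateway            # binds these routes to the Gateway resource
  http:
  - route:
    - destination:
        host: web-api.istioinaction.svc.cluster.local
        port:
          number: 8080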
To secure traffic -
root@kubernetes:~/istio-workshops/istio-basics# kubectl create -n istio-system secret tls istioinaction-cert --key labs/02/certs/istioinaction.io.key --cert labs/02/certs/istioinaction.io.crt
secret/istioinaction-cert created
root@kubernetes:~/istio-workshops/istio-basics# k describe secret istioinaction-cert -n istio-system
Name:         istioinaction-cert
Namespace:    istio-system
Labels:       <none>
Annotations:  <none>
Type:  kubernetes.io/tls
Data
====
tls.crt:  1212 bytes
tls.key:  1679 bytes
Create a TLS secret with key and cert.
root@kubernetes:~/istio-workshops/istio-basics# cat labs/02/web-api-gw-https.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-api-gateway
  namespace: istioinaction
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "istioinaction.io"
    tls:
      mode: SIMPLE
      credentialName: istioinaction-cert
So here we've added a port with protocol HTTPS to a new gateway.
Lab 3:
======
So in Lab 2, we've gotten traffic into our cluster, exposing a service. And all we had to do was configure the Gateway object, leaving the underlying applications relatively unchanged.
Observability in-depth
Incrementally adding services to the mesh.
In order to deploy new services into the mesh, there isn't much we have to do:
Add a named service port for each service port.
Pod must have a service associated.
Label deployments with app and version.
matchLabels
Istio is very much service-oriented, so every pod should have a Service associated with it.
Don't run as UID 1337 - it shouldn't necessarily be a problem, but 1337 is the UID the proxy runs as in order to handle traffic redirection.
All network traffic needs to go through the Istio proxy, which uses iptables.
Do you have NET_ADMIN + NET_RAW privileges?
Istio needs these privileges to be able to modify iptables rules.
Again, in Lab 2, we handled the ingress flow. Now, we're going to add a proxy to each service.
This way, queries will be going through a proxy.
root@kubernetes:~/istio-workshops/istio-basics# kubectl label namespace istioinaction istio-injection=enabled
namespace/istioinaction labeled
This enables automatic sidecar injection.
Istio uses labels to enable features and automate things.
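Equivalently, the label can be set declaratively on the Namespace object itself (a sketch, same effect as the kubectl label command above):
apiVersion: v1
kind: Namespace
metadata:
  name: istioinaction
  labels:
    istio-injection: enabled   # istiod's mutating webhook injects sidecars into new pods here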
root@kubernetes:~/istio-workshops/istio-basics# kubectl get namespace -L istio-injection
NAME                   STATUS   AGE   ISTIO-INJECTION
default                Active   45m
kube-system            Active   45m
kube-public            Active   45m
kube-node-lease        Active   45m
kubernetes-dashboard   Active   45m
istio-system           Active   38m
istioinaction          Active   21m   enabled
Note that it's enabled.
There is a plugin, Istio CNI, that's available as a workaround if you cannot grant Istio the above network privileges in a locked-down environment.
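Enabling the CNI plugin happens at install time; a sketch of the IstioOperator setting, assuming an otherwise-default install:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    cni:
      enabled: true            # moves iptables setup out of per-pod init containers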
Now, if we go restart the service, it'll inject the proper container.
Containers:
  web-api:
    Container ID:   containerd://15729f96b8da626252572d74f8e6847cbc3535e7c347e14f08f14f46affa207d
    Image:          nicholasjackson/fake-service:v0.7.8
    Image ID:       docker.io/nicholasjackson/fake-service@sha256:614f71035f1adf4d94b189a3e0cc7b49fe783cf97b6b00b5e24f3c235f4ea14e
    Port:           8081/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 25 Jan 2022 19:10:24 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      LISTEN_ADDR:    0.0.0.0:8081
      UPSTREAM_URIS:  http://recommendation:8080
      NAME:           web-api
      MESSAGE:        Hello From Web API
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m2gt6 (ro)
  istio-proxy:
    Container ID:   containerd://6b46ccfb0b00e5276ba747fe9ec26ded576d4c17e1a12cc98b99715c2f2f8768
    Image:          docker.io/istio/proxyv2:1.12.1
    Image ID:       docker.io/istio/proxyv2@sha256:4704f04f399ae24d99e65170d1846dc83d7973f186656a03ba70d47bd1aba88f
    Port:           15090/TCP
    Host Port:      0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Tue, 25 Jan 2022 19:10:24 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:     10m
      memory:  40Mi
...
Now there are 2 containers - automatically. So this pod is part of the mesh now.
initContainers: responsible for handling the iptables configuration. It uses the same image, but just runs one command.
# kubectl exec deploy/web-api -c istio-proxy -n istioinaction -- /usr/local/bin/pilot-agent istio-iptables --help
This command (run via pilot-agent) sets up the redirection of traffic from the application container into the proxy container, which adds the application to the mesh.
The istio/proxyv2 image is handling the data plane.
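For reference, the injected istio-init container looks roughly like the snippet below (args abridged and version-dependent; the flags come from the istio-iptables --help output shown above):
initContainers:
- name: istio-init
  image: docker.io/istio/proxyv2:1.12.1   # same image as the sidecar, different entrypoint args
  args:
  - istio-iptables
  - -p
  - "15001"    # outbound traffic capture port
  - -z
  - "15006"    # inbound traffic capture port
  - -u
  - "1337"     # the proxy's UID, excluded from redirection to avoid loops
  - -m
  - REDIRECT
  securityContext:
    capabilities:
      add:
      - NET_ADMIN   # the privileges called out in the pre-check above
      - NET_RAW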
Restarting all the services now, we'll find that they all list 2/2 containers present: the sidecar (the Istio proxy, i.e. Envoy) is injected into each Pod, and the initContainers redirect all traffic through that proxy via iptables. All you have to do at this point is restart the Pods in the namespace (-n istioinaction) once the namespace is labeled/enabled.
Now you can also use Jaeger to trace requests, their duration, and the objects in the K8s cluster they hit (e.g.:
istio-ingressgateway.istio-system: web-api.istioinaction.svc.cluster.local:8080/*)
Sidecars are able to export this tracing data.
Lab 4:
======
Incrementally add secure communication throughout a cluster: mTLS.
Auto mTLS sets client security settings based on the target service.
We'll deploy a sleep pod that allows us to jump/exec into a pod.
Istio can automatically implement mTLS via labeling of namespaces.
root@kubernetes:~/istio-workshops/istio-basics# kubectl get peerauthentication --all-namespaces
No resources found
PeerAuthentication is the CRD that sets up peer authentication.
In this first section, we'll enforce strict authentication.
kubectl apply -n istio-system -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
spec:
  mtls:
    mode: STRICT
EOF
Now,
root@kubernetes:~/istio-workshops/istio-basics# kubectl get peerauthentication --all-namespaces
NAMESPACE      NAME      MODE     AGE
istio-system   default   STRICT   21s
Since we applied this in the istio-system (root) namespace, strict mTLS is enforced mesh-wide; created in a regular namespace, it would apply only there.
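To scope enforcement to a single namespace instead, you create the same resource in that namespace; a sketch:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istioinaction     # applies only to workloads in this namespace
spec:
  mtls:
    mode: STRICT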
Let's deploy a sleep pod and exec into it -
root@kubernetes:~/istio-workshops/istio-basics# k get pods -n default
NAME                    READY   STATUS    RESTARTS   AGE
sleep-6fb84cbcf-dsrbk   1/1     Running   0          8s
So now,
root@kubernetes:~/istio-workshops/istio-basics# kubectl exec deploy/sleep -n default -- curl http://web-api.istioinaction:8080/
This ^ command will fail, whereas
root@kubernetes:~/istio-workshops/istio-basics# kubectl exec deploy/sleep -n istioinaction -- curl http://web-api.istioinaction:8080/
This ^ will succeed and dump the same data as before.
Kiali gives us a way to see mTLS traffic by enabling traffic animation (Display dropdown) and Security.
So the identity of the workload is built up through the mesh it's a part of.
Mesh identity.
You can use PeerAuthentication to enforce mTLS.
You can now write authorization policies based on this mesh identity.
So you could enforce the idea that, e.g., I only want service X to accept connections or requests from service Y, and no other.
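A sketch of such an AuthorizationPolicy, assuming we only want web-api to accept requests coming through the ingress gateway (the policy name and the gateway's service-account principal are assumptions, though the principal shown is the usual default):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: web-api-allow-ingress  # hypothetical name
  namespace: istioinaction
spec:
  selector:
    matchLabels:
      app: web-api
  action: ALLOW
  rules:
  - from:
    - source:
        # SPIFFE-style mesh identity: cluster.local/ns/<namespace>/sa/<service account>
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]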
Lab 5:
======
You are now ready to take control of how traffic flows between your services.
http://jsonplaceholder.typicode.com/
root@kubernetes:~/istio-workshops/istio-basics# cat labs/05/purchase-history-vs-all-v1.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
        port:
          number: 8080
      weight: 100
With the above, we say that we want v1 to receive all (100%) of the traffic.
Whoa. We can also mirror traffic - say, mirror 40% of the traffic intended for Service A through Service B, with Service A still getting 100% of that traffic.
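A sketch of what that mirroring looks like in a VirtualService (subset names follow the lab's v1/v2 convention; the mirrored copies are fire-and-forget, so v1 still serves every live request):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
      weight: 100              # v1 still receives all live traffic
    mirror:
      host: purchase-history.istioinaction.svc.cluster.local
      subset: v2
    mirrorPercentage:
      value: 40.0              # share of requests copied to the mirror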
Virtual Services are rules for routing traffic. E.g., canary deployments.
Destination Rules are rules applied to destinations for traffic leaving a proxy.
A Service Entry is how you define services that are external to our mesh.
By default, Services are part of Istio's service registry. For external services that aren't native K8s services, you can use service entries to register them.
Sidecar proxy configuration - covered in the 2nd-level workshop that you can sign up for.
Another thing you can do with Service Entries is allow-list the external services traffic can communicate with; otherwise, you may not want any traffic to go outbound.
So by adding a metadata.version label, we can target Deployments.
By default, K8s will round-robin between our Deployments' Pods. By creating a DestinationRule object, we've created 2 subsets of objects, separated by the metadata.labels.version value on our Deployments. Then we've added a VirtualService object that says: if there exist any v1 Pods, send 100% of the traffic to those Pods and no others, overriding the default round-robin behavior. (A sketch of the DestinationRule follows below.)
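The DestinationRule itself isn't captured in these notes; it presumably looks like this sketch (resource name hypothetical):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: purchase-history-dr    # hypothetical name
  namespace: istioinaction
spec:
  host: purchase-history.istioinaction.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1              # matches metadata.labels.version on the v1 Deployment's pods
  - name: v2
    labels:
      version: v2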
From here, we can now deploy v2 of our app behind the same namespace/Service.
K8s doesn't guarantee container startup ordering within a Pod. However, Istio does -
> From the holdApplicationUntilProxyStarts annotation below, you have configured the purchase-history-v2 pod to delay starting until the istio-proxy container reaches its Running status
root@kubernetes:~/istio-workshops/istio-basics# cat labs/05/purchase-history-v2-updated.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: purchase-history-v2
  labels:
    app: purchase-history
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: purchase-history
      version: v2
  template:
    metadata:
      labels:
        app: purchase-history
        version: v2
      annotations:
        proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
...
So we have an annotation here that will ensure the application does not start until our injected proxy container has started.
So this solves the ordering problem.
Now, we'll route traffic to application v2 iff there's a header field with user == Tom.
root@kubernetes:~/istio-workshops/istio-basics# cat labs/05/purchase-history-vs-all-v1-header-v2.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - match:
    - headers:
        user:
          exact: Tom
    route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v2
        port:
          number: 8080
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
        port:
          number: 8080
      weight: 100
This ensures there's a route to subset v2 for that user.
For canaries, we can shift percentages of traffic instead of targeting only specific requests.
root@kubernetes:~/istio-workshops/istio-basics# cat labs/05/purchase-history-vs-20-v2.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
        port:
          number: 8080
      weight: 80
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v2
        port:
          number: 8080
      weight: 20
root@kubernetes:~/istio-workshops/istio-basics# for i in {1..100}; do curl -s --cacert ./labs/02/certs/ca/root-ca.crt -H "Host: istioinaction.io" https://istioinaction.io:$SECURE_INGRESS_PORT --resolve istioinaction.io:$SECURE_INGRESS_PORT:$GATEWAY_IP|grep -oP "(v1|v2)"; done | sort -n | uniq -c
    150 v1
     50 v2
(grep -o counts every occurrence, so 100 requests yield 200 matches here; the 150/50 split is roughly the configured 80/20 weighting.)
For external services, we can register an external service with something like the following -
root@kubernetes:~/istio-workshops/istio-basics# cat labs/05/typicode-se.yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: typicode-svc-https
spec:
  hosts:
  - jsonplaceholder.typicode.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
Otherwise, requests won't work. Now, we'll get a 200 response code.
Kiali will also show the registered external service in its UI.
You don't need an egress gateway, but it allows for even finer-grained control over what in the mesh can access external services - a sketch follows below.
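A sketch of an egress Gateway definition (resource name hypothetical; you'd additionally need a VirtualService, and possibly a DestinationRule, to actually route mesh traffic through it):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway         # hypothetical name
  namespace: istio-system
spec:
  selector:
    istio: egressgateway       # selects the egress gateway deployment installed by the demo profile
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - jsonplaceholder.typicode.com
    tls:
      mode: PASSTHROUGH        # pass the TLS connection through without terminating it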
Lab 6:
======
N/A
This gist includes my notes from an Istio workshop, which you can learn more about here -
https://www.istioworkshop.io/