# Source: https://gist.github.com/vfarcic/e9ebcaa301c95986fe7bd83b0ee079a0
######################################################################################
# Applying Kubernetes Policies On Infra And Apps By Combining Kyverno And Crossplane #
# https://youtu.be/PVjaJwEJ5mQ                                                       #
######################################################################################
# References:
# - Crossplane - GitOps-based Infrastructure as Code through Kubernetes API: https://youtu.be/n8KjVmuHm7A
# - How To Shift Left Infrastructure Management Using Crossplane Composites: https://youtu.be/AtbS1u2j7po
# - Applying GitOps To Infrastructure With Flux And Crossplane: https://youtu.be/CNz52CPHZIM
# - Kyverno: https://youtu.be/DREjzfTzNpA
# - Upbound docs: https://cloud.upbound.io/docs/
# - Crossplane docs: https://crossplane.io/docs
#########
# Setup #
#########
git clone \
    https://github.com/vfarcic/crossplane-kyverno-demo
cd crossplane-kyverno-demo
# This demo creates a kind cluster; feel free to use any other Kubernetes platform
# Please watch https://youtu.be/mCesuGk-Fks if you are not familiar with lightweight local clusters like k3d
kind create cluster --config kind.yaml
kubectl create namespace a-team
kubectl create namespace b-team
#############
# Setup AWS #
#############
# Replace `[...]` with your access key ID
export AWS_ACCESS_KEY_ID=[...]
# Replace `[...]` with your secret access key
export AWS_SECRET_ACCESS_KEY=[...]
echo "[default]
aws_access_key_id = $AWS_ACCESS_KEY_ID
aws_secret_access_key = $AWS_SECRET_ACCESS_KEY
" | tee aws-creds.conf
kubectl create namespace upbound-system
kubectl --namespace upbound-system \
    create secret generic aws-creds \
    --from-file creds=./aws-creds.conf
####################
# Setup Crossplane #
####################
helm repo add upbound \
    https://charts.upbound.io/stable
helm repo update
helm upgrade --install \
    universal-crossplane upbound/universal-crossplane \
    --version 1.3.0-up.0 \
    --namespace upbound-system \
    --create-namespace \
    --wait
kubectl create namespace crossplane-system
kubectl apply --filename crossplane
# If the previous command threw `error: unable to recognize "crossplane/providers.yaml"`, the provider is not yet up-and-running.
# Wait for a few moments and re-run the previous command.
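# Instead of re-running manually, the retry can be scripted. A minimal sketch;
# the `retry` helper is an illustration, not part of the repo:

```shell
# retry N CMD...: re-run CMD until it succeeds, at most N times, pausing
# RETRY_DELAY seconds (default 10) between attempts.
retry() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then
      return 1
    fi
    sleep "${RETRY_DELAY:-10}"
  done
}
# Usage in this demo would be:
#   retry 30 kubectl apply --filename crossplane
```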
#################
# Setup Kyverno #
#################
helm repo add \
    kyverno https://kyverno.github.io/kyverno/
helm repo update
helm upgrade --install \
    kyverno kyverno/kyverno \
    --version v2.0 \
    --namespace kyverno \
    --create-namespace \
    --set validationFailureAction=enforce \
    --wait
kubectl apply --filename kyverno
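# The `kyverno` directory in the repo holds the actual policies. For orientation
# only, a validation policy restricting `nodeSize` on cluster claims might look
# roughly like the sketch below; the resource kind, field names, and allowed
# values are guesses, not the repo's real policy:

```shell
# Hypothetical Kyverno policy sketch (NOT the contents of the repo's
# `kyverno` directory): reject claims whose nodeSize is not small or large.
cat > require-node-size.yaml <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-node-size
spec:
  validationFailureAction: enforce
  rules:
  - name: check-node-size
    match:
      resources:
        kinds:
        - ClusterClaim
    validate:
      message: "nodeSize must be small or large"
      pattern:
        spec:
          parameters:
            nodeSize: "small | large"
EOF
# It would then be applied with:
#   kubectl apply --filename require-node-size.yaml
```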
##########################
# Clusters with policies #
##########################
cat infra/cluster-a-team.yaml
# We should be using GitOps. Watch the Flux video; the link is in the description.
kubectl --namespace a-team \
    apply --filename infra/cluster-a-team.yaml
kubectl --namespace a-team \
    get clusterclaims,managed,providerconfigs,releases
kubectl --namespace a-team \
    get clusterclaims
# Repeat the previous command until the `CONTROLPLANE` column is set to `ACTIVE`
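# That manual repetition can also be scripted. A sketch; `wait_for` is a
# made-up helper that polls until a command's output contains a substring:

```shell
# wait_for SUBSTRING CMD...: poll CMD until its output contains SUBSTRING,
# pausing POLL_DELAY seconds (default 10) between checks.
wait_for() {
  want=$1; shift
  until "$@" 2>/dev/null | grep -q "$want"; do
    sleep "${POLL_DELAY:-10}"
  done
}
# Usage in this demo would be:
#   wait_for ACTIVE kubectl --namespace a-team get clusterclaims
```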
export KUBECONFIG=$PWD/kubeconfig.yaml
aws eks --region us-east-1 \
    update-kubeconfig \
    --name a-team
kubectl apply --filename my-app.yaml
#######################
# How did we get here #
#######################
cat crossplane/definition.yaml
cat crossplane/composition-eks.yaml
unset KUBECONFIG
###############################
# Policies for infrastructure #
###############################
cat infra/cluster-b-team.yaml
kubectl --namespace b-team \
    apply --filename infra/cluster-b-team.yaml
# Open `infra/cluster-b-team.yaml` in an editor and change `spec.parameters.nodeSize` to `medium`
kubectl --namespace b-team \
    apply --filename infra/cluster-b-team.yaml
###########
# Destroy #
###########
kubectl --namespace a-team \
    delete --filename infra/cluster-a-team.yaml
kubectl --namespace b-team \
    delete --filename infra/cluster-b-team.yaml
kubectl get managed
# Repeat until all the resources are removed
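# This wait, too, can be automated. A sketch, assuming the command prints
# nothing on stdout once every managed resource is gone (`wait_until_empty`
# is a made-up helper):

```shell
# wait_until_empty CMD...: poll until CMD produces no output, i.e. until no
# resources are left. POLL_DELAY (default 10) is the pause between checks.
wait_until_empty() {
  while [ -n "$("$@" 2>/dev/null)" ]; do
    sleep "${POLL_DELAY:-10}"
  done
}
# Usage in this demo would be:
#   wait_until_empty kubectl get managed --no-headers
```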
kind delete cluster
cat infra/cluster-b-team.yaml \
    | sed -e "s@nodeSize: .*@nodeSize: large@g" \
    | tee infra/cluster-b-team.yaml
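# One caveat: piping `cat` into `tee` on the same file is racy, because `tee`
# truncates the file while `cat` may still be reading it. A safer equivalent
# writes to a temporary file first; the sketch below demonstrates it on a
# stand-in file rather than the repo's real `infra/cluster-b-team.yaml`:

```shell
# Stand-in for infra/cluster-b-team.yaml, carrying the same field.
file=infra-cluster-b-team-sample.yaml
printf 'spec:\n  parameters:\n    nodeSize: medium\n' > "$file"

# Edit via a temp file, then move it into place, so the source file is
# never truncated while it is still being read.
tmp=$(mktemp)
sed -e "s@nodeSize: .*@nodeSize: large@g" "$file" > "$tmp" \
    && mv "$tmp" "$file"
```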
As a further example, this could work either by evaluating a whole file or by pulling individual values so that only the necessary credentials are retrieved. Both approaches could look something like this.
- Upload a file of environment variables with defined values
- Evaluate the contents of the remotely stored files
- Alternatively, store each secret individually and use jq to pull each secret based on its key in the list
You're right. We should not manage credentials the way I did in this Gist. Nevertheless, that was only for demo purposes; I did not want to complicate the main subject by adding additional ones (e.g., security) into the mix.
In the specific case of AWS, almost everyone keeps the credentials in the AWS config instead of using env. vars. as I did in that demo. That is only slightly more secure, mostly because it leaves no traces in the shell history, but it is still not secure.
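Reading the keys from the AWS CLI's credentials file instead of exporting them is what keeps them out of the shell history. A minimal sketch with a hand-rolled INI lookup; `ini_get` is hypothetical (in practice `aws configure get` does this properly):

```shell
# ini_get SECTION KEY FILE: print KEY's value from [SECTION] in a simple
# `key = value` INI file. Assumes well-formed input with no quoting.
ini_get() {
  awk -v section="[$1]" -v key="$2" '
    $0 == section { in_section = 1; next }  # entered the wanted section
    /^\[/         { in_section = 0 }        # any other section header ends it
    in_section && $1 == key { print $3 }    # fields: key, "=", value
  ' "$3"
}
# Usage against the real AWS CLI credentials file would be:
#   AWS_ACCESS_KEY_ID=$(ini_get default aws_access_key_id ~/.aws/credentials)
```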
In a "real" corporate setting, all the credentials would be in the provider's encrypted key/value storage or in something like HashiCorp Vault, and would not be accessible directly by humans, but through pipelines or other automation tools.
I feel that solutions like 1Password are better suited for personal information. I might be wrong though.
Secret management through bash-script environment variables is always annoying because, to my knowledge, there are no good industry-wide standards that can be used everywhere. It would be interesting to see how you manage your secrets across local, staging, and production. Recently, I started extending my use of 1Password to include their Kubernetes operator and CLI. It seems like a nice solution that covers all the bases.
For further context, here is the relevant helper script I created for managing secrets locally last night.
This suggestion isn't the best solution. I think an ideal one would override the default environment-variable lookup to search the secret manager when the variable is missing. The secret manager could then be synchronized and used both locally and with remote application workloads. Finally, it could be further optimized by defining a well-known standard or schema that allows everyone to share secret keys while keeping the secret values unique to their particular SaaS accounts.
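That fallback idea can be sketched in a few lines of shell. `SECRET_CMD` stands in for whatever CLI the secret manager provides; nothing here is a real tool's API:

```shell
# env_or_secret NAME: print $NAME if it is set in the environment,
# otherwise ask the secret backend. SECRET_CMD is any command that
# prints the secret for a given key (point it at your manager's CLI).
env_or_secret() {
  value=$(printenv "$1" || true)
  if [ -n "$value" ]; then
    printf '%s\n' "$value"
  else
    ${SECRET_CMD:?set SECRET_CMD to your secret-lookup command} "$1"
  fi
}
```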