- Install Flux CLI and Kind
- Make Personal Access Token for creating repositories
- Export env vars locally
- Create local demo cluster
- Simple bootstrap
- Clone the newly created git repo to your local workspace
- Let's create a Helm release the most common way, using the Helm CLI
- Now let's convert these to declarative CRs that Flux understands
- Let's go ahead and push this to Git
- Let's check out the magic
- Change a new Helm release value through Git
- Pause and resume
- Clean up demo cluster 🧹
- Disaster recovery
- Wrap up
$ brew install fluxcd/tap/flux kind
$ flux --version && kind --version
flux version 0.29.4
kind version 0.12.0
- Generate new token in dev settings
- Check all permissions under repo & save
- Copy PAT to buffer
I've done this in advance for now.

💡 If you want to show this during a demo, follow security best practices: create the PAT off camera, or copy it from a secure password app on camera, then read it into a var silently with `read -s` and export the var.
$ export GITHUB_TOKEN=[paste PAT]
$ echo -n $GITHUB_TOKEN | wc -c
40
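💡 If you want a slightly stronger sanity check than counting characters, a tiny shell helper can verify the token's shape. This is a sketch based on an assumption: classic PATs issued since 2021 are 40 characters starting with `ghp_`, while fine-grained tokens (`github_pat_…`) look different.

```shell
# Sketch: sanity-check the shape of a classic GitHub PAT before using it.
# Assumption: classic tokens are "ghp_" followed by 36 characters.
check_pat() {
  case "$1" in
    ghp_????????????????????????????????????) echo "looks like a classic PAT" ;;
    *) echo "unexpected token format" ;;
  esac
}

check_pat "$GITHUB_TOKEN"
```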
$ kind create cluster
(took 40s)
💡 The more complex your org is, the more complex your directory structure and patterns usually are.
There is no gold standard.
Flux is not opinionated about how directories are structured, rather it tries to be as flexible as possible to accommodate different patterns.
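For example, a common (though by no means required) layout, similar to the Flux community's flux2-kustomize-helm-example repo, keeps per-cluster entry points separate from shared definitions. The directory names other than `clusters/dev` are illustrative:

```text
.
├── clusters
│   ├── dev            # entry point Flux syncs for the dev cluster
│   └── prod           # entry point for the prod cluster
├── infrastructure     # shared controllers: ingress, cert-manager, etc.
└── apps               # HelmReleases for apps, overlaid per environment
```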
$ flux bootstrap github \
  --interval 10s \
  --owner scottrigby --personal \
  --repository flux-for-helm-users \
  --branch main \
  --path=./clusters/dev
► connecting to github.com
✔ repository "https://github.com/scottrigby/flux-for-helm-users" created
► cloning branch "main" from Git repository "https://github.com/scottrigby/flux-for-helm-users.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed sync manifests to "main" ("42a5e71e792cf3ca0393fefea4c4375e72d9fc47")
► pushing component manifests to "https://github.com/scottrigby/flux-for-helm-users.git"
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMbDpSb+E912hnXZWX/x9RFWPscqsTJ/8bzgYLgYEywpkWwQNZVCjdvhLiNEexXMqk5IO3JxF9ScAa76IB6kYRFZ8WlGwoBNINU2HcXmtJF/9LZgUKzF53ioK9esCO+rYw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/dev" for "https://github.com/scottrigby/flux-for-helm-users"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "main" ("055e5edfbace022504101c763b65b1f7c2134187")
► pushing sync manifests to "https://github.com/scottrigby/flux-for-helm-users.git"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
(took 1m3s)
$ cd ~/code/github.com/scottrigby \
&& git clone [email protected]:scottrigby/flux-for-helm-users.git \
&& cd flux-for-helm-users
$ tree
.
└── clusters
    └── dev
        └── flux-system
            ├── gotk-components.yaml
            ├── gotk-sync.yaml
            └── kustomization.yaml

3 directories, 3 files
$ helm repo add podinfo https://stefanprodan.github.io/podinfo
Let's set some custom values to make this fun. Remember them: we'll come back to these later.
π‘ Helm CLI is great to show all the available options in a chart:
$ helm show values podinfo --repo https://stefanprodan.github.io/podinfo
# Default values for podinfo.
replicaCount: 1
logLevel: info
ui:
  color: "#34577c"
  message: ""
  logo: ""
etc…
$ helm upgrade -i my-release podinfo/podinfo \
  --set replicaCount=2 \
  --set logLevel=debug \
  --set ui.color='red'
Release "my-release" does not exist. Installing it now.
…
Create a Source Custom Resource locally

💡 The Helm CLI reads your locally defined Helm repo info (created in step 7). But the Flux Helm controller in your cluster will also need this same info. We'll tell Flux about the Helm repo with a `HelmRepository` CR representing a Flux source.

Instead of `helm repo add`, you can use `flux create source helm` to export the CR to a local file:
$ flux create source helm podinfo \
  --url=https://stefanprodan.github.io/podinfo \
  --namespace=default \
  --export > clusters/dev/source-helmrepo-podinfo.yaml
$ cat clusters/dev/source-helmrepo-podinfo.yaml
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 1m0s
  url: https://stefanprodan.github.io/podinfo
Next we'll create a `HelmRelease` Custom Resource locally, using the same Helm values we earlier specified with the Helm CLI.

💡 The Helm CLI makes it very easy to get the values we earlier set for the release. We'll first export these to a file, then take a look at its contents:
$ helm get values my-release -oyaml > my-values.yaml
$ cat my-values.yaml
logLevel: debug
replicaCount: 2
ui:
  color: red
And again, the Flux CLI makes it easy to create the CR. You may also do this by hand, or with an IDE (for example with the VSCode Flux plugin), but the CLI command streamlines it:
$ flux create helmrelease my-release \
  --release-name=my-release \
  --source=HelmRepository/podinfo \
  --chart=podinfo \
  --chart-version=">4.0.0" \
  --namespace=default \
  --values my-values.yaml \
  --export > ./clusters/dev/podinfo-helmrelease.yaml
$ cat clusters/dev/podinfo-helmrelease.yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-release
  namespace: default
spec:
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
      version: '>4.0.0'
  interval: 1m0s
  values:
    logLevel: debug
    replicaCount: 2
    ui:
      color: red
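As an aside, inline `values` isn't the only option: a `HelmRelease` can also pull values from a ConfigMap or Secret via `valuesFrom`. A sketch, where the ConfigMap name is hypothetical:

```yaml
# Sketch: referencing values from a ConfigMap instead of inlining them.
# The ConfigMap "my-release-values" is hypothetical; its data must hold a
# values.yaml document (the default key is "values.yaml").
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-release
  namespace: default
spec:
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  interval: 1m0s
  valuesFrom:
    - kind: ConfigMap
      name: my-release-values
```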
We no longer need the temporary values file. Let's be tidy:
$ rm my-values.yaml
$ git add clusters/dev
$ git commit -m 'Configure podinfo Helm Repo source and Helm Release'
$ git push
…
💡 From this point on, you are now doing GitOps.
We can verify that Flux is now managing this Helm release.
💡 If you want to immediately trigger reconciliation on a local demo cluster, you can run `flux reconcile` manually. We shouldn't need to do that in this demo because we set the interval to 10s. In real-world clusters there are important use cases for setting up webhook receivers to automate this immediacy, and there are equally important use cases for letting your defined sync interval run its course.

$ flux reconcile helmrelease my-release
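As an aside, the webhook receivers mentioned above are themselves declarative: the Flux notification-controller watches a `Receiver` CR like the sketch below. The secret name is hypothetical, and you would point GitHub's webhook settings at the receiver URL the controller generates:

```yaml
# Sketch: a GitHub webhook receiver that triggers reconciliation of the
# flux-system GitRepository on push events. "webhook-token" is a
# hypothetical Secret holding the shared webhook secret.
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: github-receiver
  namespace: flux-system
spec:
  type: github
  events:
    - "push"
  secretRef:
    name: webhook-token
  resources:
    - kind: GitRepository
      name: flux-system
```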
Flux will have added labels showing it now manages the release:
$ kubectl get deploy my-release-podinfo -oyaml | grep flux
helm.toolkit.fluxcd.io/name: my-release
helm.toolkit.fluxcd.io/namespace: default
You believe me that we are now doing GitOps, but let's prove it. Change a value in your `HelmRelease` CR:
$ yq -i '.spec.values.ui.color = "blue"' clusters/dev/podinfo-helmrelease.yaml
$ git add clusters/dev
$ git diff --staged
diff --git a/clusters/dev/podinfo-helmrelease.yaml b/clusters/dev/podinfo-helmrelease.yaml
index b58eed2..5e1dc10 100644
--- a/clusters/dev/podinfo-helmrelease.yaml
+++ b/clusters/dev/podinfo-helmrelease.yaml
@@ -17,5 +17,5 @@ spec:
     logLevel: debug
     replicaCount: 2
     ui:
-      color: red
+      color: blue
$ git commit -m "blue me"
$ git push
We can see our Helm release incremented the revision:
$ helm list
NAME        NAMESPACE  REVISION  UPDATED                                STATUS    CHART          APP VERSION
my-release  default    3         2022-02-17 06:16:42.2293519 +0000 UTC  deployed  podinfo-6.0.3  6.0.3
And that the new release revision applied our change:
$ helm diff revision my-release 2 3
env:
- name: PODINFO_UI_COLOR
- value: red
+ value: blue
image: ghcr.io/stefanprodan/podinfo:6.0.3
imagePullPolicy: IfNotPresent
Let's get visual:
$ kubectl -n default port-forward deploy/my-release-podinfo 8080:9898
Forwarding from 127.0.0.1:8080 -> 9898
Browse to http://localhost:8080
Er mah gerd, it's blue!
Let's pretend we're in incident management and want to use `helm rollback`.

💡 It's worth noting that a Flux `HelmRelease` retains Helm release metadata and Helm's ability to manage releases directly. There are various benefits to this, including the ability to continue using your favorite development tools that integrate with Helm releases (such as `helm list`, the `helm diff` plugin, etc). This is also helpful in production: for example, there are legitimate use cases for pausing GitOps operations and temporarily using the Helm CLI, such as incident management. Pausing and resuming GitOps reconciliation may be done on a per-Custom-Resource basis without affecting the others, for example a single `HelmRelease`:
$ flux suspend helmrelease my-release --namespace default
► suspending helmreleases my-release in default namespace
✔ helmreleases suspended
The Flux CLI has a handy `flux get` feature that gives additional info in its output, including whether or not reconciliation is suspended for a resource. Here we can see `SUSPENDED` is `True`:
$ flux get hr my-release --namespace default
NAME        REVISION  SUSPENDED  READY  MESSAGE
my-release  6.1.1     True       True   Release reconciliation succeeded
Let's roll back to red using the Helm CLI, to show that it works.
$ helm rollback my-release 2
Rollback was a success! Happy Helming!
We can port forward again to see that it worked:
$ kubectl -n default port-forward deploy/my-release-podinfo 8080:9898
Forwarding from 127.0.0.1:8080 -> 9898
OK yay, back to red!
Once we're finished with our incident-management window and want to resume GitOps reconciliation on that resource, we just need to resume it:
$ flux resume helmrelease my-release --namespace default
► resuming helmreleases my-release in default namespace
✔ helmreleases resumed
◎ waiting for HelmRelease reconciliation
✔ HelmRelease reconciliation completed
✔ applied revision 6.0.3
We can see `SUSPENDED` is `False`, which means reconciliation has resumed:

$ flux get hr my-release --namespace default
NAME        REVISION  SUSPENDED  READY  MESSAGE
my-release  6.1.1     False      True   Release reconciliation succeeded
Port forward again, and take a look.
Back to blue as planned.
$ kind delete cluster
And if you wish, feel free to delete your demo GitHub repo.
💡 Or not! These commands are idempotent, so you can feel free to keep your repo. In fact… let's try it!
Want to see how Flux handles your Helm release in a disaster recovery scenario?
Let's simulate total cluster failure by just deleting it:
$ kind delete cluster
We can create a new one by repeating step 4 (`kind create cluster`). Then we just need to install Flux components into the new cluster by repeating the `flux bootstrap` command from step 5.
💡 Because we still have our desired state defined in the Git repo we specify in `flux bootstrap`, reconciliation will happen automatically. Our Helm release should now match what we've defined in Git, as the source of truth!
💡 You'll notice the Helm metadata revision is back to `1`, because that history is kept only as in-cluster storage. New cluster, revisions start anew.
$ helm list
NAME        NAMESPACE  REVISION  UPDATED                                STATUS    CHART          APP VERSION
my-release  default    1         2022-02-17 06:56:15.5594991 +0000 UTC  deployed  podinfo-6.0.3  6.0.3
And there we have it!
- On a local `kind` cluster, we simulated an existing Helm release using the Helm CLI you're already familiar with (`helm install`)
- We used the Flux CLI to bootstrap Flux components into the cluster, and simultaneously define and create (if it didn't already exist) a properly formatted Git repo containing the bootstrap manifests
- Used the Flux CLI to easily create Custom Resources for the Helm repo and release, along with our existing release's custom values
- Pushed the files to Git, and showed the Flux labels that mean it has taken ownership of managing your existing Helm release
- Proved this by making changes to Git only, and watching Flux magically update your Helm release from Git
- Showed how to pause and resume the automated continuous reconciliation on a single `HelmRelease`, which you might use during in-cluster development or incident management
- Simulated disaster recovery of your Helm release by deleting your entire cluster. Bootstrapping Flux again was all we needed to get your system and Helm-released apps running again