Instructions for installing Flux on an OKE (Oracle Kubernetes Engine) cluster. We start by installing the Flux CLI in Cloud Shell, which has no sudo access and ships an older version of Go. The standard Flux CLI install script can't be followed exactly, but with a few tweaks we can make it work on Oracle Cloud.
Thanks to @scottrigby for sharing their original version of this with me!
- Install Flux CLI
- Make Personal Access Token for creating repositories
- Export env vars locally
- Create local demo cluster
- Simple bootstrap
- Clone the newly created git repo to your local workspace
- Let's create a Helm release the most common way, using the Helm CLI
- Now let's convert these to declarative CRs that Flux understands
- Let's go ahead and push this to Git
- Let's check out the magic
- Change a new Helm release value through Git
- Pause and resume
- Cleanup demo cluster 🧹
- Disaster recovery
- Wrap up
At the time of writing you need to install a newer Go toolchain than the one Cloud Shell provides:
go install golang.org/dl/go1.23.1@latest
go1.23.1 download
go1.23.1 version
Add the following dirs to your PATH env var:
echo 'export PATH=$PATH:/usr/share/gocode/bin:$HOME/bin' >> ~/.bashrc
source ~/.bashrc
mkdir flux
curl -s https://fluxcd.io/install.sh > install-cli.sh
I did this step manually: download the script first instead of piping curl straight to bash, so we can pass a custom install directory. When an installation directory ($HOME/bin/ here) is passed as the first parameter, the flux install script uses it instead of the default /usr/local/bin, which would require sudo.
source ./install-cli.sh $HOME/bin/
flux --version
flux version 2.4.0
For a Quality of Life / User Experience boost, you can set up shell completion for flux as follows:
mkdir $HOME/.flux
Then include the following in your .bashrc:
# Check if flux completion file exists, create it if not, and source it
if [ ! -f ~/.flux/completion.bash ]; then
echo "flux completion bash > ~/.flux/completion.bash"
flux completion bash > ~/.flux/completion.bash
fi
echo "sourcing ~/.flux/completion.bash"
source ~/.flux/completion.bash
export PATH=$PATH:/usr/share/gocode/bin:$HOME/bin
Then typing flux and pressing TAB twice should produce:
flux TAB TAB
bootstrap completion diff export install pull resume tag uninstall
build create envsubst get list push stats trace version
check delete events help logs reconcile suspend tree
- Generate new token in dev settings
- Check all permissions under repo & save
- Copy PAT to buffer
I've done this in advance for now.
💡 If you want to show this during a demo, follow security best practices: create the PAT off camera, or copy it on camera from a secure password app, then read it into a variable silently with read -s and export it:
$ export GITHUB_TOKEN=[paste PAT]
$ echo -n $GITHUB_TOKEN | wc -c
40
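The bootstrap command below also needs your GitHub username. Exporting it alongside the token keeps the commands copy-pasteable (GITHUB_USER is just the conventional variable name from the Flux docs):
$ export GITHUB_USER=[your GitHub username]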
To connect to your OKE cluster, see Access Cluster on the Oracle Cloud Console; it gives you the exact oci CLI command to create your kubeconfig in Cloud Shell.
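Once the kubeconfig is in place, a quick sanity check that you can reach the cluster and that it meets Flux's requirements (node names and counts will differ):
$ kubectl get nodes
$ flux check --pre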
💡 The more complex your org is, the more complex your directory structure and patterns usually are.
There is no gold standard.
Flux is not opinionated about how directories are structured, rather it tries to be as flexible as possible to accommodate different patterns.
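For example, one common (but by no means required) convention is a directory per cluster or environment; this demo uses clusters/dev:
clusters/
├── dev/
├── staging/
└── production/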
$ flux bootstrap github \
  --interval 10s \
  --owner $GITHUB_USER --personal \
  --repository flux-for-helm-users \
  --branch main \
  --path=clusters/dev
…
(took 1m3s)
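Once bootstrap finishes, you can confirm the controllers are healthy; flux check prints the component versions it finds (yours will vary):
$ flux check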
$ cd ~/code/github.com/scottrigby \
  && git clone git@github.com:scottrigby/flux-for-helm-users.git \
  && cd flux-for-helm-users
$ tree
.
└── clusters
    └── dev
        └── flux-system
            ├── gotk-components.yaml
            ├── gotk-sync.yaml
            └── kustomization.yaml

3 directories, 3 files
Now let's create a Helm release the most common way, using the Helm CLI. Remember that we set custom values here; we will get back to this later.
helm repo add podinfo https://stefanprodan.github.io/podinfo
Let's set some values to make this fun.
💡 The Helm CLI is great for showing all the available options in a chart:
$ helm show values podinfo --repo https://stefanprodan.github.io/podinfo
# Default values for podinfo.
replicaCount: 1
logLevel: info
ui:
  color: "#34577c"
  message: ""
  logo: ""
etc…
$ helm upgrade -i my-release podinfo/podinfo \
--set replicaCount=2 \
--set logLevel=debug \
--set ui.color='red'
Release "my-release" does not exist. Installing it now.
…
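Before converting anything, it's worth confirming the release landed: it should show as deployed, with two podinfo pods running (names and ages will differ):
$ helm list -n default
$ kubectl get pods -n default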
Create a Source Custom Resource locally
💡 The Helm CLI reads your locally defined Helm repo info (created above with helm repo add). But the Flux Helm controller in your cluster will also need this same info. We'll tell Flux about the Helm repo with a HelmRepository CR representing a Flux source.
Instead of helm repo add, you can use flux create source helm to export the CR to a local file:
$ flux create source helm podinfo \
--url=https://stefanprodan.github.io/podinfo \
--namespace=default \
--export > clusters/dev/source-helmrepo-podinfo.yaml
$ cat clusters/dev/source-helmrepo-podinfo.yaml
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 1m0s
  url: https://stefanprodan.github.io/podinfo
Next we'll create a HelmRelease Custom Resource locally, using the same Helm values we specified earlier with the Helm CLI.
💡 The Helm CLI makes it very easy to get the values we set earlier for the release. We'll first export these to a file, then take a look at its contents:
$ helm get values my-release -oyaml > my-values.yaml
$ cat my-values.yaml
logLevel: debug
replicaCount: 2
ui:
  color: red
And again, the Flux CLI makes it easy to create the CR. You could also write it by hand, or with an IDE (for example with the VSCode Flux plugin), but the CLI command eases this:
$ flux create helmrelease my-release \
--release-name=my-release \
--source=HelmRepository/podinfo \
--chart=podinfo \
--chart-version=">4.0.0" \
--namespace=default \
--values my-values.yaml \
--export > ./clusters/dev/podinfo-helmrelease.yaml
$ cat clusters/dev/podinfo-helmrelease.yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-release
  namespace: default
spec:
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
      version: '>4.0.0'
  interval: 1m0s
  values:
    logLevel: debug
    replicaCount: 2
    ui:
      color: red
We no longer need the temporary values file. Let's be tidy:
rm my-values.yaml
$ git add clusters/dev
$ git commit -m 'Configure podinfo Helm Repo source and Helm Release'
$ git push
…
💡 From this point on, you are now doing GitOps.
We can verify that Flux is now managing this Helm release.
💡 If you want to immediately trigger reconciliation on a demo cluster you can manually run flux reconcile. We shouldn't need to trigger that manually in this demo because we set the interval to 10s. In real-world clusters there are important use cases for setting up webhook receivers to automate this immediacy, and there are equally important use cases for letting your defined sync interval run its course.
flux reconcile helmrelease my-release
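💡 When the change you're waiting on is a Git commit rather than an in-cluster edit, the flux-system Kustomization is what pulls it in; if you're impatient you can reconcile it together with its source (strictly optional here):
flux reconcile kustomization flux-system --with-source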
Flux adds its labels to the resources it now manages:
$ kubectl get deploy my-release-podinfo -oyaml | grep flux
helm.toolkit.fluxcd.io/name: my-release
helm.toolkit.fluxcd.io/namespace: default
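You can also ask Flux itself; flux get (covered more below) should now show the release as Ready:
$ flux get helmreleases -n default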
You may take my word that we're now doing GitOps, but let's prove it.
Change a value in your HelmRelease CR:
$ yq -i '.spec.values.ui.color = "blue"' clusters/dev/podinfo-helmrelease.yaml
$ git add clusters/dev
$ git diff --staged
diff --git a/clusters/dev/podinfo-helmrelease.yaml b/clusters/dev/podinfo-helmrelease.yaml
index b58eed2..5e1dc10 100644
--- a/clusters/dev/podinfo-helmrelease.yaml
+++ b/clusters/dev/podinfo-helmrelease.yaml
@@ -17,5 +17,5 @@ spec:
     logLevel: debug
     replicaCount: 2
     ui:
-      color: red
+      color: blue
$ git commit -m "blue me"
$ git push
We can see our Helm release incremented the revision:
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-release default 3 2022-02-17 06:16:42.2293519 +0000 UTC deployed podinfo-6.0.3 6.0.3
And that the new release revision applied our change:
$ helm diff revision my-release 2 3
env:
- name: PODINFO_UI_COLOR
- value: red
+ value: blue
image: ghcr.io/stefanprodan/podinfo:6.0.3
imagePullPolicy: IfNotPresent
Let's get visual:
$ kubectl -n default port-forward deploy/my-release-podinfo 8080:9898
Forwarding from 127.0.0.1:8080 -> 9898
Browse to http://localhost:8080
Er mah gerd, it's blue!
Let's pretend we're in incident management and want to use Helm rollback
💡 It's worth noting that a Flux HelmRelease retains Helm release metadata and Helm's ability to manage the release directly. There are various benefits to this, including the ability to continue using your favorite development tools that integrate with Helm releases (such as helm list, the helm diff plugin, etc). This is also helpful in production. For example, there are legitimate use cases for pausing GitOps operations and temporarily using the Helm CLI, such as incident management. Pausing and resuming GitOps reconciliation may be done on a per Custom Resource basis without affecting the others, for example a single HelmRelease:
$ flux suspend helmrelease my-release --namespace default
► suspending helmreleases my-release in default namespace
✔ helmreleases suspended
The Flux CLI has a handy flux get command, whose output includes whether or not reconciliation is suspended for a resource. Here we can see SUSPENDED is True:
$ flux get hr my-release --namespace default
NAME REVISION SUSPENDED READY MESSAGE
my-release 6.0.3 True True Release reconciliation succeeded
Let's roll back to red using the Helm CLI, to show that it works.
$ helm rollback my-release 2
Rollback was a success! Happy Helming!
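The Helm release history is intact as usual, and the rollback shows up as its own new revision (timestamps will differ):
$ helm history my-release -n default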
We can port forward again to see that it worked:
$ kubectl -n default port-forward deploy/my-release-podinfo 8080:9898
Forwarding from 127.0.0.1:8080 -> 9898
OK yay, back to red! π
Once we're finished with our incident management window and want to resume GitOps reconciliation on that resource, we just need to resume it:
$ flux resume helmrelease my-release --namespace default
► resuming helmreleases my-release in default namespace
✔ helmreleases resumed
◎ waiting for HelmRelease reconciliation
✔ HelmRelease reconciliation completed
✔ applied revision 6.0.3
We can see SUSPENDED is False, which means reconciliation has resumed:
$ flux get hr my-release --namespace default
NAME REVISION SUSPENDED READY MESSAGE
my-release 6.0.3 False True Release reconciliation succeeded
Port forward again, and take a look.
Back to blue as planned.
kind delete cluster
(That's for the local kind variant of this demo; on OKE, delete the demo cluster from the OCI Console instead.)
And if you wish, feel free to delete your demo GitHub repo.
💡 Or not! These commands are idempotent, so you can feel free to keep your repo. In fact… let's try it!
Want to see how Flux handles your Helm release in a disaster recovery scenario?
Let's simulate total cluster failure by just deleting it 😵:
kind delete cluster
We can create a new one by repeating step 4 (recreating the cluster and access; kind create cluster for the local variant). Then we just need to install Flux components into the new cluster by repeating the flux bootstrap command from step 5.
💡 Because we still have our desired state defined in the Git repo we specified in flux bootstrap, reconciliation will happen automatically. Our Helm release should now match what we've defined in Git, as the source of truth!
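If you want to watch the recovery happen live, flux get takes a --watch flag; something like this streams status as the source, Kustomization, and Helm release come back:
$ flux get kustomizations --watch
$ flux get helmreleases -n default --watch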
💡 You'll notice the Helm metadata revision is back to 1, because that is only useful as in-cluster storage. New cluster, revisions start anew.
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-release default 1 2022-02-17 06:56:15.5594991 +0000 UTC deployed podinfo-6.0.3 6.0.3
And there we have it!
- On our demo cluster (OKE here; a local kind cluster in the original demo), we simulated an existing Helm release using the Helm CLI you're already familiar with (helm install)
- We used Flux CLI to bootstrap Flux components into the cluster, and simultaneously define and create (if it didn't already exist) a properly formatted Git repo containing the bootstrap manifests
- Used Flux CLI to easily create Custom Resources for the Helm repo and release, along with our existing release's custom values
- Pushed the files to Git, and saw the Flux labels showing it had taken ownership of managing the existing Helm release
- Proved this by making changes in Git only, and watched Flux magically update the Helm release from Git
- Showed how to pause and resume the automated continuous reconciliation of a single HelmRelease, which you might use during in-cluster development or incident management
- Simulated disaster recovery of the Helm release by deleting the entire cluster. Bootstrapping Flux again was all we needed to get the system and Helm-released apps running again