All of the actions are executed against minikube.
- Configmap should be deployed 1st
- Secret should be deployed 2nd
- Docker registry secret should be deployed 3rd
- DB migration job needs to be deployed 4th
- Assets pre-processor job should be deployed 5th
- Deployment as the last one
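Helm executes hooks in ascending `hook-weight` order, so the ordering above is effectively a numeric sort over the weights. A throwaway sketch of that ordering (resource names mirror the list above; the weights match the annotations below):

```shell
# Helm sorts hooks by ascending "helm.sh/hook-weight"; the deployment
# order above is equivalent to a numeric sort over the assigned weights.
printf '%s\n' \
  '4 db-migration-job' \
  '0 configmap' \
  '2 docker-reg-secret' \
  '5 assets-preprocessor-job' \
  '1 secret' | sort -n
```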
- Configmap
annotations:
"helm.sh/hooks": pre-upgrade
"helm.sh/hook-weight": "0"
- Secret
annotations:
"helm.sh/hooks": pre-upgrade
"helm.sh/hook-weight": "1"
- Docker reg secret
annotations:
"helm.sh/hooks": pre-upgrade
"helm.sh/hook-weight": "2"
- DB migration job
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "4"
"helm.sh/hook-delete-policy": hook-succeeded
- Assets pre-processor job
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
$ helm lint .
By adding `--debug` and `--dry-run` to `helm upgrade`, helm will upload the templates to tiller, and tiller will render them without sending anything to kubernetes. The rendered templates are returned to the client's stdout. This is really useful to catch basic typos in templates.
$ helm upgrade \
--namespace feature \
--install \
--set version=0.102.0-feature \
--set inline.secret.mysql_password=**** \
--set inline.secret.redis_password=**** \
--set inline.secret.encryption_key=**** \
--set inline.secret.encryption_iv=**** \
--set inline.secret.jwt_secret=**** \
--set inline.secret.elasticsearch_password=**** \
--set inline.config.redis_cache_ttl=150 \
-f values.yaml \
-f values-feature.yaml \
--debug \
--dry-run \
feature .
Note: all secrets come from Jenkins credentials as inline params. Here we are deploying to the feature environment.
For the real run, `--dry-run` is dropped but `--debug` is still kept.
$ helm upgrade \
--namespace feature \
--install \
--set inline.secret.mysql_password=**** \
--set inline.secret.redis_password=**** \
--set inline.secret.encryption_key=**** \
--set inline.secret.encryption_iv=**** \
--set inline.secret.jwt_secret=**** \
--set inline.secret.elasticsearch_password=**** \
--set inline.config.redis_cache_ttl=150 \
-f values.yaml \
-f values-feature.yaml \
--debug \
feature .
Get tiller pod
$ kubectl get pods --all-namespaces | grep tiller
Get tiller pod logs
Usually I'll clear the terminal buffer a moment before executing `helm upgrade`.
$ kubectl logs tiller-deploy-c48485567-4g74s \
--namespace kube-system \
--follow
The first message confirming that all pre-upgrade hooks are going to be executed is:
[tiller] 2019/06/16 16:42:50 executing 5 pre-upgrade hooks for feature
After that, messages about the hooks, in the order defined by their `"helm.sh/hook-weight"` annotations, should start appearing in the log output:
[tiller] 2019/06/16 16:42:50 deleting pre-upgrade hook backend-env for release feature due to "before-hook-creation" policy
[kube] 2019/06/16 16:42:50 Starting delete for "backend-env" ConfigMap
[kube] 2019/06/16 16:42:50 building resources from manifest
[kube] 2019/06/16 16:42:50 creating 1 resource(s)
[kube] 2019/06/16 16:42:50 Watching for changes to ConfigMap backend-env with timeout of 5m0s
[kube] 2019/06/16 16:42:50 Add/Modify event for backend-env: ADDED
Similar output should appear for the Secret resources in 2nd and 3rd place.
Next the DB migration Job executes (the same output will appear for the assets pre-processing Job):
[kube] 2019/06/16 16:42:50 building resources from manifest
[kube] 2019/06/16 16:42:50 creating 1 resource(s)
[kube] 2019/06/16 16:42:50 Watching for changes to Job db-migration-0.102.0-feature-qjzypho9od with timeout of 5m0s
[kube] 2019/06/16 16:42:50 Add/Modify event for db-migration-0.102.0-feature-qjzypho9od: ADDED
[kube] 2019/06/16 16:42:50 db-migration-0.102.0-feature-qjzypho9od: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
[kube] 2019/06/16 16:42:50 Add/Modify event for db-migration-0.102.0-feature-qjzypho9od: MODIFIED
[kube] 2019/06/16 16:42:50 db-migration-0.102.0-feature-qjzypho9od: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
[kube] 2019/06/16 16:45:36 Add/Modify event for db-migration-0.102.0-feature-qjzypho9od: MODIFIED
[kube] 2019/06/16 16:45:36 building resources from manifest
All pre-upgrade hooks are executed:
[tiller] 2019/06/16 16:45:43 hooks complete for pre-upgrade feature
Cleanup of jobs after successful completion:
[tiller] 2019/06/16 16:45:43 deleting pre-upgrade hook db-migration-0.102.0-feature-qjzypho9od for release feature due to "hook-succeeded" policy
[kube] 2019/06/16 16:45:43 Starting delete for "db-migration-0.102.0-feature-qjzypho9od" Job
[tiller] 2019/06/16 16:45:43 deleting pre-upgrade hook css-build-0.102.0-feature-4aw6abobwa for release feature due to "hook-succeeded" policy
[kube] 2019/06/16 16:45:43 Starting delete for "css-build-0.102.0-feature-4aw6abobwa" Job
[kube] 2019/06/16 16:45:43 building resources from updated manifest
- Configmap and secrets aren't created, and jobs fail because they reference the configmap
- Chart upgrade fails because the configmap already exists
Configmap and secrets aren't created, and jobs fail because they reference the configmap.
In this case, the number in the message will likely not match what is expected:
[tiller] 2019/06/16 16:42:50 executing 5 pre-upgrade hooks for feature
In my case, it was 2 instead of 5.
It's good to know that helm does not validate values, so everything in the templates should be double-checked.
Annotations are a special case: they are free-form and aren't defined by any schema, so they can't be validated on the kubernetes side either.
I was a victim of this, as I copied/pasted a wrong hook definition from a bad template I inherited with the project.
Wrong (hooks, plural):
"helm.sh/hooks": pre-upgrade
Correct (hook, singular):
"helm.sh/hook": pre-upgrade
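Since helm silently ignores annotations it doesn't recognize, a quick `grep` over the chart's templates is the cheapest guard against this typo. The `demo-chart` layout and file contents below are invented purely to make the sketch self-contained:

```shell
# Seed a throwaway chart layout purely for demonstration (invented names).
mkdir -p demo-chart/templates
cat > demo-chart/templates/configmap.yaml <<'EOF'
metadata:
  annotations:
    "helm.sh/hooks": pre-upgrade
EOF
cat > demo-chart/templates/job.yaml <<'EOF'
metadata:
  annotations:
    "helm.sh/hook": pre-upgrade
EOF

# List files containing the invalid plural annotation; kubernetes accepts
# any free-form annotation, so grep is the fastest way to spot the typo.
grep -rl '"helm.sh/hooks"' demo-chart/templates/
```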
Chart upgrade fails as configmap already exists.
This is a tricky issue, as pre-upgrade hooks appear to create resources, not update them. At least that is the behavioral conclusion I've drawn, since there is no trace of this in the official docs.
After some research and experimenting here and there, I found that a hook delete policy solves the issue:
"helm.sh/hook-delete-policy": before-hook-creation
And it does, but this drives us to another architectural pattern:
EACH NAMESPACE SHOULD CONTAIN ALL RESOURCES REQUIRED BY JOBS / DEPLOYMENTS
If a resource from, e.g., the default namespace is used, this will not work.
Personally, I follow an explicit approach and keep each namespace as an independent package, so this works perfectly for me.
I know that some people like to keep certain secrets centralized, but in that case kubernetes custom operators will probably work better for them.
Here is one good project to help a bit with operators:
https://medium.com/flant-com/kubernetes-shell-operator-76c596b42f23
- Configmap
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "0"
"helm.sh/hook-delete-policy": before-hook-creation
- Secret
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": before-hook-creation
- Docker reg secret
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "2"
"helm.sh/hook-delete-policy": before-hook-creation
- DB migration job
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "4"
"helm.sh/hook-delete-policy": hook-succeeded
- Assets pre-processor job
annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
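In a template, these annotations sit under `metadata.annotations` of the resource. A minimal ConfigMap sketch matching the final listing (the `backend-env` name matches the log output earlier; the data key and values reference are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-env
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation
data:
  # Illustrative key; wired to the --set inline.config.* values above
  REDIS_CACHE_TTL: "{{ .Values.inline.config.redis_cache_ttl }}"
```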