# Source: https://gist.github.com/28e2adb5946ca366d7845780608591d7
###########################################################
# Argo Workflows & Pipelines                              #
# CI/CD, Machine Learning, and Other Kubernetes Workflows #
# https://youtu.be/UMaivwrAyTA                            #
###########################################################
# Referenced videos:
# - Argo CD - Applying GitOps Principles To Manage Production Environment In Kubernetes: https://youtu.be/vpWQeoaiRM4
# - Argo Events - Event-Based Dependency Manager for Kubernetes: https://youtu.be/sUPkGChvD54
# - Argo Rollouts - Canary Deployments Made Easy In Kubernetes: https://youtu.be/84Ky0aPbHvY
# - Kaniko - Building Container Images In Kubernetes Without Docker: https://youtu.be/EgwVQN6GNJg
#########
# Setup #
#########
# It can be any Kubernetes cluster
minikube start
minikube addons enable ingress
git clone https://github.com/vfarcic/argocd-production.git
cd argocd-production
export REGISTRY_SERVER=https://index.docker.io/v1/
# Replace `[...]` with the registry username
export REGISTRY_USER=[...]
# Replace `[...]` with the registry password
export REGISTRY_PASS=[...]
# Replace `[...]` with the registry email
export REGISTRY_EMAIL=[...]
kubectl create namespace workflows
kubectl --namespace workflows \
    create secret \
    docker-registry regcred \
    --docker-server=$REGISTRY_SERVER \
    --docker-username=$REGISTRY_USER \
    --docker-password=$REGISTRY_PASS \
    --docker-email=$REGISTRY_EMAIL
# If NOT using minikube, change the value to whatever is the address in your cluster
export ARGO_WORKFLOWS_HOST=argo-workflows.$(minikube ip).nip.io
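The nip.io host is just the cluster IP embedded in a DNS name, so the pattern can be sanity-checked without a cluster (the IP below is a typical minikube address, used only for illustration):

```shell
# Hypothetical minikube IP, for illustration only
MINIKUBE_IP=192.168.49.2
ARGO_WORKFLOWS_HOST=argo-workflows.$MINIKUBE_IP.nip.io
echo $ARGO_WORKFLOWS_HOST
# prints "argo-workflows.192.168.49.2.nip.io"
```

nip.io resolves any name of the form `anything.<IP>.nip.io` back to `<IP>`, which is why no DNS setup is needed for the Ingress host.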
cat argo-workflows/base/ingress_patch.json \
    | sed -e "s@acme.com@$ARGO_WORKFLOWS_HOST@g" \
    | tee argo-workflows/overlays/production/ingress_patch.json
kustomize build \
    argo-workflows/overlays/production \
    | kubectl apply --filename -
kubectl --namespace argo \
    rollout status \
    deployment argo-server \
    --watch
cd ..
#############
# Workflows #
#############
git clone \
    https://github.com/vfarcic/argo-workflows-demo.git
cd argo-workflows-demo
cat workflows/silly.yaml
cat workflows/parallel.yaml
cat workflows/dag.yaml
#############
# Templates #
#############
cat workflows/cd-mock.yaml
cat workflow-templates/container-image.yaml
kubectl --namespace workflows apply \
    --filename workflow-templates/container-image.yaml
kubectl --namespace workflows \
    get clusterworkflowtemplates
########################
# Submitting workflows #
########################
cat workflows/cd-mock.yaml \
    | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" \
    | tee workflows/cd-mock.yaml
argo --namespace workflows submit \
    workflows/cd-mock.yaml
argo --namespace workflows list
argo --namespace workflows \
    get @latest
argo --namespace workflows \
    logs @latest \
    --follow
open http://$ARGO_WORKFLOWS_HOST
kubectl --namespace workflows get pods
It correctly edits the file after removing the space before `vfarcic` and `$REGISTRY_USER`:
cat workflows/cd-mock.yaml \
    | sed -e "s@value:vfarcic@value:$REGISTRY_USER@g" \
    | tee workflows/cd-mock.yaml
Other than that, I get a new error when I submit the workflow:
argo --namespace workflows submit workflows/cd-mock.yaml
FATA[2021-05-24T13:07:58.685Z] Failed to submit workflow: templates.full.tasks.build-container-image template reference container-image.build-kaniko-git not found
This might be related to my Kustomize installation. I'm looking into it.
That's strange, since there is a space between `value:` and `vfarcic`. Take a look at the following commands and the output:
export REGISTRY_USER=xyz
cat workflows/cd-mock.yaml \
| sed -e "s@value: vfarcic@value: $REGISTRY_USER@g"
The output:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: toolkit-
labels:
workflows.argoproj.io/archive-strategy: "false"
spec:
entrypoint: full
serviceAccountName: workflow
volumes:
- name: kaniko-secret
secret:
secretName: regcred
items:
- key: .dockerconfigjson
path: config.json
templates:
- name: full
dag:
tasks:
- name: build-container-image
templateRef:
name: container-image
template: build-kaniko-git
clusterScope: true
arguments:
parameters:
- name: app_repo
        value: git://github.com/vfarcic/argo-workflows-demo
- name: container_image
value: xyz/devops-toolkit
- name: container_tag
value: "1.0.0"
- name: deploy-staging
template: echo
arguments:
parameters:
- name: message
value: Deploying to the staging cluster...
dependencies:
- build-container-image
- name: tests
template: echo
arguments:
parameters:
- name: message
value: Running integration tests (before, during, and after the deployment is finished)...
dependencies:
- build-container-image
- name: deploy-production
template: echo
arguments:
parameters:
- name: message
value: Deploying to the production cluster...
dependencies:
- tests
- name: echo
inputs:
parameters:
- name: message
container:
image: alpine
command: [echo]
args:
- "{{inputs.parameters.message}}"
You can see that the output now contains `value: xyz/devops-toolkit` instead of `value: vfarcic/devops-toolkit`.
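The substitution is easy to confirm in isolation; a minimal sketch with a one-line stand-in for the manifest:

```shell
# One-line stand-in for the manifest (hypothetical input)
echo "        value: vfarcic/devops-toolkit" \
    | sed -e "s@value: vfarcic@value: xyz@g"
# prints "        value: xyz/devops-toolkit"
```

Using `@` as the sed delimiter avoids having to escape the `/` characters inside the image path.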
I did not manage to complete the tutorial.
For what it's worth:
cat workflows/cd-mock.yaml \
    | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g"
is working correctly.
While
cat workflows/cd-mock.yaml \
| sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" \
| tee workflows/cd-mock.yaml
is deleting the contents of the file.
If the first command works, the second should work as well since it is piping the output to tee
that writes it into the specified file (which happens to be the same one).
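That said, piping output back into the very file being read is a race: the shell may open (and truncate) the tee target before cat has finished reading it, which matches the "deleted contents" symptom. A safer sketch, under the assumption that the goal is an in-place edit, writes to a temporary file first (the demo file name below is made up):

```shell
# Demonstrate the safe pattern: write the edited content to a temp file,
# then move it over the original, instead of tee-ing back into the file
# that is still being read.
printf 'value: vfarcic\n' > cd-mock-demo.yaml
sed -e "s@value: vfarcic@value: xyz@g" cd-mock-demo.yaml > cd-mock-demo.yaml.tmp \
    && mv cd-mock-demo.yaml.tmp cd-mock-demo.yaml
cat cd-mock-demo.yaml
# prints "value: xyz"
rm cd-mock-demo.yaml
```

GNU sed's `-i` flag implements the same temp-file-then-rename approach internally.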
Would it help if we do a screen-sharing session and take a look at it together? If that sounds good, please pick any time that suits you from https://calendly.com/vfarcic/meet.
Thanks very much for your availability 😃. I went ahead and almost completed it, so hopefully I won't take up much more of your time.
First of all, the following command worked as expected when I used another terminal (my bad):
cat workflows/cd-mock.yaml | sed -e "s@value: vfarcic@value: $REGISTRY_USER@g" | tee workflows/cdock.yaml
Apart from that, building with Kustomize generates the following error (which seems related to kubernetes-sigs/kustomize#2538):
kustomize build argo-workflows/overlays/production | kubectl apply --filename -
Error: accumulating resources: accumulation err='accumulating resources from '../../base': '/home/vagrant/argocd-production/argo-workflows/base' must resolve to a file': recursed accumulation of path '/home/vagrant/argocd-production/argo-workflows/base': accumulating resources: accumulation err='accumulating resources from 'github.com/argoproj/argo/manifests/base': evalsymlink failure on '/home/vagrant/argocd-production/argo-workflows/base/github.com/argoproj/argo/manifests/base' : lstat /home/vagrant/argocd-production/argo-workflows/base/github.com: no such file or directory': git cmd = '/snap/kustomize/28/usr/bin/git init': exit status 1
Another approach: I used the -k flag of kubectl for building (since Kustomize is now integrated into kubectl):
kubectl apply -k argo-workflows/overlays/production/
For this to work, one must first create the namespace with the --save-config flag, like this:
kubectl create namespace workflows --save-config
Then I followed the next steps with success.
I'm not using kubectl apply -k because it bundles a very old version of Kustomize, with no sign that it'll ever be updated. You could also try upgrading Kustomize; I'm currently using 4+.
There should be no need to create the workflows Namespace separately. You can see that https://github.com/vfarcic/argocd-production/blob/master/argo-workflows/overlays/workflows/kustomization.yaml has namespace.yaml as one of the resources. That file (https://github.com/vfarcic/argocd-production/blob/master/argo-workflows/overlays/workflows/namespace.yaml) is the manifest that defines the workflows Namespace.
I followed the steps exactly; is there any reason that open http://$ARGO_WORKFLOWS_HOST renders 502 Bad Gateway?
That can indicate Ingress not being installed, the application not running, and quite a few other server-side issues. Can you run curl -i http://$ARGO_WORKFLOWS_HOST and paste the output?
@vfarcic I figured it out. According to the Argo Workflows documentation, when creating an NGINX Ingress, nginx.ingress.kubernetes.io/backend-protocol: https needs to be added to the annotations.
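The annotation in question looks roughly like this (a sketch only; the resource name and namespace are assumptions, not values taken from the repository):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server            # assumed name
  namespace: argo
  annotations:
    # Tells the NGINX Ingress controller to talk HTTPS to the backend,
    # since argo-server serves TLS by default.
    nginx.ingress.kubernetes.io/backend-protocol: "https"
```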
Also, a client token is now needed to access argo-server. I use kubectl -n argo exec (argo-server pod name) -- argo auth token to generate the token.
Hi @vfarcic, thanks for posting this. I have one issue where I get:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "namespace/username/test-repo:1.0.0": POST https://index.docker.io/v2/namespace/username/test-repo/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:namespace/username/test-repo Type:repository] map[Action:push Class: Name:namespace/username/test-repo Type:repository]]
I'm using Oracle Container Registry, where the username consists of a namespace and a username. I did set up my regcred correctly, though I can't seem to get it working with the template.
I have this value in the template
- name: container_image
value: namespace/username/test-repo
but it says https://index.docker.io/v2 instead of syd.ocir.io, which is the registry in the regcred secret.
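One likely explanation (an assumption, not confirmed in this thread): kaniko derives the target registry from the image reference itself and defaults to Docker Hub when no registry host is given, which would produce the index.docker.io URL in that error. A sketch of the fix, reusing this thread's example values:

```shell
# Include the registry host in the image reference so kaniko pushes to
# OCIR rather than defaulting to Docker Hub (values are illustrative).
CONTAINER_IMAGE=syd.ocir.io/namespace/username/test-repo
echo "--destination=$CONTAINER_IMAGE:1.0.0"
# prints "--destination=syd.ocir.io/namespace/username/test-repo:1.0.0"
```

The regcred secret only supplies credentials; it does not change which registry the image reference resolves to.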
@MazenElzanaty Can you confirm that the secret was created in the same namespace where the Workflow build is running?
@vfarcic Yes. Actually, I think the issue is with kaniko itself.
Hi @vfarcic, I got this error when I submitted the workflow to Argo:
toolkit-v4l9s-2218966670: Enumerating objects: 350, done.
Counting objects: 100% (68/68), done.
Compressing objects: 100% (54/54), done.
toolkit-v4l9s-2218966670: Total 350 (delta 28), reused 50 (delta 13), pack-reused 282
toolkit-v4l9s-2218966670: kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue
I already retried everything (deleting the workflows namespace and re-adding it), but it still doesn't work. Can you help me with this? Thanks! :)
@theodoreandrew I heard a similar complaint a week ago and, if I remember correctly, it was reproducible on Docker Desktop Kubernetes. Where are you running it?
@vfarcic Oh, I also use Docker Desktop. I am using minikube as the VM. I am not sure if that's what you are asking, since I am also a bit new to k8s.
Can you try it on, let's say, Rancher Desktop? I've been using it exclusively for a while now (approx. 6 months) and haven't seen any issues with it. Also, it's been working fine in "real" clusters like, for example, GKE and EKS.
If Rancher Desktop is not an option for you (even though I highly recommend it; watch https://youtu.be/evWPib0iNgY), I'll do my best to install whatever you're using and try to reproduce it. In that case, please let me know whether you're using minikube or Docker Desktop. If it's minikube, please let me know which driver you're using (if it's the default one, you should see it in the output of minikube start).
@vfarcic I just ran minikube start and saw that it is using the docker driver.
That (minikube with Docker) is the combination others have been complaining about. The workaround is to add the --force argument, at least until the "real" fix is done (if ever).
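The flag belongs on the kaniko executor invocation inside the build template; a hedged sketch of what that command line might look like (the context and destination values are this tutorial's examples, and the exact template arguments are assumptions):

```shell
# Illustrative kaniko invocation with the --force workaround appended.
# This only prints the command rather than running the executor, which
# must run inside a container.
echo /kaniko/executor \
    --context=git://github.com/vfarcic/argo-workflows-demo \
    --destination=xyz/devops-toolkit:1.0.0 \
    --force
```

--force suppresses kaniko's "should only be run inside of a container" check, which misfires in some environments such as minikube with the docker driver.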
Independently of that issue, I strongly recommend switching to Rancher Desktop as a local Kubernetes cluster.
@vfarcic Would you please help me fix this issue? error: no kind "Workflow" is registered for version "argoproj.io/v1alpha1" in scheme "pkg/scheme/scheme.go:28"
Where did you observe that error?
@vfarcic I saw it in the logs of the workflow pod that is created by sensors.
❯ kubectl get workflow -n argo
NAME              STATUS      AGE
node-test-4cfkz   Succeeded   17h
kubectl logs workflow/node-test-4cfkz -n argo
error: no kind "Workflow" is registered for version "argoproj.io/v1alpha1" in scheme "pkg/scheme/scheme.go:28"
I haven't experienced that error myself. I'll do my best to reproduce it and, if I do, figure out what to do. However, I'm traveling with limited available time so I can't confirm when I'll get to it.
@vfarcic When I try to view the logs of the CRD, I see a similar error. Would you please check?
Sorry for not responding earlier. I was (and still am) traveling with little to no free time. I'll do my best to double-check it soon.
@vfarcic Thanks for the response. Enjoy the trip!
I followed the steps exactly; is there any reason that open http://$ARGO_WORKFLOWS_HOST renders 502 Bad Gateway?
To avoid it, in the argo-server Ingress set the path to "/argo(/|$)(.*)" rather than the default "/" (root). See https://argoproj.github.io/argo-workflows/argo-server/#ingress for reference.
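With a regex path like that, a rewrite-target annotation is normally needed as well so the captured suffix is forwarded to the backend; a sketch of the relevant Ingress fragment (only the path-related fields are shown, and the surrounding resource is assumed):

```yaml
metadata:
  annotations:
    # Forward the second capture group of the path regex to argo-server
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /argo(/|$)(.*)
            pathType: ImplementationSpecific
```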
I forgot to add the command that would change the registry used from vfarcic to whatever your user is. I just added https://gist.github.com/vfarcic/28e2adb5946ca366d7845780608591d7#file-57-argo-workflows-sh-L100. That should fix the problem. Can you try it out and let me know whether it worked?