@prabhakaran-jm
Last active February 9, 2025 17:00
ARGO_SETUP
ARGO CD Installation Guide
Prerequisites:
1. Basic understanding of Docker, Kubernetes, and CLI.
2. Access to a computer with an internet connection.
3. A running Kubernetes cluster.
Deploy Argo CD to Kubernetes
Create a namespace for Argo:
kubectl create namespace argocd
Deploy ArgoCD using the quick start manifest:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Verify that Argo CD is installed by running the following command:
kubectl get pods -n argocd
Login via UI:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Optional: To start the Argo CD UI port-forward automatically when you log in to your Mac (macOS only), create a launchd agent:
cat <<EOF > ~/Library/LaunchAgents/com.example.argocd-port-forward.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.example.argocd-port-forward</string>
<key>ProgramArguments</key>
<array>
<string>/opt/homebrew/bin/kubectl</string>
<string>port-forward</string>
<string>svc/argocd-server</string>
<string>-n</string>
<string>argocd</string>
<string>8080:443</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/tmp/argocd-port-forward.out</string>
<key>StandardErrorPath</key>
<string>/tmp/argocd-port-forward.err</string>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.example.argocd-port-forward.plist
launchctl list | grep com.example.argocd-port-forward
Argo CD username is admin and the password is stored as a Kubernetes secret which can be retrieved with the following command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
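The secret stores the password base64-encoded, which is why the command above pipes through base64 -d. A quick standalone illustration of that round trip (with a made-up value, not a real password):

```shell
# Encode a sample value the way Kubernetes stores Secret data,
# then decode it the way the retrieval command above does.
encoded=$(printf 'my-sample-password' | base64)
printf '%s' "$encoded" | base64 -d; echo
# prints: my-sample-password
```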
Login via CLI:
brew install argocd
argocd login localhost:8080
Optional: After deploying the example app in the UI, you can auto-start its port-forward the same way (adjust the service name and ports to match your app):
cat <<EOF > ~/Library/LaunchAgents/com.user.kubectl-port-forward.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.user.kubectl-port-forward</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/kubectl</string>
<string>port-forward</string>
<string>svc/argocd-example-app-service</string>
<string>9090:3000</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/tmp/kubectl-port-forward.out</string>
<key>StandardErrorPath</key>
<string>/tmp/kubectl-port-forward.err</string>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.user.kubectl-port-forward.plist
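The launch agents in this guide differ only in their label, kubectl path, and port-forward arguments. A small helper (a sketch; the function name is mine) can generate them instead of copying the plist by hand:

```shell
# make_pf_plist LABEL KUBECTL_PATH ARG... -- print a launchd plist that
# keeps the given kubectl port-forward running (RunAtLoad + KeepAlive).
make_pf_plist() {
  local label="$1" kubectl_path="$2"; shift 2
  printf '<?xml version="1.0" encoding="UTF-8"?>\n'
  printf '<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">\n'
  printf '<plist version="1.0">\n<dict>\n'
  printf '  <key>Label</key>\n  <string>%s</string>\n' "$label"
  printf '  <key>ProgramArguments</key>\n  <array>\n'
  # printf repeats the format string for each remaining argument
  printf '    <string>%s</string>\n' "$kubectl_path" "$@"
  printf '  </array>\n'
  printf '  <key>RunAtLoad</key>\n  <true/>\n'
  printf '  <key>KeepAlive</key>\n  <true/>\n'
  printf '  <key>StandardOutPath</key>\n  <string>/tmp/%s.out</string>\n' "$label"
  printf '  <key>StandardErrorPath</key>\n  <string>/tmp/%s.err</string>\n' "$label"
  printf '</dict>\n</plist>\n'
}

# Example: regenerate the Argo CD agent from above (printed to stdout here;
# redirect into ~/Library/LaunchAgents/<label>.plist to install it).
make_pf_plist com.example.argocd-port-forward /opt/homebrew/bin/kubectl \
  port-forward svc/argocd-server -n argocd 8080:443
```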
Configure Authentication and Authorization
1. Create a new user
Open the ConfigMap for editing:
kubectl edit cm argocd-cm -n argocd
At the end of the file, add:
data:
  accounts.developer: login
The user "developer" has now been created. To log in, it needs a password, which you can set using the Argo CD CLI:
argocd account update-password --account developer --new-password Developer123
Open the ConfigMap for editing RBAC rules with this command:
kubectl edit cm argocd-rbac-cm -n argocd
At the end of the file add:
data:
  policy.csv: |
    p, role:synconly, applications, sync, */*, allow
    g, developer, role:synconly
  policy.default: "role:readonly"
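If you prefer not to edit ConfigMaps in place, the same RBAC settings can be kept in version control and applied with kubectl apply; a sketch of the full ConfigMap (Argo CD expects its ConfigMaps to carry the app.kubernetes.io/part-of: argocd label):

```yaml
# Declarative equivalent of the RBAC edit above (a sketch).
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  policy.csv: |
    p, role:synconly, applications, sync, */*, allow
    g, developer, role:synconly
  policy.default: "role:readonly"
```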
----------
Install Argo Workflows
Deploy Argo Workflows
Create a namespace for Argo Workflows:
kubectl create namespace argo
Deploy Argo Workflows using the quick start manifest:
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.6.2/install.yaml
Verify that Argo Workflows is installed by running the following command:
kubectl -n argo get pods
Accessing UI Dashboard
Patch argo-server authentication
kubectl patch deployment argo-server -n argo --type='json' -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": ["server", "--auth-mode=server"]}]'
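The inline JSON patch above replaces the server container's args so the UI skips login. If you would rather keep that change in a file (hypothetical filename), an equivalent strategic-merge patch might look like this, assuming the container in the install manifest is named argo-server:

```yaml
# argo-server-auth-patch.yaml (hypothetical filename); apply with:
#   kubectl -n argo patch deployment argo-server --patch-file argo-server-auth-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: argo-server   # container name assumed from the install manifest
          args: ["server", "--auth-mode=server"]
```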
Port forward the UI:
kubectl -n argo port-forward deployment/argo-server 2746:2746
cat <<EOF > ~/Library/LaunchAgents/com.user.kubectl-port-forward-argo.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.user.kubectl-port-forward-argo</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/kubectl</string>
<string>port-forward</string>
<string>-n</string>
<string>argo</string>
<string>deployment/argo-server</string>
<string>2746:2746</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/tmp/kubectl-port-forward-argo.out</string>
<key>StandardErrorPath</key>
<string>/tmp/kubectl-port-forward-argo.err</string>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.user.kubectl-port-forward-argo.plist
The UI can now be accessed at https://localhost:2746/
Install the Argo Workflows CLI
Add the Argo Tap:
brew tap argoproj/tap
Install the Argo Workflows CLI:
brew install argoproj/tap/argo
After installation, you can verify that the CLI is correctly installed by checking its version:
argo version
Optional: Enable shell auto-completion
source <(argo completion bash)
argo completion zsh > "${HOME}/.argo-completion.zsh"  # then source this file from your ~/.zshrc
Here is a list of the most common commands to operate with Argo Workflows:
argo list # list workflows
argo get <workflow-name> # display details about a workflow
argo submit <workflow-file.yaml> # submit a workflow
argo watch <workflow-name> # watch workflow progress
argo template list # list workflow templates
argo logs <workflow-name> # display logs for workflow pod(s)
Create an Argo WorkflowTemplate that runs a container. Run the following command to create a manifest called dag-workflow-template.yaml:
cat <<EOF > dag-workflow-template.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: echo-template
  namespace: argo
spec:
  serviceAccountName: argo
  templates:
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.7
        command: [echo, "{{inputs.parameters.message}}"]
EOF
Create a DAG workflow that uses the template and defines the relationships between tasks. Save this workflow in a file called dag-workflow.yaml:
cat <<EOF > dag-workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
  namespace: argo
spec:
  entrypoint: diamond
  templates:
    - name: diamond
      dag:
        tasks:
          - name: A
            templateRef:
              name: echo-template
              template: echo
            arguments:
              parameters:
                - name: message
                  value: "A"
          - name: B
            dependencies: [A]
            templateRef:
              name: echo-template
              template: echo
            arguments:
              parameters:
                - name: message
                  value: "B"
          - name: C
            dependencies: [A]
            templateRef:
              name: echo-template
              template: echo
            arguments:
              parameters:
                - name: message
                  value: "C"
          - name: D
            dependencies: [B, C]
            templateRef:
              name: echo-template
              template: echo
            arguments:
              parameters:
                - name: message
                  value: "D"
EOF
To deploy this Workflow successfully, we need to temporarily grant admin permissions to the argo service account (the one referenced by serviceAccountName above):
kubectl create rolebinding argo-admin --clusterrole=admin --serviceaccount=argo:argo -n argo
create the Workflow Template:
kubectl apply -f dag-workflow-template.yaml
submit the DAG workflow to your Kubernetes cluster:
kubectl create -f dag-workflow.yaml
Check whether the Workflow Template and Workflow resources have been created:
kubectl -n argo get workflowtemplates.argoproj.io
kubectl -n argo get workflow
Or, using the argo CLI (replace the dag-diamond- suffix below with your Workflow's generated name):
argo -n argo list
argo -n argo get dag-diamond-x4xmj
argo -n argo logs dag-diamond-x4xmj
Create a CI/CD Pipeline with Argo Workflow
create namespace:
kubectl create ns argo-workflows
create a specific service account for the CI/CD workflow:
kubectl -n argo-workflows create sa argo-workflow-sa
create a RoleBinding that binds the argo-workflow-role Role (defined in a later step) to this new service account:
cat <<EOF | kubectl -n argo-workflows apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-workflow-sa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-workflow-role
subjects:
  - kind: ServiceAccount
    name: argo-workflow-sa
    namespace: argo-workflows
EOF
Create workflow-ci.yaml
cat <<EOF > workflow-ci.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: python-app
spec:
  serviceAccountName: argo-workflow-sa
  entrypoint: python-app
  volumes:
    - name: test-volume
      hostPath:
        path: /path/to/tests
  templates:
    - name: python-app
      steps:
        - - name: build
            template: build
        - - name: test
            template: test
        - - name: deploy
            template: deploy
    - name: build
      container:
        image: python:3.11
        command: [python]
        args: ["-c", "print('build')"]
    - name: test
      container:
        image: python:3.11
        command: [python]
        args: ["-m", "unittest", "discover", "-s", "/app/tests"]
        volumeMounts:
          - name: test-volume
            mountPath: /app/tests
    - name: deploy
      container:
        image: python:3.11
        command: [python]
        args: ["-c", "print('deploy')"]
EOF
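Note the double-dash (`- -`) nesting under steps: each outer list item is a group, groups run sequentially, and steps within one group run in parallel. An illustrative sketch (lint is a hypothetical template name, not part of this pipeline):

```yaml
# Illustrative only: putting test and lint in the same group would run
# them in parallel after build completes.
steps:
  - - name: build
      template: build
  - - name: test
      template: test
    - name: lint
      template: lint
```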
create the argo-workflow-role Role referenced by the RoleBinding above:
cat <<EOF | kubectl -n argo-workflows apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-workflow-role
rules:
  - apiGroups:
      - "argoproj.io"
    resources:
      - workflows
      - workflowtaskresults
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
EOF
Deploy the workflow by running the following command:
kubectl -n argo-workflows apply -f workflow-ci.yaml
Use the argo get command to retrieve information about the workflow, including its status:
argo -n argo-workflows get python-app
View the logs:
argo -n argo-workflows logs python-app
---------------
ARGO ROLLOUTS:
Create a namespace for Argo Rollouts:
kubectl create namespace argo-rollouts
Deploy Argo Rollouts using the quick start manifest:
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/download/v1.6.4/install.yaml
Verify that Argo Rollouts is installed by running the following command:
kubectl get pods -n argo-rollouts
Install Rollouts kubectl Plugin
brew install argoproj/tap/kubectl-argo-rollouts
Verify that the kubectl plugin is installed correctly by running the following command:
kubectl argo rollouts version
UI Dashboard:
kubectl argo rollouts dashboard
Optional: (Run Argo Rollouts UI in background)
nohup kubectl argo rollouts dashboard > argo_dashboard.log 2>&1 &
Creating Blue-Green Deployments with Argo Rollouts
check for existing rollouts:
kubectl get rollout
create new rollout:
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-bluegreen
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-bluegreen
  template:
    metadata:
      labels:
        app: rollout-bluegreen
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
  strategy:
    blueGreen:
      activeService: rollout-bluegreen-active
      previewService: rollout-bluegreen-preview
      autoPromotionEnabled: false
EOF
Check whether the Rollout resource has been created.
kubectl get rollout
get status information of a specific rollout:
kubectl argo rollouts get ro rollout-bluegreen
Make sure that the two services referenced by the strategy exist:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rollout-bluegreen-active
  name: rollout-bluegreen-active
spec:
  ports:
    - name: "80"
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: rollout-bluegreen
  type: ClusterIP
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rollout-bluegreen-preview
  name: rollout-bluegreen-preview
spec:
  ports:
    - name: "80"
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: rollout-bluegreen
  type: ClusterIP
EOF
If we now check our rollout, we’ll see a Healthy status:
kubectl argo rollouts get ro rollout-bluegreen
Now perform a version upgrade using the blue-green method by adjusting the image to deploy argoproj/rollouts-demo:green instead of blue:
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-bluegreen
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-bluegreen
  template:
    metadata:
      labels:
        app: rollout-bluegreen
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:green
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
  strategy:
    blueGreen:
      activeService: rollout-bluegreen-active
      previewService: rollout-bluegreen-preview
      autoPromotionEnabled: false
EOF
kubectl argo rollouts get ro rollout-bluegreen
Let's investigate the rollout a little further and check replicasets:
kubectl get replicaset
Argo Rollouts created a second ReplicaSet, which is used to manage the two pod versions.
Let's promote the new version:
kubectl argo rollouts promote rollout-bluegreen
kubectl argo rollouts get ro rollout-bluegreen
kubectl describe svc rollout-bluegreen-active
Let’s assume we want to roll back from the new green to the old blue image:
kubectl argo rollouts undo rollout-bluegreen
kubectl argo rollouts get ro rollout-bluegreen
kubectl argo rollouts promote rollout-bluegreen
kubectl argo rollouts get ro rollout-bluegreen
kubectl describe svc rollout-bluegreen-active
Clean Up Resources
kubectl delete rollout rollout-bluegreen
kubectl delete svc rollout-bluegreen-active
kubectl delete svc rollout-bluegreen-preview
Transitioning to Argo Rollouts
create an NGINX deployment
kubectl create deploy nginx-deployment --image=nginx --replicas=3
Convert Deployment to Rollout
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: nginx-rollout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  workloadRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause:
            duration: 10s
EOF
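The single setWeight/pause pair above is the minimal case; production canaries usually ramp through several weights. An illustrative (hypothetical) steps sequence, using the pattern that an empty pause waits indefinitely for a manual promote:

```yaml
strategy:
  canary:
    steps:
      - setWeight: 20
      - pause:
          duration: 60s   # let metrics settle
      - setWeight: 50
      - pause: {}         # pause until manually promoted
      - setWeight: 100
```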
kubectl get ro,deploy,po
Scale Down Deployment
kubectl scale deployment/nginx-deployment --replicas=0
Clean Up Resources
kubectl delete rollout nginx-rollout
kubectl delete deployment nginx-deployment
------------------
ARGO EVENTS
Install and Use Argo Events
Because we will trigger a workflow, we first need to install Argo Workflows:
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.2/install.yaml
kubectl patch deployment argo-server -n argo --type='json' -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": ["server", "--auth-mode=server"]}]'
kubectl -n argo port-forward deployment/argo-server 2746:2746
Install Argo Events and the Needed Components
kubectl create namespace argo-events
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install.yaml
Apply a validating webhook for Argo Events:
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install-validating-webhook.yaml
To set up a native EventBus in the argo-events namespace (the EventBus handles event transport within Argo Events), apply this configuration:
kubectl -n argo-events apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/eventbus/native.yaml
Next, define an EventSource that listens for webhook events by applying the following configuration:
kubectl -n argo-events apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/event-sources/webhook.yaml
For the Sensor to properly interact with Kubernetes resources, apply the necessary RBAC policies:
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/master/examples/rbac/sensor-rbac.yaml
Similarly, apply RBAC policies for Workflows to ensure they have the necessary permissions in Kubernetes:
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/master/examples/rbac/workflow-rbac.yaml
Set up a Sensor to trigger workflows based on webhook events by applying this Sensor configuration:
kubectl -n argo-events apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/sensors/webhook.yaml
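For orientation, a Sensor connects an EventSource dependency to a trigger. The webhook example applied above is roughly of this shape (simplified and illustrative only; consult the applied manifest for the real spec):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  dependencies:
    - name: payload              # the event this Sensor waits for
      eventSourceName: webhook
      eventName: example
  triggers:
    - template:
        name: workflow-trigger
        k8s:                     # create a Kubernetes resource per event
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              # ...Workflow spec that consumes the event payload...
```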
Expose the event-source pod via port forwarding to consume requests over HTTP:
kubectl -n argo-events port-forward $(kubectl -n argo-events get pod -l eventsource-name=webhook -o name) 12000:12000 &
Finally, simulate an external event that triggers the workflow. Send a test webhook event to the Event Source with this curl command:
curl -d '{"message":"this is my first webhook"}' -H "Content-Type: application/json" -X POST "http://localhost:12000/example"
Use Apache Pulsar with Argo Events
Deploy Apache Pulsar in your cluster with:
kubectl -n argo-events apply -f https://raw.githubusercontent.com/lftraining/LFS256-code/main/argoevents/pulsar.yaml
Check the status of the Pulsar pod and note the name:
kubectl get pods -n argo-events
Port-forward the Pulsar pod to enable direct communication between your local machine and the Pulsar service running in the Kubernetes cluster (use the pod name noted above):
kubectl -n argo-events port-forward pulsar-5b757889bb-nmd9n 6650:6650
Set up the event source for Argo Events to listen to Pulsar messages. This configures Argo Events to connect and listen to Pulsar:
kubectl -n argo-events apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/event-sources/pulsar.yaml
Deploy the sensor for reacting to Pulsar events. This sets up the actions to be taken in response to events detected by Argo Events:
kubectl -n argo-events apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/sensors/pulsar.yaml
Now, everything is set up to trigger the event. To interact with the Pulsar pod, use:
kubectl -n argo-events exec -it pulsar-5b757889bb-nmd9n -- /bin/bash
Inside the Pulsar pod, navigate to the bin directory and send a test message:
cd bin
./pulsar-client produce test --messages "Test"
--------------------