- Build Project
- Deploy Monitoring Stack Operator
- Deploy an app that produces metrics
- Deploy a prometheus ServiceMonitor
- Deploy another metrics app
- Interact with Prometheus/Grafana
- Cleanup
Create the kind cluster locally:

```sh
kind create cluster --image kindest/node:v1.22.4
```
Deploy the custom resource definitions:

```sh
kubectl create -f deploy/crds/kubernetes
```
Build and install the Operator SDK:

```sh
make $(pwd)/tmp/bin/operator-sdk
./tmp/bin/operator-sdk olm install
```
Install the other dependencies (Prometheus, Grafana):

```sh
kubectl apply -f deploy/dependencies
```
Run the operator locally:

```sh
go run cmd/operator/main.go
```
NOTE: when there are changes in `pkg/apis/`, run `make generate` to regenerate the CRDs.
Theoretically, we would probably have one instance of the monitoring stack per ISV. There are many configuration options for a deployment; they are documented in `/docs/api.md`.
Deploy an instance with a `remoteWrite` target in the `prometheusConfig`:

```sh
kubectl apply -f -<<EOF
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: starburst
spec:
  logLevel: debug
  prometheusConfig:
    remoteWrite:
    - url: http://gmail.com
EOF
```
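Before moving on, it can help to confirm that the operator reconciled the stack and brought up its Prometheus pods. A minimal sketch, assuming the CRD is served under the resource name `monitoringstack` and that the operator names the pods `prometheus-starburst-*` (the same pod name is used below when port-forwarding):

```shell
# The stack resource itself should exist and report status
kubectl get monitoringstack starburst

# Wait for the first Prometheus replica to become Ready
kubectl wait --for=condition=Ready pod/prometheus-starburst-0 --timeout=120s
```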
This app generates metrics:

```sh
kubectl apply -f -<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
EOF
```
Deploy a ServiceMonitor that scrapes Services carrying the label `app: example-app`, i.e. the app we just deployed:

```sh
kubectl create -f -<<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
EOF
```
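Before relying on the ServiceMonitor, you can sanity-check that the example app really serves Prometheus metrics. A quick sketch using a temporary port-forward (local port 8080 is an arbitrary choice):

```shell
# Forward the example-app Service to localhost in the background
kubectl port-forward svc/example-app 8080:8080 &
PF_PID=$!
sleep 2

# The instrumented app should expose a Prometheus-format /metrics endpoint
curl -s http://localhost:8080/metrics | head

kill "$PF_PID"
```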
Deploy this custom app so that we can scrape metrics from it. The app records `http_requests_total`, `response_status`, and `http_response_time_seconds`:

```sh
kubectl create -f -<<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: blue
    version: v1
  name: blue
  namespace: default
spec:
  ports:
  - port: 9000
    name: http
    nodePort: 32002
  selector:
    app: blue
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: blue
    version: v1
  name: blue
  namespace: default
spec:
  selector:
    matchLabels:
      app: blue
      version: v1
  replicas: 1
  template:
    metadata:
      labels:
        app: blue
        version: v1
    spec:
      serviceAccountName: blue
      containers:
      - image: docker.io/cmwylie19/go-metrics-ex
        name: blue
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          successThreshold: 1
          periodSeconds: 10
          httpGet:
            path: /
            port: 9000
        livenessProbe:
          initialDelaySeconds: 10
          periodSeconds: 10
          httpGet:
            path: /
            port: 9000
        ports:
        - containerPort: 9000
          name: http
        imagePullPolicy: Always
      restartPolicy: Always
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: blue
EOF
```
Let's configure Prometheus to scrape this app by deploying another ServiceMonitor, this one selecting Services with the label `app: blue`:

```sh
kubectl create -f -<<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: blue-app
  labels:
    app: blue
spec:
  selector:
    matchLabels:
      app: blue # service labels to match
  endpoints:
  - port: http # name of the service port
EOF
```
Pull up Grafana:

```sh
kubectl port-forward deploy/grafana-deployment -n monitoring-stack-operator 3000
```

Access the Grafana dashboard at http://localhost:3000.
Port-forward Prometheus:

```sh
kubectl port-forward pod/prometheus-starburst-0 9090
```

Verify the targets are configured by opening the Prometheus targets page at http://localhost:9090/targets; you should see `blue-app` and `example-app`.
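With the Prometheus port-forward still running, the blue app's counters can also be queried over the HTTP API instead of the UI. A sketch using the metric names listed above:

```shell
# Instant value of the request counter recorded by the blue app
curl -s http://localhost:9090/api/v1/query --data-urlencode 'query=http_requests_total'

# Per-second request rate over the last five minutes
curl -s http://localhost:9090/api/v1/query --data-urlencode 'query=rate(http_requests_total[5m])'
```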
Cleanup:

```sh
kind delete cluster --name kind
kind delete cluster --name kind-kind
docker system prune -a -f
```