The following steps have been tested on an OSD cluster running OCP 4.10.3.
- Clone the Monitoring Stack Operator repo
- (Optional) Build the image
- Apply CRDs
- Apply the dependencies
- Deploy the Operator Manifests
- Deploy a MonitoringStack instance
- Deploy an app that produces metrics
- Deploy a Prometheus ServiceMonitor
- Interact with Prometheus
Clone the repo and cd into its directory.
git clone https://github.com/RHEcosystemAppEng/monitoring-stack-operator
cd monitoring-stack-operator
Build and push the image. You can skip this step and use my personal image instead.
docker build -t docker.io/cmwylie19/mso . -f build/Dockerfile;
docker push docker.io/cmwylie19/mso
This creates the AlertManagerConfigs, AlertManagers, PodMonitors, Probes, Prometheuses, PrometheusRules, ServiceMonitors, and ThanosRulers CRDs. (On this cluster they already exist; the step is included for completeness.)
kubectl create -f deploy/crds/kubernetes
output
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/alertmanagerconfigs.yaml": customresourcedefinitions.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/alertmanagers.yaml": customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/podmonitors.yaml": customresourcedefinitions.apiextensions.k8s.io "podmonitors.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/probes.yaml": customresourcedefinitions.apiextensions.k8s.io "probes.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/prometheuses.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/prometheusrules.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/servicemonitors.yaml": customresourcedefinitions.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/thanosrulers.yaml": customresourcedefinitions.apiextensions.k8s.io "thanosrulers.monitoring.coreos.com" already exists
error validating "deploy/crds/kubernetes/kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
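The AlreadyExists errors above are therefore expected on an OSD/OCP cluster, where the monitoring.coreos.com CRDs already ship with the built-in monitoring stack, and the kustomization.yaml error is just kubectl trying to create that file as a regular resource; both can be ignored. A quick way to confirm the CRDs are present either way:
kubectl get crds | grep monitoring.coreos.com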
The dependencies are the Prometheus Operator Deployment, ClusterRole, ClusterRoleBinding, and ServiceAccount.
kubectl create -f deploy/dependencies
output
clusterrolebinding.rbac.authorization.k8s.io/monitoring-stack-operator-prometheus-operator created
clusterrole.rbac.authorization.k8s.io/monitoring-stack-operator-prometheus-operator created
deployment.apps/monitoring-stack-operator-prometheus-operator created
serviceaccount/monitoring-stack-operator-prometheus-operator-sa created
error: unable to recognize "deploy/dependencies/kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
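The Kustomization error can be ignored for the same reason as above. To confirm the Prometheus Operator deployment rolled out (the name comes from the output above):
kubectl rollout status deployment/monitoring-stack-operator-prometheus-operator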
Create the monitoringstacks.monitoring.rhobs and thanosqueriers.monitoring.rhobs CRDs.
kubectl apply -f deploy/crds/common
output
customresourcedefinition.apiextensions.k8s.io/monitoringstacks.monitoring.rhobs created
customresourcedefinition.apiextensions.k8s.io/thanosqueriers.monitoring.rhobs created
error: error validating "deploy/crds/common/kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
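To confirm the operator's own CRDs registered:
kubectl get crds monitoringstacks.monitoring.rhobs thanosqueriers.monitoring.rhobs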
Now, deploy the Monitoring Stack Operator Deployment, ClusterRoleBinding, and ServiceAccount.
kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: mso-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: mso-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: mso-operator
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mso-operator
  name: mso-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mso-operator
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mso-operator
    spec:
      serviceAccountName: mso-operator
      containers:
      - image: cmwylie19/mso
        name: mso
        ports:
        - containerPort: 8080
        resources: {}
EOF
output
serviceaccount/mso-operator created
clusterrolebinding.rbac.authorization.k8s.io/mso-admin created
deployment.apps/mso-operator created
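Before creating a stack, it is worth checking that the operator pod is running and its logs look clean (the label and deployment name come from the manifest above):
kubectl get pods -l app=mso-operator
kubectl logs deploy/mso-operator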
Deploy a MonitoringStack instance.
kubectl create -f - <<EOF
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: starburst
spec:
  logLevel: debug
EOF
output
monitoringstack.monitoring.rhobs/starburst created
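The operator should reconcile the MonitoringStack into a running Prometheus; the resulting pod, prometheus-starburst-0, is the one port-forwarded at the end of this walkthrough. To watch it come up:
kubectl get monitoringstacks
kubectl get pods -w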
Next, deploy an example app that generates metrics.
kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
EOF
output
deployment.apps/example-app created
service/example-app created
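Before wiring up Prometheus, you can check that the app actually serves metrics. This assumes the image exposes a plain-text /metrics endpoint on port 8080, which the upstream Prometheus Operator example app does:
kubectl port-forward svc/example-app 8080 &
curl -s http://localhost:8080/metrics | head
kill %1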
We are deploying a ServiceMonitor to scrape metrics from Services with the label app: example-app, i.e. the app we just deployed.
kubectl create -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
EOF
output
servicemonitor.monitoring.coreos.com/example-app created
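For the scrape to work, the ServiceMonitor's spec.selector.matchLabels must match the Service's labels and the endpoint port name (web) must match the Service's port name. If the target never appears, comparing the two objects is a good first check:
kubectl get svc example-app --show-labels
kubectl get servicemonitor example-app -o yaml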
Port-forward the Prometheus instance to localhost port 9090.
kubectl port-forward pod/prometheus-starburst-0 9090
Verify the target is configured by visiting the Prometheus targets page at http://localhost:9090/targets; you should see example-app.
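If you prefer the command line, the same check works against the standard Prometheus HTTP API through the port-forward (these endpoints are part of Prometheus itself, not this operator):
curl -s http://localhost:9090/api/v1/targets | grep example-app
curl -s 'http://localhost:9090/api/v1/query?query=up'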