Manual deployment of the Monitoring Stack Operator on OpenShift 4.10.3

MSO Manual Deploy

The following steps have been tested on an OSD cluster running OCP 4.10.3.

Clone the MSO Repo

Clone the repo and cd into its directory.

git clone https://github.com/RHEcosystemAppEng/monitoring-stack-operator

cd monitoring-stack-operator

Build Image

Build and push the image. You can skip this step and use my personal image instead; if you push to your own registry, update the image reference in the mso-operator Deployment below.

docker build -t docker.io/cmwylie19/mso . -f build/Dockerfile;

docker push docker.io/cmwylie19/mso 
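
If you build and push to your own registry instead, a minimal sketch (the registry path below is a hypothetical placeholder, substitute your own):

# hypothetical registry/tag -- use the same value for the image field
# of the mso-operator Deployment later in this guide
docker build -t quay.io/<your-user>/mso . -f build/Dockerfile
docker push quay.io/<your-user>/mso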

Apply CRDs

This creates the AlertManagerConfigs, AlertManagers, PodMonitors, Probes, Prometheuses, PrometheusRules, ServiceMonitors, and ThanosRulers CRDs. (They already exist on the cluster, so expect the AlreadyExists errors shown below; this step is included for completeness.)

kubectl create -f deploy/crds/kubernetes

output

Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/alertmanagerconfigs.yaml": customresourcedefinitions.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/alertmanagers.yaml": customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/podmonitors.yaml": customresourcedefinitions.apiextensions.k8s.io "podmonitors.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/probes.yaml": customresourcedefinitions.apiextensions.k8s.io "probes.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/prometheuses.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/prometheusrules.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/servicemonitors.yaml": customresourcedefinitions.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" already exists
Error from server (AlreadyExists): error when creating "deploy/crds/kubernetes/thanosrulers.yaml": customresourcedefinitions.apiextensions.k8s.io "thanosrulers.monitoring.coreos.com" already exists
error validating "deploy/crds/kubernetes/kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
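
The AlreadyExists errors are expected since these prometheus-operator CRDs are already installed on the cluster, and the kustomization.yaml failure can be ignored because that file is a Kustomize config rather than a Kubernetes manifest. To double-check the CRDs are present:

# list the prometheus-operator CRDs referenced above
kubectl get crds | grep monitoring.coreos.com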

Apply the dependencies

The dependencies are the Prometheus Operator Deployment, ClusterRole, ClusterRoleBinding, and ServiceAccount.

kubectl create -f deploy/dependencies

output

clusterrolebinding.rbac.authorization.k8s.io/monitoring-stack-operator-prometheus-operator created
clusterrole.rbac.authorization.k8s.io/monitoring-stack-operator-prometheus-operator created
deployment.apps/monitoring-stack-operator-prometheus-operator created
serviceaccount/monitoring-stack-operator-prometheus-operator-sa created
error: unable to recognize "deploy/dependencies/kustomization.yaml": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
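
The Kustomization error can again be ignored. Before continuing, confirm the prometheus-operator Deployment from the output above is available (add -n <namespace> if it was created outside your current namespace):

# wait for the prometheus-operator rollout to complete
kubectl rollout status deployment/monitoring-stack-operator-prometheus-operator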

Apply Monitoring Stack CRDs

Create the monitoringstacks.monitoring.rhobs and thanosqueriers.monitoring.rhobs CRDs.

kubectl apply -f deploy/crds/common 

output

customresourcedefinition.apiextensions.k8s.io/monitoringstacks.monitoring.rhobs created
customresourcedefinition.apiextensions.k8s.io/thanosqueriers.monitoring.rhobs created
error: error validating "deploy/crds/common/kustomization.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
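
As before, the kustomization.yaml error is expected. Confirm the two new CRDs exist:

kubectl get crd monitoringstacks.monitoring.rhobs thanosqueriers.monitoring.rhobs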

Deploy the Operator Manifests

Now, deploy the Monitoring Stack Operator Deployment, ClusterRoleBinding, and ServiceAccount.

kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: mso-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: mso-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: mso-operator
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mso-operator
  name: mso-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mso-operator
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mso-operator
    spec:
      serviceAccountName: mso-operator
      containers:
      - image: cmwylie19/mso
        name: mso
        ports:
        - containerPort: 8080
        resources: {}
EOF

output

serviceaccount/mso-operator created
clusterrolebinding.rbac.authorization.k8s.io/mso-admin created
deployment.apps/mso-operator created
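
Before creating a MonitoringStack, it can help to confirm the operator pod is running and to watch its logs; a quick check using the app=mso-operator label from the Deployment above:

# check the operator pod and follow its logs
kubectl get pods -l app=mso-operator
kubectl logs deployment/mso-operator -f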

Deploy an instance of a MonitoringStack

Deploy a MonitoringStack custom resource named starburst.

kubectl create -f -<<EOF
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: starburst
spec:
  logLevel: debug # debug 
EOF

output

monitoringstack.monitoring.rhobs/starburst created
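
The operator should reconcile this MonitoringStack into a Prometheus instance; the prometheus-starburst-0 pod used later in this guide should appear shortly:

# the reconciled Prometheus pods are named after the MonitoringStack (starburst)
kubectl get pods | grep prometheus-starburst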

Deploy an app that produces metrics

This example app exposes Prometheus metrics on port 8080.

kubectl create -f -<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
EOF

output

deployment.apps/example-app created
service/example-app created
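
Optionally, confirm the app is actually serving metrics before wiring up the ServiceMonitor; this assumes fabxc/instrumented_app exposes a /metrics endpoint on port 8080:

# port-forward the example app and sample its metrics endpoint
kubectl port-forward svc/example-app 8080 &
sleep 2
curl -s http://localhost:8080/metrics | head
kill %1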

Deploy a Prometheus ServiceMonitor

We are deploying a ServiceMonitor that scrapes metrics from Services carrying the label app: example-app, i.e. the app we just deployed.

kubectl create -f -<<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
EOF

output

servicemonitor.monitoring.coreos.com/example-app created
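
Confirm the ServiceMonitor was created in the current namespace:

kubectl get servicemonitor example-app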

Interact with Prometheus

Port-forward the Prometheus instance to localhost:9090.

kubectl port-forward pod/prometheus-starburst-0 9090

Verify the targets are configured by opening the Prometheus targets page at http://localhost:9090/targets; you should see example-app listed.
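
If you prefer the command line to the UI, the same information is available from the Prometheus HTTP API while the port-forward is running (requires jq):

# list the scrape URLs of the currently active targets
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].scrapeUrl'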
