ycyr / config.yml
Created October 17, 2020 03:45
yace config
discovery:
  exportedTagsOnMetrics:
    ebs:
      - VolumeId
  jobs:
    - type: es
      regions:
        - us-east-1
      searchTags:
        - Key: type
ycyr / prometheus.yml
Created January 9, 2020 22:54
blackbox job in prometheus
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
ycyr / create_user_and_kubeconfig_rancher2.sh
Created November 14, 2019 22:18 — forked from superseb/create_user_and_kubeconfig_rancher2.sh
Create local user and generate kubeconfig in Rancher 2 via API
#!/bin/bash
RANCHERENDPOINT=https://your_rancher_endpoint/v3
# The name of the cluster where the user needs to be added
CLUSTERNAME=your_cluster_name
# Username, password and realname of the user
USERNAME=username
PASSWORD=password
REALNAME=myrealname
# Role of the user
GLOBALROLE=user
(
  NAMESPACE=your-rogue-namespace
  kubectl proxy &
  kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
  curl -k -H "Content-Type: application/json" \
    -X PUT --data-binary @temp.json \
    127.0.0.1:8001/k8s/clusters/c-XXXXX/api/v1/namespaces/$NAMESPACE/finalize
)
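The proxy-plus-curl dance can also be done with kubectl alone; a sketch assuming a recent kubectl (which supports `replace --raw`), jq on the PATH, and a kubeconfig pointing directly at the affected cluster (the namespace name is a placeholder):

```shell
# Same finalize call as above, but via kubectl's raw API access
# instead of a background kubectl proxy.
NAMESPACE=your-rogue-namespace
kubectl get namespace "$NAMESPACE" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f -
```

This avoids leaving a stray `kubectl proxy` running in the background and the temp.json file on disk.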
I was able to solve this same problem (or an apparently similar one) by patching a CRD that turned out to be the cause.
First I identified the offending CRD with:
$ kubectl get crd
NAME                                          CREATED AT
bgpconfigurations.crd.projectcalico.org       2018-10-24T14:06:47Z
clusterinformations.crd.projectcalico.org     2018-10-24T14:06:47Z
clusters.rook.io                              2019-02-05T08:36:08Z
felixconfigurations.crd.projectcalico.org     2018-10-24T14:06:47Z
globalnetworkpolicies.crd.projectcalico.org   2018-10-24T14:06:47Z
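The patch itself can be sketched as clearing the stuck CRD's finalizers so the deletion can complete; the CRD name below is a placeholder taken from the listing above, substitute whichever one is actually stuck:

```shell
# Hypothetical example: clear finalizers on a stuck CRD so it can be deleted.
# clusters.rook.io is a placeholder; use the offending CRD from `kubectl get crd`.
kubectl patch crd clusters.rook.io \
  --type merge -p '{"metadata":{"finalizers":[]}}'
```

Note that removing finalizers skips whatever cleanup the owning controller was supposed to do, so this is a last resort when the controller itself is already gone.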
ycyr / basic-ldap.py
Created April 16, 2019 20:23
old version basic-ldap.py
import ldap
from flask import current_app, jsonify, request
from flask_cors import cross_origin
from alerta.auth.utils import create_token, get_customers
from alerta.exceptions import ApiError
from alerta.models.permission import Permission
from alerta.models.user import User
from alerta.utils.audit import auth_audit_trail
ycyr / grafana-exporter.sh
Created April 9, 2019 19:05
grafana-exporter
#!/bin/bash
#
# add the "-x" option to the shebang line if you want a more verbose output
#
# set some colors for status OK, FAIL and titles
SETCOLOR_SUCCESS="echo -en \\033[0;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_NORMAL="echo -en \\033[0;39m"
SETCOLOR_TITLE_PURPLE="echo -en \\033[0;35m" # purple
ycyr / grafana-dashboard-exporter
Created April 9, 2019 19:04 — forked from crisidev/grafana-dashboard-exporter
Command to export all grafana 2 dashboard to JSON using curl
KEY=XXXXXXXXXXXX
HOST="https://metrics.crisidev.org"
mkdir -p dashboards && for dash in $(curl -k -H "Authorization: Bearer $KEY" $HOST/api/search\?query\=\& | tr ']' '\n' | cut -d "," -f 5 | grep slug | cut -d\" -f 4); do
  curl -k -H "Authorization: Bearer $KEY" $HOST/api/dashboards/db/$dash > dashboards/$dash.json
done
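The tr/cut/grep chain above breaks if the API ever reorders the JSON fields, since it counts commas rather than looking at field names. A slightly more targeted sketch that pulls the `db/<slug>` URIs directly, still with plain text tools and no jq (the `uri` field shape follows the Grafana 2 search API):

```shell
# Extract dashboard slugs from a Grafana 2 /api/search JSON response.
# Matches the "uri":"db/<slug>" fields directly instead of counting commas,
# so field order in the response no longer matters.
slugs_from_search() {
  grep -o '"uri":"db/[^"]*"' | cut -d/ -f2 | tr -d '"'
}

# Sample input shaped like a Grafana 2 search response:
printf '[{"id":1,"title":"Node","uri":"db/node-stats"},{"id":2,"title":"Alerts","uri":"db/alerts"}]' \
  | slugs_from_search
# prints:
#   node-stats
#   alerts
```

The original download loop works unchanged on top of this: `for dash in $(curl ... $HOST/api/search | slugs_from_search); do ...; done`.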
/<profileCheckoutResponse xmlns=\"www\.optimalpayments\.com/checkout\">(?:\n.+?)+<decision>(?P<decision>\w+)</decision>$\n\s+<code>(?P<code>\d+)</code>(?:\n.+?)+<paymentMethod>(?P<paymentMethod>\w+)</paymentMethod>(?:\n.*?)+</profileCheckoutResponse>/m
ycyr / docker-compose.yml
Created February 18, 2019 05:30
docker-compose alerta
version: '3.1'

services:
  web:
    build: .
    container_name: alerta-web
    environment:
      - DATABASE_URL=mongodb://db:27017/monitoring
    ports:
      - 8080:8080
    depends_on: