
Prometheus-Operator

Pre-requisites

Install Helm

HELM_VER=v2.8.2
wget -q https://kubernetes-helm.storage.googleapis.com/helm-${HELM_VER}-linux-amd64.tar.gz
tar -zxvf helm-${HELM_VER}-linux-amd64.tar.gz
# the tarball extracts to linux-amd64/; install the client binary
sudo mv linux-amd64/helm /usr/local/bin/helm

When installing Tiller (the server-side component of Helm) in Kubernetes with the default CRI-O trust level of untrusted, i.e. using kata-runtime, the connection to the Tiller pod errors out as shown below.

E0511 00:04:03.578489   10313 portforward.go:331] an error occurred forwarding 44249 -> 44134: error forwarding port 44134 to pod dcb2b2ed780469470e4fe1ec085fa02efc492516a79b6b1ce6bb90997997fdc7, uid : exit status 1: 2018/05/11 00:04:03 socat[11733] E connect(5, AF=2 127.0.0.1:44134, 16): Connection refused
Error: could not ping Tiller: rpc error: code = Unavailable desc = transport is closing

The current work-around is to set the environment variable HELM_HOST to point to the Tiller service ClusterIP.
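A minimal sketch of this work-around, assuming Tiller was installed with the default tiller-deploy service in kube-system:

# Point the Helm client directly at the Tiller service ClusterIP (default port 44134),
# bypassing the failing kubectl port-forward.
export HELM_HOST=$(kubectl -n kube-system get svc tiller-deploy -o jsonpath='{.spec.clusterIP}'):44134
helm version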

The script below iterates through all the Helm charts available in the stable and incubator repos. For each chart, it installs it, runs any registered tests, and deletes the deployment.

# Install a chart, run its registered tests, then delete the release.
install-test-delete () {
    # Derive a release name from the chart name, e.g. stable/nginx -> stable-nginx
    name=$(echo "$1" | tr '/' '-')
    helm install "$1" --name "$name" --namespace helm-testing --wait --timeout 60
    helm test "$name" --timeout 60
    helm delete --purge "$name" --timeout 60
}
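A driver loop for this function is not shown in the gist; a rough sketch (the incubator repo URL and chart listing are assumptions) could look like:

# Add the incubator repo (stable is configured by default) and refresh the index.
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo update
# Feed every stable/ and incubator/ chart to install-test-delete.
for chart in $(helm search | awk 'NR>1 {print $1}' | grep -E '^(stable|incubator)/'); do
    install-test-delete "$chart"
done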
krsna1729 / label-node-incluster.yaml
Created October 8, 2018 16:15
add label to a node from within the cluster
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kata-label-node
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
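The rest of this manifest is truncated here. As a rough illustration of the intent, a pod running under the kata-label-node ServiceAccount could label its own node from within the cluster with something like the following (the label key is a placeholder, and NODE_NAME is assumed to be injected via the downward API):

# Hypothetical labeling command run from inside the pod.
kubectl label node "${NODE_NAME}" katacontainers.io/kata-runtime=true --overwrite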

On the external system, which is still on the same subnet as the k8s node, add a route to kube-dns via the k8s node IP (the master here):

sudo ip route add 10.96.0.10 via 10.54.81.161

Note: If they are not in the same L2 domain, you need to provide a routable IP for the kube-dns service ClusterIP and ensure the return route is handled from the cluster.

At this point, dig pointed explicitly at kube-dns should resolve a running service in your k8s cluster.
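For example, the built-in kubernetes service should resolve to its ClusterIP:

dig @10.96.0.10 kubernetes.default.svc.cluster.local +short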

Apply the YAML file below to create the sgw-s11 service with static endpoint IPs:

kubectl apply -f custom-dns.yaml
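The contents of custom-dns.yaml are not reproduced here. A minimal sketch of the pattern, applied inline, is a selector-less headless Service plus an Endpoints object carrying the static IPs (the address and port below are placeholders):

cat <<EOF | kubectl apply -f -
# Headless Service without a selector: DNS returns the Endpoints addresses.
apiVersion: v1
kind: Service
metadata:
  name: sgw-s11
spec:
  clusterIP: None
  ports:
  - port: 2123
    protocol: UDP
---
# Static endpoint IPs backing the service above (placeholder address).
apiVersion: v1
kind: Endpoints
metadata:
  name: sgw-s11
subsets:
- addresses:
  - ip: 10.54.81.200
  ports:
  - port: 2123
    protocol: UDP
EOF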

Verify DNS resolution

dig @10.96.0.10 sgw-s11.default.svc.cluster.local +short

Here we register two custom DNS services, sgw-s1u and dp-cpdp, for our dataplane with Consul.

Set necessary variables

# Instance ID taken from the hostname suffix, e.g. a StatefulSet ordinal.
IID=$(hostname | awk -F '-' '{print $NF}')
# IPs of the eth0 and s1u-net interfaces (strip the "addr:" prefix from netstat output).
ETH0_IP=$(netstat -ie | grep -A1 eth0 | tail -1 | awk '{print $2}' | tr -d 'addr:')
SGW_S1U_IP=$(netstat -ie | grep -A1 s1u-net | tail -1 | awk '{print $2}' | tr -d 'addr:')
controller='ngic-fi'
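The registration call itself is not shown above; a rough sketch using the Consul agent HTTP API (the local agent address and the service naming are assumptions) might be, with a similar call for dp-cpdp:

# Register the S1U address as a Consul service, using this instance ID.
curl -s -X PUT http://127.0.0.1:8500/v1/agent/service/register \
  -d '{"Name": "sgw-s1u", "ID": "sgw-s1u-'"${IID}"'", "Address": "'"${SGW_S1U_IP}"'"}'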

Pre-reqs

All nodes that will run Cilium must have kernel version 4.8 or above.
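To check the running kernel version on a node:

uname -r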

On every node in the cluster, mount the BPF filesystem:

sudo mount bpffs /sys/fs/bpf -t bpf
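To keep the mount across reboots, an fstab entry can be added (a common approach, not from the original notes):

echo "bpffs /sys/fs/bpf bpf defaults 0 0" | sudo tee -a /etc/fstab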
krsna1729 / Dockerfile
Last active January 13, 2019 23:42
BESS Multi-stage Dockerfile
# Multi-stage Dockerfile
# Stage bess-build: builds bess with its dependencies
FROM nefelinetworks/bess_build AS bess-build
ARG BESS_COMMIT=master
RUN apt-get update && apt-get install -y wget unzip ca-certificates git
RUN wget -qO bess.zip https://github.com/NetSys/bess/archive/${BESS_COMMIT}.zip && unzip bess.zip
WORKDIR bess-${BESS_COMMIT}
RUN ./build.py bess && cp bin/bessd /bin
RUN mkdir -p /opt/bess && cp -r bessctl pybess /opt/bess
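To build the image from this Dockerfile, optionally pinning BESS to a specific commit via the build argument it declares:

docker build --build-arg BESS_COMMIT=master -t bess:latest .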
Update the XL710 NIC firmware (NVM) with Intel's NVM update package:

wget https://downloadmirror.intel.com/25791/eng/XL710_NVMUpdatePackage_v6_01_Linux.tar.gz
tar xvzf XL710_NVMUpdatePackage_v6_01_Linux.tar.gz
cd XL710/Linux_x64/
./nvmupdate64e