Oleg Gorodnitchi (OlegGorj)
@OlegGorj
OlegGorj / decode.md
Created September 17, 2018 18:54
Decode value from Consul
curl -s 10.0.0.145:8500/v1/kv/my_key/my_other_key/this_is_the_key?dc=dc1 | jq -r '.[0].Value' | base64 --decode

Same with Python instead of jq, which is available on virtually all systems without extra installation:

curl -s 10.0.0.145:8500/v1/kv/my_key/my_other_key/this_is_the_key?dc=dc1 | python -c 'import json,sys; obj=json.load(sys.stdin); print(obj[0]["Value"])' | base64 --decode
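The same extraction can be written as a small Python 3 script. The sketch below simulates the Consul response locally instead of calling the HTTP API (the real call is a GET against `/v1/kv/<key>`; the key name and payload here are illustrative) — the point is that the `Value` field of a KV response is always base64-encoded:

```python
import base64
import json

# Simulated body of GET /v1/kv/my_key/my_other_key/this_is_the_key?dc=dc1
# (illustrative payload; Consul base64-encodes the stored value).
sample = json.dumps([{
    "Key": "my_key/my_other_key/this_is_the_key",
    "Value": base64.b64encode(b"secret-value").decode("ascii"),
}])

def decode_consul_value(body):
    """Extract and base64-decode the first entry's Value field."""
    obj = json.loads(body)
    return base64.b64decode(obj[0]["Value"]).decode("utf-8")

print(decode_consul_value(sample))  # -> secret-value
```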

Sharing Clusters

This example demonstrates how to access one Kubernetes cluster from another. It only works if both clusters are running on the same network, on a cloud provider that provides a private IP range per network (e.g. GCE, GKE, AWS).

Setup

Create a cluster in the US (you don't need to do this if you already have a running Kubernetes cluster):

$ cluster/kube-up.sh
OlegGorj / kubernetes-plugin-advanced.groovy
Created August 7, 2018 18:20 — forked from iocanel/kubernetes-plugin-advanced.groovy
An example of the Kubernetes plugin with multiple build containers.
// Let's define a unique label for this build.
def label = "buildpod.${env.JOB_NAME}.${env.BUILD_NUMBER}".replace('-', '_').replace('/', '_')
// Let's create a new pod template with jnlp and maven containers that uses that label.
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.6.3-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'jnlp', image: 'jenkinsci/jnlp-slave:alpine', command: '/usr/local/bin/start.sh', args: '${computer.jnlpmac} ${computer.name}', ttyEnabled: false)],
  volumes: [
    persistentVolumeClaim(mountPath: '/home/jenkins/.mvnrepo', claimName: 'jenkins-mvn-local-repo'),
OlegGorj / README.md
Created August 7, 2018 15:08
Scheduling kube-dns on dedicated node pool

Our K8s cluster running on GCP currently consists entirely of preemptible nodes. We're experiencing issues where kube-dns becomes unavailable (presumably because a node has been preempted). We'd like to improve the resilience of DNS by moving the kube-dns pods to more stable nodes.

Objective:

Schedule cluster-critical system pods such as kube-dns (or all pods in the kube-system namespace) on a node pool consisting only of non-preemptible nodes.

The solution was to use taints and tolerations in conjunction with node affinity: we created a second, non-preemptible node pool and added a taint to the preemptible pool.
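A sketch of the approach, assuming an illustrative taint key `dedicated=preemptible` applied to the preemptible pool. Workloads that are allowed on preemptible nodes get the toleration below; kube-dns, lacking it, can only be scheduled onto the stable pool:

```yaml
# Illustrative taint on each preemptible node, e.g.:
#   kubectl taint nodes <node-name> dedicated=preemptible:NoSchedule
# Pod spec fragment for workloads permitted on preemptible nodes:
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "preemptible"
    effect: "NoSchedule"
  # Optionally steer these workloads toward the preemptible pool
  # (GKE labels preemptible nodes with cloud.google.com/gke-preemptible):
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: cloud.google.com/gke-preemptible
            operator: In
            values: ["true"]
```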

OlegGorj / run-k8s-job.md
Created August 7, 2018 14:49
Executing a k8s job via the Go client

package main

import (
    "flag"
    "fmt"

    "k8s.io/client-go/kubernetes"
    apiUnver "k8s.io/client-go/pkg/api/unversioned"
    api "k8s.io/client-go/pkg/api/v1"
    batchapi "k8s.io/client-go/pkg/apis/batch/v1"
OlegGorj / README.md
Created August 7, 2018 12:50
Multi DCs Consul cluster ACL-enabled config

Everything should live in one root directory.

The exception is services: they should go in a consul.d directory located under the same root.

ACLs should be uploaded via the UI or the appropriate API call.

With that setup you can have the consul binary on your host machine, in $PATH, and execute commands normally (dc1, a1 is the connection server).
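A minimal layout matching the description above (file and service names are illustrative):

```
consul-root/
├── server.json          # agent/server config
├── acl.json             # ACL config (tokens uploaded via UI or API)
└── consul.d/
    ├── web-service.json
    └── db-service.json
```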

OlegGorj / .gitignore
Created August 7, 2018 12:36
Jenkins configs to GitHub
*
!/.gitignore
!/*.xml
!/nextBuildNumber
!/jobs
!/jobs/*
!/jobs/*/*.xml
/jobs/*/disk-usage.xml
/jobs/*/builds
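These rules whitelist only the config XML files and exclude build artifacts. A quick way to sanity-check them in a throwaway repo (the paths checked are illustrative):

```shell
# Reproduce the .gitignore above in a temporary repo and probe it
# with git check-ignore (exit 0 = ignored, exit 1 = tracked).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
cat > .gitignore <<'EOF'
*
!/.gitignore
!/*.xml
!/nextBuildNumber
!/jobs
!/jobs/*
!/jobs/*/*.xml
/jobs/*/disk-usage.xml
/jobs/*/builds
EOF
git check-ignore -q config.xml || echo "config.xml is tracked"
git check-ignore -q jobs/myjob/builds/1 && echo "builds are ignored"
```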

In this guide, we will find out how to create a new user using the Service Account mechanism of Kubernetes, grant this user admin permissions, and log in to the Dashboard using a bearer token tied to this user.

Copy the provided snippets to some xxx.yaml file and use kubectl create -f xxx.yaml to create them.

Create Service Account

We first create a Service Account named admin-user in the kube-system namespace.
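A sketch of the snippets for both steps, following the names used in the text (admin-user in kube-system, bound to the built-in cluster-admin ClusterRole):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
```

The bearer token can then be read from the Service Account's secret, e.g. `kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')`.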

Contents of the Consul agent JSON config (fragment):

{
  "ports": {
    "dns": 0,
    "https": -1,
    "serf_lan": 8301,
    "serf_wan": 8302,
    "server": 8300
  },

It's true that swapoff -a is a silver bullet in most cases; however, certain k8s setups may really require swap. For instance, I've got a very small and cheap VM with just 1 GB RAM, which I use for a personal GitLab Runner that rarely handles short CI/CD tasks. If I increase the size of the machine, I'll be paying more for a resource that's 99% idle. If I disable swap, npm install and other scripts inside the build pods may hang because they require quite a lot of memory, although only for short periods of time. Thus, a single-node kubeadm cluster with the gitlab-runner chart and swap enabled is what suits me best.

Here is how I could get my mini-cluster up and running:

kubeadm reset 

## ↓ see explanation below
sed -i '9s/^/Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"\n/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
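The sed command above injects an Environment line into the kubelet systemd drop-in; assuming a stock kubeadm-installed unit file, the resulting fragment looks like this:

```
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (fragment)
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
```

After editing, reload systemd (`systemctl daemon-reload && systemctl restart kubelet`) and pass `--ignore-preflight-errors=Swap` to `kubeadm init` so the preflight swap check doesn't abort the bootstrap.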