Alena Prokharchyk (alena1108)
package main

import (
    "flag"
    "fmt"
    "os"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
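The gist preview is truncated after the imports. A minimal sketch of the kind of program these imports suggest (load a kubeconfig, build a clientset, list pods), assuming the k8s.io/client-go packages (clientcmd, kubernetes) that the missing body would need:

```
package main

import (
    "context"
    "flag"
    "fmt"
    "os"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Default to ~/.kube/config; allow overriding via -kubeconfig.
    kubeconfig := flag.String("kubeconfig",
        filepath.Join(os.Getenv("HOME"), ".kube", "config"), "path to kubeconfig")
    flag.Parse()

    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }

    // List pods across all namespaces (v1.NamespaceAll == "").
    pods, err := clientset.CoreV1().Pods(v1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s/%s\n", p.Namespace, p.Name)
    }
}
```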
Here are some PR review practices I follow. The list can be extended, as I'm sure every developer on the team has a unique perspective that can be useful to others.
So you got the PR for review. There are a couple of logistical things to verify first:
* Make sure there is an issue linked to the PR.
* If the PR is a bug fix, make sure the issue has clear steps to reproduce and validate. It's a good idea to include a quick summary in the PR itself. Good example: https://github.com/rancher/rke/pull/1752#issue-336681310
* If the PR is a new feature or enhancement, there should be a functional spec/design doc listing all the use cases.
* For bigger features, it makes sense to reach out to the feature engineer(s) so they can walk you through the high-level logic.
Once you understand the feature/fix specifics, it's time to start the actual code review. Here are several items I pay particular attention to in the context of Rancher PR code reviews:
**What kind of request is this (question/bug/enhancement/feature request):** bug
**Steps to reproduce (least amount of steps as possible):**
- Enable custom config in an existing RKE cluster.
- Edit the cluster and add this to the YAML file:
```
services:
  kube-api:
```
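The fragment above is cut off; for context, a custom secrets-encryption config in an RKE cluster.yml nests under kube-api roughly like this (a sketch: the provider choice, key name, and secret are illustrative placeholders):

```
services:
  kube-api:
    secrets_encryption_config:
      enabled: true
      custom_config:
        apiVersion: apiserver.config.k8s.io/v1
        kind: EncryptionConfiguration
        resources:
          - resources:
              - secrets
            providers:
              - aescbc:
                  keys:
                    - name: key1
                      secret: <base64-encoded 32-byte key>
              - identity: {}
```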
To rotate custom config
==================================
- Add the new key to the config as the first entry in the keys list, and do not remove the old key; it becomes second in the list (see the sketch after this list).
- Run rke up; this deploys the config and rewrites the secrets with the new key.
- Remove the old key from the config.
- Run rke up again; this removes the old key from the config on the servers.
* Note that you can't reuse the same key name!
* No manual steps are needed; RKE handles secrets re-encryption.
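A minimal sketch of the keys list mid-rotation (after the first step above), assuming the aescbc provider with illustrative key names; the first key is used to encrypt, while the old key remains available to decrypt existing secrets:

```
providers:
  - aescbc:
      keys:
        - name: newkey   # added first: used to encrypt from now on
          secret: <new base64-encoded 32-byte key>
        - name: oldkey   # kept second: still decrypts existing secrets
          secret: <old base64-encoded 32-byte key>
  - identity: {}
```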
Things to improve:
Process:
===========================
* UX mockups and discussions early on
* UX feature acceptance right before the release
* Enhancements (especially ones involving the UI) should be a part of sprint planning/demo similar to bigger features
* Best practices for PR submission: templatize the why/how of each fix so anybody can pick it up for review
* Feedback loop with support on custom scripts
alena1108 / images.sh
images digests generation
# Read image references (repo/image:tag) from rancher-images.txt, pull each
# image, and append "repo/image:tag digest" lines to rancher-images-digests.txt.
while read -r in
do
  docker pull "$in"
  repo=$(echo "$in" | cut -f1 -d/)       # registry/org part, e.g. "rancher"
  image_tmp=$(echo "$in" | cut -f2 -d/)  # remaining "image:tag" part
  image=$(echo "$image_tmp" | cut -f1 -d:)
  tag=$(echo "$image_tmp" | cut -f2 -d:)
  docker images --digests | grep "$image" | grep "$repo" | grep "$tag" \
    | awk '{print $1 ":" $2 " " $3}' >> rancher-images-digests.txt
  docker rmi "$in"                       # free disk space before the next pull
done < rancher-images.txt
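For reference, the script expects rancher-images.txt to hold one repo/image:tag reference per line and appends image:tag digest pairs to rancher-images-digests.txt; a hypothetical example:

```
# rancher-images.txt (input), one reference per line:
rancher/rancher-agent:v2.4.5
rancher/hyperkube:v1.17.6-rancher2

# rancher-images-digests.txt (output):
rancher/rancher-agent:v2.4.5 sha256:<digest>
rancher/hyperkube:v1.17.6-rancher2 sha256:<digest>
```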
In Rancher 2.0 and 2.1, the auto-generated certificates for Rancher-provisioned clusters expire after one year. This means that if you created a Rancher-provisioned cluster about a year ago, you need to rotate its certificates; otherwise the cluster will go into a bad state when they expire. It is better to rotate the certificates before they expire. The rotation is a one-time operation, as the newly generated certs are valid for the next 10 years.
Rancher v2.2.4 provides UI support for certificate rotation. If upgrading your 2.0.x or 2.1.x clusters to 2.2.x is not an option, you can upgrade them to 2.0.15 or 2.1.10 respectively; these versions support certificate rotation via the API (more instructions are at https://rancher.com/docs/rancher/v2.x/en/cluster-admin/certificate-rotation/#certificate-rotation-in-rancher-v2-1-x-and-v2-0-x).
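For the API route, the call is roughly as follows (a sketch, assuming a Rancher API token; the server URL and cluster ID are placeholders, and the action name follows the linked docs):

```
# Trigger certificate rotation on a cluster via the Rancher API.
curl -s -u "token-xxxxx:<secret>" -X POST \
  "https://<rancher-server>/v3/clusters/<cluster-id>?action=rotateCertificates"
```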
Steps to rotate certs on a working cluster whose certificates haven't expired yet
=================================================================================
cmd/cloud-controller-manager/app/controllermanager.go: return c.ClientBuilder.ClientOrDie(serviceAccountName)
cmd/cloud-controller-manager/app/options/options.go: c.VersionedClient = rootClientBuilder.ClientOrDie("shared-informers")
cmd/kube-controller-manager/app/apps.go: ctx.ClientBuilder.ClientOrDie("daemon-set-controller"),
cmd/kube-controller-manager/app/apps.go: ctx.ClientBuilder.ClientOrDie("statefulset-controller"),
cmd/kube-controller-manager/app/apps.go: ctx.ClientBuilder.ClientOrDie("replicaset-controller"),
cmd/kube-controller-manager/app/apps.go: ctx.ClientBuilder.ClientOrDie("deployment-controller"),
cmd/kube-controller-manager/app/autoscaling.go: hpaClient := ctx.ClientBuilder.ClientOrDie("horizontal-pod-autoscaler")
cmd/kube-controller-manager/app/autoscaling.go: hpaClient := ctx.ClientBuilder.ClientOrDie("horizontal-pod-autoscaler")
cmd/kube-controller-manager/app/autoscaling.go: hpaClient := ctx.ClientBuilder.ClientOrDie
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 4
Server Version: 18.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true