
#!/usr/bin/env bash
# by casey siens
# Run this as root. You must be able to SSH to each node as root.
# Ignore the red warnings when the services start during the install, and allow the script to finish.
# List of master node IPs.
ha_master_ip_list="10.9.8.21 10.9.8.22 10.9.8.23"
# VIP for HA.
ha_vip="10.9.8.20"
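The two settings above can be sanity-checked before the install begins. A minimal sketch (the odd-master-count check is an assumption about etcd quorum, not part of the original script):

```shell
# Sanity-check the HA settings (assumption: an odd master count is preferred
# for etcd quorum; this check is not in the original script).
ha_master_ip_list="10.9.8.21 10.9.8.22 10.9.8.23"
ha_vip="10.9.8.20"
set -- $ha_master_ip_list
if [ $(( $# % 2 )) -eq 0 ]; then
  echo "warning: even number of masters ($#); etcd prefers an odd count" >&2
fi
echo "VIP $ha_vip will front $# masters"
```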
arashkaffamanesh / RKE-TF-installer.sh
Created February 9, 2020 08:01 — forked from csiens/RKE-TF-installer.sh
Wrapper to set up an RKE cluster with TungstenFabric as the CNI
#!/usr/bin/env bash
#
# Run this as root on the first master node. You must be able to ssh as the root user to each node via ssh keys
# installed at /root/.ssh/ on the first master node. The public ssh key MUST be in the /root/.ssh/authorized_keys
# file on ALL nodes including the first master. Use "ssh-keygen" to create an ssh keypair and use "ssh-copy-id NODE_IP"
# to distribute the public key to ALL nodes. The nodes also need to be configured for passwordless sudo, most cloud
# providers and infrastructure provisioners do this by default.
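The key setup described above can be sketched as a loop (assumption: `NODE_IPS`, listing every node including this first master, is a hypothetical variable the operator fills in):

```shell
# Generate a keypair once, then distribute the public key to every node
# (NODE_IPS is a hypothetical variable, not part of the original script).
NODE_IPS="10.9.8.21 10.9.8.22 10.9.8.23"
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
for ip in $NODE_IPS; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub "root@$ip"
done
```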
#
# The following commands are used to prepare a generic EC2 or GCE instance and run the script.
# # enter an interactive sudo session
# 1) Install Ubuntu on nodes and set hostname and IP on all nodes
# 2) Prepare nodes. Run these commands as the root user on all nodes
#turn off swap
swapoff -a
#install packages
apt-get install -y ntp docker.io
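`swapoff -a` only disables swap until the next reboot. A sketch of making the change persistent (assumption: commenting out the swap entries in /etc/fstab is acceptable on these nodes):

```shell
# Comment out swap entries so swap stays off after a reboot (a backup of
# fstab is kept with a .bak suffix).
sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab
# Verify: no output from swapon --show means swap is off.
swapon --show
```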
export TF_VAR_TOKEN_KEY=<rancher bearer token>
export TF_VAR_AWS_ACCESS_KEY_ID=xxxxxxxxxx
export TF_VAR_AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export TF_VAR_AWS_DEFAULT_REGION=eu-central-1
export AWS_DEFAULT_REGION=eu-central-1
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
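Terraform maps each `TF_VAR_NAME` environment variable onto a declared `variable "NAME"`. A fail-fast check that the exports above are actually set (a sketch, not part of the original; the variable names are taken from the exports above):

```shell
# Fail fast if any required credential variable is empty. eval-based indirect
# expansion is used so the loop works in plain sh as well as bash.
for v in TF_VAR_TOKEN_KEY TF_VAR_AWS_ACCESS_KEY_ID TF_VAR_AWS_SECRET_ACCESS_KEY; do
  val=$(eval "printf '%s' \"\$$v\"")
  if [ -z "$val" ]; then
    echo "missing $v" >&2
  fi
done
```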
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  labels:
    nodepool: nodepool-0
  name: aws-cluster-1-md-0
  namespace: default
spec:
  clusterName: aws-cluster-1
  replicas: 1
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/control-plane: "true"
  name: aws-cluster-1-controlplane-0
  namespace: default
spec:
  bootstrap:
    configRef:
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: aws-cluster-1
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
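These Cluster API manifests are applied to a management cluster with kubectl. A sketch (assumptions: the documents are saved as `cluster.yaml` and kubectl already targets the management cluster; `KUBECTL` can be overridden, e.g. `KUBECTL=echo`, for a dry run):

```shell
# Apply the Cluster API manifests and list the resulting objects.
KUBECTL="${KUBECTL:-kubectl}"
$KUBECTL apply -f cluster.yaml
$KUBECTL get machinedeployments,machines,clusters -n default
```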
arashkaffamanesh / README.md
Created December 18, 2019 22:15 — forked from mhausenblas/README.md
Scripting EKS on ARM

EKS on ARM

The xarm-install.sh script allows you to install and use Amazon EKS on ARM (xARM) with a single command.

Make sure you have aws, eksctl, kubectl, and jq installed. So far tested with bash on macOS.

chmod +x xarm-install.sh

./xarm-install.sh
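The prerequisite check mentioned above can be done up front. A minimal sketch (not part of the original gist; the tool list matches the README):

```shell
# Report any prerequisite tool that is not on PATH before running the script.
for tool in aws eksctl kubectl jq; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```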
# generate CA key and certificate
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
# generate server key (2048-bit minimum; 1024-bit RSA is considered weak)
openssl genrsa -des3 -out server.key 2048
# generate CSR (certificate signing request) to obtain certificate
openssl req -new -key server.key -out server.csr
# sign server CSR with CA certificate and key
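The command under the last comment is cut off in the snippet. A common form of the signing step (a sketch; the 365-day validity and auto-created serial are assumptions matching the CA certificate's lifetime above, not taken from the original gist):

```shell
# Sign the server CSR with the CA key and certificate, producing server.crt.
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt
```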
arashkaffamanesh / create_user_and_kubeconfig_rancher2.sh
Created December 9, 2019 11:14 — forked from superseb/create_user_and_kubeconfig_rancher2.sh
Create local user and generate kubeconfig in Rancher 2 via API
#!/bin/bash
# The Rancher API endpoint
RANCHERENDPOINT=https://your_rancher_endpoint/v3
# The name of the cluster where the user needs to be added
CLUSTERNAME=your_cluster_name
# Username, password and realname of the user
USERNAME=username
PASSWORD=password
REALNAME=myrealname
# Role of the user
GLOBALROLE=user
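With these variables set, the script's first step is to log in to the Rancher API and obtain a token. A sketch of that call (assumptions: Rancher's local auth login endpoint lives under `/v3-public`, the admin credentials are used to authenticate, and the token is parsed with jq; details may differ from the original script):

```shell
# Derive the login URL from the API endpoint and request a session token
# (ADMINPASSWORD is a hypothetical variable holding the admin password).
LOGINURL="${RANCHERENDPOINT%/v3}/v3-public/localProviders/local?action=login"
LOGINRESPONSE=$(curl -s "$LOGINURL" -H 'content-type: application/json' \
  --data-binary "{\"username\":\"admin\",\"password\":\"$ADMINPASSWORD\"}" --insecure)
LOGINTOKEN=$(echo "$LOGINRESPONSE" | jq -r .token)
```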