Operating-system-level virtualization, also known as containerization, refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers, partitions, virtualization engines (VEs) or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container's contents and devices assigned to the container.
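For example, a process inside a container sees only the container's own filesystem and devices, not the host's:
docker run --rm alpine ls /   # lists the Alpine image's root directory, not the host's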
- Bundle all your application dependencies into a Docker image.
- Portable: you can run it as a container on a Mac, Windows, or Linux server.
- Consistent: the same image runs across dev, staging, and production clusters.
- Reusable: Docker Hub provides official base images.
Common Dockerfile instructions: FROM <base_image>, RUN, ENV, ARG, CMD
Multi-stage Docker builds help keep build tools and sensitive information out of the final Docker image.
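A minimal multi-stage sketch (the Go app and image tags here are hypothetical): only the compiled artifact reaches the final stage, so compilers and any build-time secrets stay behind.
# build stage: source, toolchain and build-time secrets live only here
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .
# final stage: ships only the compiled binary
FROM alpine:3.19
COPY --from=build /app /usr/local/bin/app
CMD ["app"]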
Before deploying any image, create an additional tag; prefer an explicit version tag over :latest or :master.
<image-name>:<version>
<image-name>:<version>-<commit-id-7chars>
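For example (the image name and commit hash are hypothetical):
docker build -t voice-worker:1.4.0 .
docker tag voice-worker:1.4.0 voice-worker:1.4.0-3f9d2a1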
Collector - Fluentd / Beats (Filebeat, Metricbeat)
Backend store - Elasticsearch (ES)
Visualization - Kibana
- Collect stdout/stderr logs using a Fluentd DaemonSet in the Kubernetes cluster (see the sketch after this list).
- Add Kubernetes metadata to the logs.
- Rotate and back up all raw logs, with their Kubernetes metadata, to S3 (useful if a backend store other than ES is ever needed).
- Store all logs in the Elasticsearch backend in parsed form.
- Back up all Elasticsearch indices periodically.
- Connect a Kibana dashboard to the ES backend and query the logs.
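A minimal sketch of the collector DaemonSet, assuming the stock fluent/fluentd-kubernetes-daemonset image and an Elasticsearch service named elasticsearch-logging (both are substitutable):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: elasticsearch-logging   # assumed ES service name
        volumeMounts:
        - name: varlog
          mountPath: /var/log            # where the container runtime writes pod logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log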
- Control Plane
- Data Plane
Managed Istio on GKE
curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
aws s3api create-bucket \
--bucket product-example-com-state-store \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2
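The kops docs also recommend enabling versioning on the state-store bucket so previous cluster state can be recovered:
aws s3api put-bucket-versioning \
    --bucket product-example-com-state-store \
    --versioning-configuration Status=Enabled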
export NAME=product.k8s.local
export KOPS_STATE_STORE=s3://product-example-com-state-store
aws ec2 describe-availability-zones --region us-west-2
kops create cluster \
--zones us-west-2a \
${NAME}
kops edit cluster ${NAME}
kops update cluster ${NAME} --yes
kubectl get nodes
kops validate cluster
kops delete cluster --name ${NAME}
kops delete cluster --name ${NAME} --yes
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kops get secrets kube --type secret -oplaintext
https://kubernetes.io/docs/getting-started-guides/scratch/
https://github.com/kubernetes/kops
https://github.com/kubernetes/kops/blob/master/docs/aws.md
https://kubernetes.io/docs/getting-started-guides/kops/
https://kubernetes.io/docs/getting-started-guides/aws/
https://kubernetes.io/docs/getting-started-guides/kubespray/
# download ksonnet for Linux (including Cloud Shell)
KS_VER=ks_0.9.2_linux_amd64
# download ksonnet for macOS
KS_VER=ks_0.9.2_darwin_amd64
# download the ksonnet tarball
wget https://github.com/ksonnet/ksonnet/releases/download/v0.9.2/$KS_VER.tar.gz
# unpack it
tar -xvf $KS_VER.tar.gz
# add the ks command to PATH
PATH=$PATH:$(pwd)/$KS_VER
kubectl create clusterrolebinding default-admin \
--clusterrole=cluster-admin --user=$(gcloud config get-value account)
Follow the remaining steps in the user guide.
- Automates deployments and scaling
- Deploys containers based on OS-level virtualization instead of hardware-level virtualization
- Decoupled from the underlying infrastructure and OS distribution
- Fast, lightweight, and portable
- Service discovery
- Load balancing
- Secrets
- Health checks
- Auto scaling/restart/healing of nodes
- Zero downtime deploys
- Bound to a particular cloud provider (GCP, AWS, or Azure)
- On-premise deployment
- Downtime on any production change
- Cost incurred
- Storage
- Modular infrastructure as code.
Maintains the health of the cluster and interacts with the underlying cloud provider.
kube-apiserver
kube-scheduler
kube-controller-manager
etcd
cloud-controller-manager
addons
DNS
Web UI
Runs all the workloads. Two processes: kubelet and kube-proxy.
The smallest and simplest unit in the Kubernetes object model. A pod represents a single running process in the cluster and can contain one or more containers.
A logical set of pods that load-balances across them. Each pod has its own IP, but you need a Service to expose pods to the public.
default
kube-system
kube-public
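Namespaces are managed like any other object, for example:
kubectl get namespaces
kubectl create namespace staging   # 'staging' is just an example name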
- Declarative updates for pods
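That is, a Deployment. A minimal sketch (the name and image below are stand-ins, not from these notes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo-server          # the 'app' label used for grouping pods
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: nginx:1.25       # stand-in image
        ports:
        - containerPort: 80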
openssl genrsa -out mithun.key 2048
openssl req -new -key mithun.key -out mithun.csr -subj "/CN=mithun/O=admin"
openssl x509 -req -in mithun.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out mithun.crt -days 500
kubectl config set-cluster <cluster_name> --server=https://<master-node-ip>:<master-node-port> --insecure-skip-tls-verify=true
kubectl config get-clusters
kubectl config set-credentials <user_name> --client-certificate=mithun.crt --client-key=mithun.key
kubectl config set-credentials <user_name> --username=<username> --password=<password>
kubectl config set-context <cluster_name> --user=<user_name> --cluster=<cluster_name>
kubectl config use-context <cluster_name>
kubectl config view
kubectl get pods
Group all the Kubernetes and Docker configurations in one place: k8s-configs for Kubernetes specs, dockerfiles for base Docker images.
- Vault (for storing secrets)
- Vault-UI
- Kube-ops-view
- All other microservices
Create an ‘app’ label for grouping pods.
Use ClusterIP to expose services internally; create an Ingress when they need to be exposed to the public (a Service sketch follows the list below).
ClusterIP - Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType
LoadBalancer - Exposes the service externally using a cloud provider’s load balancer
NodePort - Exposes the service on each Node’s IP at a static port (the NodePort)
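A minimal ClusterIP Service sketch selecting pods by the 'app' label (the name and ports are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: voice-worker
spec:
  type: ClusterIP               # the default; reachable only inside the cluster
  selector:
    app: voice-worker           # matches the pods' 'app' label
  ports:
  - port: 80                    # port the Service exposes
    targetPort: 8080            # port the container listens on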
kubectl apply -f k8s-spec-directory/ → kubectl apply -f juno/
telepresence --swap-deployment voice-worker --docker-run -it -v $PWD:/home/voice-worker gcr.io/vernacular-tools/voice-services/voice-worker:1
docker pull vault
docker pull consul
docker pull djenriquez/vault-ui
docker run --cap-add=IPC_LOCK -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=roottoken' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' -d --name=vault vault
docker run -d -p 8201:8201 -e PORT=8201 -e VAULT_URL_DEFAULT=http://192.168.12.155:8200 -e VAULT_AUTH_DEFAULT=GITHUB --name vault-ui djenriquez/vault-ui
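To talk to the dev-mode Vault server from the host (the address follows from the port mapping above):
export VAULT_ADDR=http://127.0.0.1:8200
vault status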
- Telepresence
- Docker for development
- Helm
Kubernetes - Design principles
Kubernetes configuration examples
- Install Python 3.6 & pip3
sudo apt update
sudo apt install python3-pip
- Clone the Kubespray repository (clone command shown after the pip step below)
- Set up the automation server with the following:
sudo python3 -m pip install -r requirements.txt
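For completeness, fetching Kubespray itself typically looks like this (the repository currently lives under kubernetes-sigs; adjust the path if yours differs):
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray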
Requirements: Ansible (v2.5+), python-netaddr, and Jinja (2.9+).
Target servers must have access to a Docker image registry.
Configure target servers to allow IPv4 forwarding.
Copy the SSH key to all the target servers in the inventory.
Disable the firewall in the network of the target servers.
If Kubespray is run from a non-root user account, a correct privilege escalation method must be configured on the target servers, and the ansible_become flag or the command parameters --become or -b must be specified.
cp -rfp inventory/sample inventory/voice-cluster
declare -a IPS=(10.160.0.2 10.160.0.3 10.160.0.4)
CONFIG_FILE=inventory/voice-cluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
cat inventory/voice-cluster/hosts.ini
IMPORTANT: Edit inventory/voice-cluster/group_vars/*.yaml to override data vars:
ansible-playbook -i inventory/voice-cluster/hosts.ini cluster.yml -b -v \
--private-key=~/.ssh/private_key
ansible-playbook -u root -b -v -i inventory/voice-cluster/hosts.ini cluster.yml --private-key=~/.ssh/google_compute_engine
cat inventory/voice-cluster/credentials/kube_user.creds
ssh -i ~/.ssh/google_compute_engine root@10.160.0.2
ssh -i ~/.ssh/google_compute_engine root@10.160.0.3
ssh -i ~/.ssh/google_compute_engine root@10.160.0.4
Installing Kubernetes On-premises with Kubespray
Kubespray - GitHub
Tim Hockin - Crash Course on Container Orchestration
Harry Zhang - Kubernetes - Walkthrough