Greek for "Helmsman"; also the root of the words "governor" and "cybernetic"
Links
- Documentation Home
- The Illustrated Children's Guide to Kubernetes
Requirements
- Docker
- VirtualBox
- Kubectl
- Minikube
Mac
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
Linux
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
- Deployments
- Use the "record" option for easier rollbacks
kubectl apply -f deployment.yaml --record
- Use plenty of descriptive labels
- App: nifty
- Phase: Dev
- Role: BE
- Use sidecar containers for proxies, watchers, etc
- Don't use sidecars for bootstrapping!
- Use init containers instead!
- Don't use `:latest` or no tag
- Readiness Probe - is the app ready to start serving traffic?
- Won't be added to a service endpoint until it passes
- Required for a "production app"
- Liveness Probe - is the app still running?
- Default is "process is running"
- Possible that the process can be running but not working correctly
- Good to define, though it might not be 100% necessary (see the example manifest after this list)
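A minimal sketch pulling these points together, assuming a hypothetical `nifty` backend; the image names, tag, port, probe paths, and init-container command are placeholders:

```yaml
apiVersion: apps/v1beta1          # a Deployment API version available in the v1.6 release pinned above
kind: Deployment
metadata:
  name: nifty
  labels:
    app: nifty
    phase: dev
    role: be
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nifty
        phase: dev
        role: be
    spec:
      initContainers:             # bootstrapping lives here, not in a sidecar
      - name: init-schema
        image: nifty/init:1.0.3   # pinned tag, never :latest
        command: ["./prepare-db.sh"]
      containers:
      - name: backend
        image: nifty/backend:1.0.3   # pinned tag, never :latest
        ports:
        - containerPort: 8080
        readinessProbe:           # no traffic from the Service until this passes
          httpGet:
            path: /healthz/ready
            port: 8080
        livenessProbe:            # restarts the container if the process hangs
          httpGet:
            path: /healthz/live
            port: 8080
```

Applying it with the `--record` flag shown above keeps each change in `kubectl rollout history`, which makes `kubectl rollout undo` rollbacks easier to reason about.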
- Services
- Don't always use `type: LoadBalancer`
- Ingress or `type: NodePort` can be "good enough" (see the sketch after this list)
- Use Static IPs. They are free!
- Map external services to internal ones
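A sketch of the "good enough" case: a NodePort Service in front of the hypothetical `nifty` pods, with placeholder ports; the trailing comment notes where a reserved static IP would go in the LoadBalancer case:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nifty
spec:
  type: NodePort            # reachable from outside on <NodeIP>:<nodePort>
  selector:
    app: nifty
    role: be
  ports:
  - port: 80                # cluster-internal port
    targetPort: 8080        # container port on the pods
    nodePort: 30080         # optional; auto-assigned from the node-port range if omitted
  # For type: LoadBalancer on a cloud provider, a reserved static address can be
  # pinned with loadBalancerIP so it survives Service re-creation.
```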
- Application Architecture
- Cluster Management
- Resources, Anti-Affinity, and Scheduling
- Use Namespaces to split up your cluster
- Role Based Access Control
- Chaos Monkey
- Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit.
- Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.
- Cluster Components
- Master
- Manages and coordinates the cluster (scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates).
- Nodes
- Workers that run applications.
- Runs a `kubelet` agent that communicates with the master via the Kubernetes API.
- Pods run on nodes; containers run in pods.
- Deployment: Responsible for creating and updating instances of your application. Lives on your master node.
- Deployment Controller: continuously monitors the deployed instances.
- Scaling is accomplished by changing the number of replicas in a Deployment (see the snippet below).
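As a sketch, reusing the hypothetical `nifty` Deployment from above, scaling is just a change to `spec.replicas` followed by another `kubectl apply`:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nifty
spec:
  replicas: 5                     # was 2; the Deployment controller converges to 5 running pods
  template:
    metadata:
      labels:
        app: nifty
    spec:
      containers:
      - name: backend
        image: nifty/backend:1.0.3
```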
Users -> Control Plane -> Nodes
- Users
- API
- CLI
- UI
- Control Plane (maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage those objects' state)
- Nodes
- Post desired state (aka spec) via API
- Placement (aka scheduling) figures out on which node to run the task.
- Assignment (aka binding) is when the control plane tells the node to run the task.
- Kubelet fetches the container image
- Kubelet sends the status of the container to the control plane
- The user can query whether their task is running (a minimal example spec follows below)
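To make the flow concrete, here is a minimal, hypothetical Pod spec a user might post via the API; the name and image are placeholders. The scheduler picks a node, binding assigns the pod to it, and the kubelet on that node pulls the image and reports status back:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello                # placeholder name
spec:
  containers:
  - name: hello
    image: nginx:1.13        # placeholder image; the kubelet on the chosen node pulls this
```

`kubectl apply -f pod.yaml` posts the desired state, and `kubectl get pod hello` is the query step at the end of the flow.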
- Pods are a building block of the infrastructure.
- In essence this is a group of containers (associated volumes, networking, image version, ports, etc.) sharing the same networking and Linux namespaces on the host.
- We create pods and in turn they create containers
- Tightly coupled
- the atom of replication and placement
- "logical" host for containers
- each pod gets an IP address
- share data: localhost, volumes, IPC, etc.
- Facilitates composite applications (sidecars; see the example Pod after the volumes list below)
- preserves 1:1 app to image
- Storage automatically attached to pod.
- Local scratch directories created on demand
- Cloud block storage
- GCE Persistent Disk
- AWS Elastic Block Storage
- Cluster storage
- File: NFS, Gluster, Ceph
- Block: iSCSI, Cinder, Ceph
- Special Volumes
- Git repository
- Secret
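A sketch combining the two ideas above: a hypothetical app container plus a log-shipping sidecar sharing a scratch volume; the image names, paths, and the commented-out persistent disk name are all placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nifty-with-sidecar
spec:
  containers:
  - name: backend
    image: nifty/backend:1.0.3          # main app writes logs into the shared directory
    volumeMounts:
    - name: scratch
      mountPath: /var/log/nifty
  - name: log-shipper                   # sidecar reads the same files via the shared volume
    image: nifty/log-shipper:0.4.1
    volumeMounts:
    - name: scratch
      mountPath: /logs
      readOnly: true
  volumes:
  - name: scratch
    emptyDir: {}                        # local scratch directory, created on demand
  # Cloud block storage instead would look like:
  # - name: data
  #   gcePersistentDisk:
  #     pdName: nifty-data              # pre-provisioned GCE Persistent Disk (placeholder)
  #     fsType: ext4
```

Because both containers also share the pod's network namespace, the sidecar could just as well talk to the app over localhost.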
- Critical building block for higher-level automation
- Inject them as "virtual volumes" into Pods
- not baked into images nor pod configs
- kept in memory - never touches disk
- not coupled to non-portable metadata API
- Manage secrets via the Kubernetes API (see the sketch below)
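A sketch of a Secret injected as a "virtual volume"; the secret name, key, and value are placeholders (the value is base64-encoded, as the API requires):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nifty-db-creds
type: Opaque
data:
  password: cGFzc3dvcmQ=               # base64("password"); placeholder value only
---
apiVersion: v1
kind: Pod
metadata:
  name: nifty
spec:
  containers:
  - name: backend
    image: nifty/backend:1.0.3
    volumeMounts:
    - name: db-creds
      mountPath: /etc/nifty/creds      # secret shows up as files here, backed by tmpfs (in memory)
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: nifty-db-creds
```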
- Ensures N copies of a Pod
- grouped by a label selector
- Explicit specification of desired scale
- enables self-healing
- facilitates auto-scaling (a minimal example follows this list)
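The notes don't name the object here, but the behaviour described matches a ReplicationController (the object these notes reference elsewhere); a minimal sketch, reusing the hypothetical `nifty` image:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nifty
spec:
  replicas: 3                # explicit desired scale; the controller keeps 3 copies alive
  selector:
    app: nifty               # label selector that groups the pods
  template:
    metadata:
      labels:
        app: nifty
    spec:
      containers:
      - name: backend
        image: nifty/backend:1.0.3
```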
- A group of pods that work together
- Services match a set of Pods using labels and selectors
- Publishes how to access the service
- DNS Name
- DNS SRV records for ports
- Kubernetes Endpoints API
- Define access policy
- Load-balanced: name maps to stable virtual IP
- "Headless": name maps to set of pod IPs
- Hides complexity - ideal for non-native apps
- Decoupled from Pods and ReplicationControllers
- Defined using YAML (see the sketches after this list).
- ClusterIP (default): Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
- NodePort: Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using `<NodeIP>:<NodePort>`. Superset of ClusterIP.
- LoadBalancer: Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
- ExternalName: Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
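Two sketches: a Service selecting the hypothetical `nifty` pods by label (with the "headless" variant as a comment), and an ExternalName Service with a placeholder external hostname, which is also one way to "map external services to internal ones" as noted earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nifty                  # becomes the DNS name nifty.<namespace>.svc.cluster.local
spec:
  selector:
    app: nifty                 # matches pods carrying this label
  ports:
  - name: http
    port: 80                   # the stable virtual IP (ClusterIP) listens here
    targetPort: 8080
  # clusterIP: None            # uncomment for a "headless" service: DNS returns the pod IPs directly
---
apiVersion: v1
kind: Service
metadata:
  name: orders-db              # pods use this short, stable internal name
spec:
  type: ExternalName
  externalName: db.example.com # kube-dns answers with a CNAME to this external host
```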
- Manages pods that run to completion
- differentiates number running at any one time from the total number of completed runs
- Similar to ReplicationController, but for pods that don't always restart
- workflow: restart on failure
- build/test: don't restart on app failure
- Principle: do one thing, don't overload (a sketch follows this list)
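These bullets match the Kubernetes Job object; a minimal sketch with a placeholder image and command, showing the running-versus-completed split and the restart-on-failure workflow case:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nifty-migrate
spec:
  completions: 5               # total successful runs wanted
  parallelism: 2               # how many pods run at any one time
  template:
    spec:
      restartPolicy: OnFailure # the "workflow" case; use Never for build/test runs you inspect instead
      containers:
      - name: migrate
        image: nifty/migrate:1.0.3
        command: ["./run-migrations.sh"]
```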
- Runs a Pod on every node
- or a selected subset of nodes
- Not a fixed number of replicas
- created and deleted as nodes come and go
- Useful for running cluster-wide services (see the sketch after this list)
- logging agents
- storage systems
- DaemonSet manager is both a controller and scheduler
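A sketch of a DaemonSet for the logging-agent case; the image name and host log path are placeholders:

```yaml
apiVersion: extensions/v1beta1  # DaemonSet API group in the v1.6 era pinned above
kind: DaemonSet
metadata:
  name: log-agent
spec:
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      # nodeSelector: {disk: ssd}      # would limit the set to a subset of nodes
      containers:
      - name: log-agent
        image: nifty/log-agent:2.1.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log               # read node-level logs from the host
```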
- Rollouts as a service
- updates to pod template will be rolled out by controller
- can choose between rolling update and recreate (see the sketch after this list)
- Enables declarative updates
- manipulates replication controllers
- Promote an application from one environment to another (via container image updates)
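A sketch of the rollout choice: `strategy` selects rolling update or recreate, and bumping the (placeholder) image tag and re-applying is what promotes a new build:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nifty
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # or Recreate to stop the old pods before starting the new ones
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: nifty
    spec:
      containers:
      - name: backend
        image: nifty/backend:1.0.4   # bumped from 1.0.3; re-applying triggers the rollout
```

`kubectl rollout status deployment/nifty` watches the rollout and `kubectl rollout undo deployment/nifty` reverts it.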
- Apache Stratos
- Openshift 3
- Deis
- Gondor
- Declarative > Imperative: State your desired results, let the system actuate
- Control loops: Observe, rectify, repeat
- Simple > Complex: Try to do as little as possible
- Modularity: Components, interfaces, & plugins
- Legacy compatible: Requiring apps to change is a non-starter
- Network-centric: IP addresses are cheap
- No grouping: Labels are the only groups
- Cattle > Pets: Manage your workload in bulk
- Open > Closed: Open Source, standards, REST, JSON, etc