Using CentOS 7
[stack@droctagon4 devstack]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
I'm following the steps from the video in this article.
A Kubernetes liveness probe tells Kubernetes whether a given pod is alive or dead using a custom command. (The default behavior, without a probe, is that when a container's foreground process terminates, Kubernetes stops the container; that's how it knows the container is alive or dead.)
My current knowledge tells me that the lowest possible `periodSeconds` setting is 1 second. According to the Kubernetes API's defaults.go file, a value of zero apparently falls back to the default of 10 seconds.
I tried to posit (ahem, wishfully think, ahem) that zero may have meant "as often as possible", but this experiment and research have shown otherwise.
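As a sketch, a probe set to the minimum period might look like the following. The `cat /tmp/alive` command is a hypothetical placeholder, not from the actual manifests:

```yaml
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/alive       # hypothetical health-check file
  initialDelaySeconds: 1
  periodSeconds: 1     # the minimum; a value of 0 falls back to the default of 10
```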
Let's take a look at how Kubernetes jobs are crafted. I had been jamming some kind of work-around shell scripts in the entrypoint* for some containers in the vnf-asterisk project that Leif and I have been working on. And that's not perfect when we can use Kubernetes jobs, or in their new parlance, "run to completion finite workloads" (I'll stick to calling them "jobs"). They're one-shot containers that do one thing, and then end (sort of like a "oneshot" of systemd units, at least how we'll use them today). I like the idea of using them to complete some service discovery for me when other pods are coming up. Today we'll fire up a pod, and spin up a job to discover that pod (by querying the API for info about it), and put info into etcd. Let's get the job done.
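A minimal sketch of such a job might look like the following. The image name here is a hypothetical placeholder standing in for a container that queries the API and writes to etcd, not the actual vnf-asterisk manifest:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: discover-pod
spec:
  template:
    spec:
      containers:
      - name: discover
        # hypothetical image that queries the Kubernetes API
        # for pod info and puts the result into etcd
        image: example/discover:latest
      restartPolicy: Never   # run once to completion; don't restart on exit
```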
This post also exists as a [gist on github](https
So here are my results, generally, using the below extensions.conf and JS file. It's based entirely on the example monkeys playback demo.
I originate the call...
0fa669f1fad8*CLI> channel originate LOCAL/123@inbound application wait 1
-- Called 123@inbound
-- Executing [123@inbound:1] NoOp("Local/123@inbound-00000010;2", "Inbound call") in new stack
-- Executing [123@inbound:2] Answer("Local/123@inbound-00000010;2", "") in new stack
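From the console output above, the relevant dialplan fragment would look something like this (a sketch reconstructed from the executed priorities, not the full extensions.conf):

```
[inbound]
exten => 123,1,NoOp(Inbound call)
exten => 123,n,Answer()
```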
# General system setup
yum update -y
reboot
yum install docker
systemctl enable docker
systemctl start docker
docker ps
docker -v
yum install -y wget
# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
FROM centos:centos7
RUN yum install -y epel-release
RUN yum install -y nginx
ADD pickle-man.png /usr/share/nginx/html/pickle-man.png
ADD pickle.png /usr/share/nginx/html/pickle.png
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
FROM centos:centos7
RUN yum install -y epel-release
RUN yum install -y nginx
ADD index.html /usr/share/nginx/html/index.html
CMD nginx -g 'daemon off;'