The buildah utility is a versatile container build tool that does not require a daemon (everything is direct invocation).
See my "deep dive" for a few hands-on use-cases.
Recently knative was announced. It is a project to enable the kubernetes primitives needed to build a functions-as-a-service. There are a few plumbing services needed around this use-case; "build" is one of them. Building containers is largely an independent goal and story from "serverless" or "FaaS", but I get why they are grouped together.
In this walkthrough, I'll show how to schedule container builds using buildah on a kubernetes cluster.
There are a few moving parts to this, and since my interest is development of the stack, I'll lay out running this for development, not for production use.
The machine used had a few cores, 10GB of memory (or more), and 30GB of storage (or more).
In this doc I'm using:
- host OS of Fedora 28
- golang go1.10.3
- kubernetes v1.10.z
- cri-o v1.10.z
- buildah master (after tag v1.3)
And various dependencies. On a clean fedora-server install, I needed:
dnf install -y \
screen \
git \
make \
gcc \
gpgme-devel \
btrfs-progs-devel \
device-mapper-devel \
glib2-devel \
glibc-devel \
glibc-static \
libassuan-devel \
libgpg-error-devel \
libseccomp-devel \
libselinux-devel \
ostree-devel \
pkgconfig \
buildah
Also, firewalld got in the way at some point, so:
systemctl disable --now firewalld
We're using golang to compile several of these tools. It is packaged in Fedora, though for this let's download the pre-compiled bundle to pin the version.
cd ~/
curl -sSL https://dl.google.com/go/go1.10.3.linux-amd64.tar.gz | tar xzv
export GOPATH=$HOME/gopath
export GOROOT=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
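As a quick sanity check that the pinned toolchain is the one being picked up on PATH (nothing fancy, just confirming the download above is in use):
go version       # should report go1.10.3 linux/amd64
command -v go    # should point at $HOME/go/bin/go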
Underneath, kubernetes needs a container runtime that it calls through the Container Runtime Interface (CRI). There are a couple of implementations now, like the docker-shim, containerd, and cri-o. For this walkthrough, I'm using cri-o. Their "getting started" guide walks through building and installing cri-o and its dependencies like CNI.
For this walkthrough I cheated a bit and installed the RPM of cri-o to pull all the prepackaged, preconfigured dependencies, and then only landed a new build of cri-o from the matching branch I'm interested in.
dnf install -y cri-o
git clone --recursive https://github.com/kubernetes-incubator/cri-o $GOPATH/src/github.com/kubernetes-incubator/cri-o/
cd $GOPATH/src/github.com/kubernetes-incubator/cri-o/
git checkout v1.10.6
sudo make install.bin install.systemd
sudo systemctl enable crio
Later we will be using a local container registry that does not have TLS enabled, so we need to configure cri-o for this.
Edit /etc/containers/registries.conf as root and add the name bananaboat.local:5000 to the [registries.insecure] list.
It might look like this:
[registries.search]
registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.centos.org']
[registries.insecure]
registries = ['bananaboat.local:5000']
[registries.block]
registries = []
Now let's start the service:
sudo systemctl start crio
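A quick check that the service came up and that its socket exists, since everything else wires into that socket (plain shell, nothing cri-o specific):
sudo systemctl --no-pager --full status crio
test -S /var/run/crio/crio.sock && echo "crio socket is ready"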
The Container Runtime Interface (CRI) is a generic approach to enabling container runtime and image operations. It is a gRPC socket with a set of protobuf defined types and methods. Any client or server side tool can know what to offer or expect.
To manage and interact with cri-o directly, let's pull in crictl.
go get -v github.com/kubernetes-incubator/cri-tools/cmd/crictl
echo "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
(there is a --runtime-endpoint= flag that can be used as well)
Now we can test that it is talking to cri-o:
> sudo crictl version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.10.6
RuntimeApiVersion: v1alpha1
Cool!
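crictl can also dump the runtime's status and config, which is worth keeping in your back pocket for debugging later (the exact output shape varies by crictl version):
sudo crictl info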
Kubernetes has a number of moving parts and on the whole is a sufficiently complicated orchestrator. I will not pretend to comprehend it in its entirety, nor the best practices of a proper deployment. For my purposes I just want it up and schedulable.
git clone --recursive https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes
cd $GOPATH/src/k8s.io/kubernetes
git checkout v1.10.7
make
./hack/install-etcd.sh
The etcd step isn't always needed, but it installs a local copy of the version most closely aligned with the version of k8s we're using.
Now to run it locally using this fresh build and our existing cri-o service.
sudo \
CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock --runtime-request-timeout=15m' \
PATH=$GOPATH/src/k8s.io/kubernetes/third_party/etcd:$PATH \
./hack/local-up-cluster.sh \
-o $GOPATH/src/k8s.io/kubernetes/_output/bin/
This outputs a bunch of bootstrapping and then leaves the service in your terminal's foreground (so you can kill it later with ctrl+c).
Key info you'll see is the export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig.
Now we need to open a new terminal!
Let's test out the kubernetes instance.
export PATH=$GOPATH/src/k8s.io/kubernetes/_output/bin/:$PATH
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
kubectl version
with output like:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-30T17:13:17Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-30T17:13:17Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
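A couple more sanity checks that the cluster is actually schedulable; local-up-cluster.sh should have registered the host itself as a single node:
kubectl get nodes
kubectl get --all-namespaces pods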
In my testing, I am using a local container registry to push and pull images. You can run it with docker or podman, or even run the registry binary directly.
kubectl run registry --image=docker.io/registry:2 --port=5000
kubectl port-forward deployment/registry 5000:5000
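The port-forward stays in the foreground, so run it in a spare terminal (or under screen, installed above). From another shell, a quick check against the standard registry v2 API that it is answering:
curl -sS http://localhost:5000/v2/_catalog
# a fresh registry should answer with something like {"repositories":[]}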
knative/build adds build and buildtemplates Custom Resource Definitions (CRDs) to a kubernetes cluster. The controller and webhook services run in their own namespace, though builds can be applied in their own namespaces.
It uses a build tool called ko that emulates and wraps aspects of kubectl to build and publish your config onto kubernetes.
export GOPATH=$HOME/gopath
export GOROOT=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
go get -u -v github.com/google/go-containerregistry/cmd/ko
ko enforces TLS unless the registry name ends with ".local".
echo "127.0.0.1 bananaboat.local" | sudo tee -a /etc/hosts
echo "::1 bananaboat.local" | sudo tee -a /etc/hosts
export KO_DOCKER_REPO="bananaboat.local:5000/farts"
This is a container image name prefix.
Pushing to a local registry allows for an arbitrary org prefix.
Ideally there would be no need to twiddle /etc/hosts, but for now that seems to be easiest.
Now we can build and deploy knative/build!
git clone --recursive https://github.com/knative/build/ $GOPATH/src/github.com/knative/build/
export PATH=$GOPATH/src/k8s.io/kubernetes/_output/bin:$PATH
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cd $GOPATH/src/github.com/knative/build/
ko apply -f ./config/
After some pulling, building, and publishing of container images, you ought to see output like a standard kubectl apply -f ... and a 0 exit code.
Like:
[...]
2018/08/27 13:59:37 pushed blob sha256:74862d15e6a941525e75937f56c5752891694f70f561c5d6112c3dbc6ac80281
2018/08/27 13:59:37 pushed blob sha256:4b3cca59b6b032332a7f74b3b5c6d45e5f95fe9b17c3cd4ce32a96d8367d16e5
2018/08/27 13:59:37 bananaboat.local:5000/farts/git-init-edc40519d94eade2cc2c40d754b84067:latest: digest: sha256:56236fc461ca9f3fa897eb53d55b0c77adedc2692b1bc0a451f18a3faf92a300 size: 1410
2018/08/27 13:59:37 Published bananaboat.local:5000/farts/git-init-edc40519d94eade2cc2c40d754b84067@sha256:56236fc461ca9f3fa897eb53d55b0c77adedc2692b1bc0a451f18a3faf92a300
namespace/knative-build created
clusterrole.rbac.authorization.k8s.io/knative-build-admin created
serviceaccount/build-controller created
clusterrolebinding.rbac.authorization.k8s.io/build-controller-admin created
customresourcedefinition.apiextensions.k8s.io/builds.build.knative.dev created
customresourcedefinition.apiextensions.k8s.io/buildtemplates.build.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterbuildtemplates.build.knative.dev created
service/build-controller created
service/build-webhook created
configmap/config-logging created
deployment.apps/build-controller created
deployment.apps/build-webhook created
[vbatts@bananaboat] (master) ~/src/github.com/knative/build$ echo $?
0
[vbatts@bananaboat] (master) ~/src/github.com/knative/build$ kubectl get --all-namespaces pods
NAMESPACE NAME READY STATUS RESTARTS AGE
default registry-865cc99fd9-t27lq 1/1 Running 0 16m
knative-build build-controller-fcc68dfb6-c79ml 1/1 Running 0 16m
knative-build build-webhook-99d8dc6cf-f75s6 1/1 Running 0 16m
kube-system kube-dns-659bc9899c-mkflp 3/3 Running 0 3d
Success!
There are a handful of different build templates now that leverage the abundance of ways to do builds, but give a common interface to do them. The build-templates repo has a variety of templates already.
Each template has the kind: BuildTemplate and a distinct metadata.name.
For example, projects that use bazel.build have a bazel template to streamline their builds.
In this example we want to build projects that have a Dockerfile, so we'll work with the ./buildah/ directory.
export GOPATH=$HOME/gopath
export GOROOT=$HOME/go
export PATH=$GOPATH/src/k8s.io/kubernetes/_output/bin:$GOPATH/bin:$GOROOT/bin:$PATH
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
git clone --recursive https://github.com/knative/build-templates/ $GOPATH/src/github.com/knative/build-templates/
cd $GOPATH/src/github.com/knative/build-templates/buildah/
kubectl apply -f ./
You should see output like:
buildtemplate.build.knative.dev "buildah" created
Get more information on this template via:
> kubectl get buildtemplates
NAME CREATED AT
buildah 1h
> kubectl describe buildTemplate buildah
Name: buildah
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"build.knative.dev/v1alpha1","kind":"BuildTemplate","metadata":{"annotations":{},"name":"buildah","namespace":"default"},"spec":{"paramet...
API Version: build.knative.dev/v1alpha1
Kind: BuildTemplate
Metadata:
Cluster Name:
Creation Timestamp: 2018-08-27T18:29:34Z
Generation: 1
Resource Version: 289325
Self Link: /apis/build.knative.dev/v1alpha1/namespaces/default/buildtemplates/buildah
UID: 255ec34d-aa27-11e8-a94a-1c1b0d0ece5a
Spec:
Generation: 1
Parameters:
Description: The location of the buildah builder image.
Name: BUILDER_IMAGE
Description: The name of the image to push.
Name: IMAGE
Default: ./Dockerfile
Description: Path to the Dockerfile to build.
Name: DOCKERFILE
Default: true
Description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
Name: TLSVERIFY
Steps:
Args:
bud
--tls-verify=${TLSVERIFY}
--layers
-f
${DOCKERFILE}
-t
${IMAGE}
.
Image: ${BUILDER_IMAGE}
Name: build
Volume Mounts:
Mount Path: /var/lib/containers
Name: varlibcontainers
Args:
push
--tls-verify=${TLSVERIFY}
${IMAGE}
docker://${IMAGE}
Image: ${BUILDER_IMAGE}
Name: push
Volume Mounts:
Mount Path: /var/lib/containers
Name: varlibcontainers
Volumes:
Empty Dir:
Name: varlibcontainers
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 5s (x14 over 6m) build-controller BuildTemplate synced successfully
This template does not add any pods to the cluster, but rather prepares the cluster for build jobs that will use a spec.template.name of buildah.
In this build-templates/buildah folder is a Dockerfile to build our ${BUILDER_IMAGE}.
buildah bud --pull --no-cache -t buildah .
buildah push --tls-verify=false buildah bananaboat.local:5000/buildah
You could do this with docker build or img build too, but this walkthrough is already about buildah :-)
Getting image source signatures
Copying blob sha256:1d31b5806ba40b5f67bde96f18a181668348934a44c9253b420d5f04cfb4e37a
198.64 MiB / 198.64 MiB [=================================================] 20s
Copying blob sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
1.00 KiB / 1.00 KiB [======================================================] 0s
Copying blob sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
1.00 KiB / 1.00 KiB [======================================================] 0s
Copying blob sha256:643e95679c550c8d6c63c10f4ef9195c0134255e1ea83b3ca893a6ff02cce9ac
616.89 MiB / 616.89 MiB [=================================================] 56s
Copying blob sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
1.00 KiB / 1.00 KiB [======================================================] 0s
Copying blob sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
1.00 KiB / 1.00 KiB [======================================================] 0s
Copying blob sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef
1.00 KiB / 1.00 KiB [======================================================] 0s
Copying config sha256:16fbdc0e86049deb9ca17eed8e1993ad5fb57daa6f6a863e4bd3ab3dda0517b2
3.05 KiB / 3.05 KiB [======================================================] 0s
Writing manifest to image destination
Copying config sha256:16fbdc0e86049deb9ca17eed8e1993ad5fb57daa6f6a863e4bd3ab3dda0517b2
0 B / 3.05 KiB [-----------------------------------------------------------] 0s
Writing manifest to image destination
Storing signatures
Super duper!
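To double-check that the builder image actually landed in the local registry, the standard v2 tags endpoint works too:
curl -sS http://bananaboat.local:5000/v2/buildah/tags/list
# expect something like {"name":"buildah","tags":["latest"]}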
A build job is a simple-enough blurb of YAML that will get applied to the cluster in a familiar way.
The build jobs basically specify the parameters to the template that the job chooses.
Here is an example of building the Dockerfile from the cri-o git repo.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
name: crio-build
spec:
source:
git:
url: https://github.com/kubernetes-incubator/cri-o
revision: master
template:
name: buildah
arguments:
- name: DOCKERFILE
value: "./Dockerfile"
- name: BUILDER_IMAGE
value: "bananaboat.local:5000/buildah"
- name: IMAGE
value: "192.168.1.155:5000/cri-o" ## CHANGEMYIP
- name: TLSVERIFY
value: "false"
A nice gotcha is that the BUILDER_IMAGE name is fetched and resolved by CRI on the host, but the IMAGE name to push to is resolved inside the container runtime instance.
In this example I hard-coded the host's IP address to get around it.
That is sloppy and sub-optimal.
Ideally you have a known local registry name, or a TLS-trusted repo you can push to.
Using the example YAML above, let's try the build. We'll write it to ~/build-crio.yaml.
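If you'd rather not hand-edit the CHANGEMYIP line, here is a small sketch to substitute it, assuming hostname -I reports an address the in-cluster buildah container can reach:
HOST_IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.1.155/${HOST_IP}/" ~/build-crio.yaml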
Next we'll kick off the build job.
export PATH=$GOPATH/src/k8s.io/kubernetes/_output/bin:$PATH
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
kubectl apply -f ~/build-crio.yaml
And you ought to see
build.build.knative.dev "crio-build" created
This build job is run in init containers.
> kubectl get po
NAME READY STATUS RESTARTS AGE
crio-build-ghmbg 0/1 Init:2/4 0 1m
> kubectl get --all-namespaces po
NAMESPACE NAME READY STATUS RESTARTS AGE
default crio-build-ghmbg 0/1 Init:2/4 0 2m
knative-build build-controller-7fc9d7fb64-vttsd 1/1 Running 0 1h
knative-build build-webhook-7cd67f7c5-m58dt 1/1 Running 0 1h
kube-system kube-dns-659bc9899c-fcglf 3/3 Running 0 1h
> kubectl get builds
NAME CREATED AT
crio-build 6m
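If you want to watch the pod step through its init containers as the build progresses, the watch flag works fine here:
kubectl get pods -w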
Now you can describe the build:
> kubectl describe build crio-build
Name: crio-build
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"build.knative.dev/v1alpha1","kind":"Build","metadata":{"annotations":{},"name":"crio-build","namespace":"default"},"spec":{"source":{"gi...
API Version: build.knative.dev/v1alpha1
Kind: Build
Metadata:
Cluster Name:
Creation Timestamp: 2018-08-28T20:21:08Z
Generation: 1
Resource Version: 6025
Self Link: /apis/build.knative.dev/v1alpha1/namespaces/default/builds/crio-build
UID: e622b001-aaff-11e8-8c94-1c1b0d0ece5a
[...]
There is a helper in the knative/build repo that prints decorated logs. It is called with the build job name (not the pod name).
go build -o $GOPATH/bin/build-logs github.com/knative/build/cmd/logs
build-logs crio-build
with output like:
[build-step-credential-initializer] 2018/08/28 20:21:10 Set StringFlag zap-logger-config to default: .
[build-step-credential-initializer] 2018/08/28 20:21:10 Set StringFlag loglevel.creds-init to default: .
[build-step-credential-initializer] {"level":"error","ts":1535487670.1716776,"caller":"logging/config.go:42","msg":"Failed to parse the logging config. Falling back to default logger.","error":"empty logging configuration","build.knative.dev/jsonconfig":"","stacktrace":"github.com/knative/build/pkg/logging.NewLogger\n\t/home/vbatts/src/github.com/knative/build/pkg/logging/config.go:42\ngithub.com/knative/build/pkg/logging.NewLoggerFromDefaultConfigMap\n\t/home/vbatts/src/github.com/knative/build/pkg/logging/config.go:51\nmain.main\n\t/home/vbatts/src/github.com/knative/build/cmd/creds-init/main.go:29\nruntime.main\n\t/home/vbatts/go1.10/src/runtime/proc.go:198"}
[build-step-credential-initializer] {"level":"info","ts":1535487670.2099082,"logger":"creds-init","caller":"creds-init/main.go:38","msg":"Credentials initialized."}
[build-step-git-source] 2018/08/28 20:21:11 Set StringFlag zap-logger-config to default: .
[build-step-git-source] 2018/08/28 20:21:11 Set StringFlag loglevel.git-init to default: .
[build-step-git-source] {"level":"error","ts":1535487671.8334544,"caller":"logging/config.go:42","msg":"Failed to parse the logging config. Falling back to default logger.","error":"empty logging configuration","build.knative.dev/jsonconfig":"","stacktrace":"github.com/knative/build/pkg/logging.NewLogger\n\t/home/vbatts/src/github.com/knative/build/pkg/logging/config.go:42\ngithub.com/knative/build/pkg/logging.NewLoggerFromDefaultConfigMap\n\t/home/vbatts/src/github.com/knative/build/pkg/logging/config.go:51\nmain.main\n\t/home/vbatts/src/github.com/knative/build/cmd/git-init/main.go:56\nruntime.main\n\t/home/vbatts/go1.10/src/runtime/proc.go:198"}
[build-step-git-source] {"level":"info","ts":1535487674.141561,"logger":"git-init","caller":"git-init/main.go:74","msg":"Successfully cloned \"https://github.com/kubernetes-incubator/cri-o\" @ \"master\""}
[build-step-build] STEP 1: FROM golang:1.10
[build-step-build] Getting image source signatures
[build-step-build] Copying blob sha256:55cbf04beb7001d222c71bfdeae780bda19d5cb37b8dbd65ff0d3e6a0b9b74e6
[...]
[build-step-push] Writing manifest to image destination
[build-step-push] Storing signatures
[build-step-push]
Cool! This looks to have succeeded. Let's pull the image to see that it is now magically in the image name we pushed to.
> sudo crictl pull bananaboat.local:5000/cri-o
Image is up to date for bananaboat.local:5000/cri-o@sha256:844186ed43a46cb285599b8f19124816941708c118a2553f2068566157f71a26
Super duper! This means the job you applied to the cluster fetched your project's git repo, built the image from the Dockerfile in that repo, and pushed it to a container registry for reuse.
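If you have skopeo installed (it is not in the dnf list above), it offers a way to peek at what was pushed without pulling it again:
skopeo inspect --tls-verify=false docker://bananaboat.local:5000/cri-o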
There are several services running here. If you're running into issues you may have to look in several places.
cri-o logs:
sudo journalctl -lf -u crio.service
If kubernetes is hitting ImagePullBackOff, you may see more information on why in this log.
kubernetes logs:
tail -f /tmp/kubelet.log /tmp/kube-controller-manager.log /tmp/kube-proxy.log /tmp/kube-scheduler.log
There are other logs there, but they are noisy and not as fruitful for discovering issues.
If you find yourself doing iterations and needing to get back to start, these are the environment variables you can paste for your terminals:
export GOPATH=$HOME/gopath
export GOROOT=$HOME/go
export PATH=$GOPATH/src/k8s.io/kubernetes/_output/bin:$GOPATH/src/k8s.io/kubernetes/third_party/etcd:$GOPATH/bin:$GOROOT/bin:$PATH
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
export KO_DOCKER_REPO="bananaboat.local:5000/farts"
ctrl+c the port-forward terminal, and then:
kubectl delete deployment registry
If you're just iterating on the top layers, you can just delete the build job:
kubectl delete -f ~/build-crio.yaml
From the knative/build directory, you can drop the whole build CRD, which reaps the buildah build-template and jobs:
ko delete -f ./config/
If you're stopping the whole kubernetes instance, just ctrl+c (^c) the terminal that has kubernetes running.
After that, it will likely still have a few containers running.
They'll need to be removed manually.
> sudo crictl ps
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
238b1f3c26127 8a7739f672b49db46e3a8d5cdf54df757b7559a00db9de210b4af1aa3397020f 3 days ago CONTAINER_RUNNING sidecar 0
96c75d30afb19 6816817d9dce03e877dc1e5535d6e0a5628e607c77cfc2bc588c2a1a1cd49ed4 3 days ago CONTAINER_RUNNING dnsmasq 0
0f63129507118 55ffe31ac5789896865300074e914ff3b699050663a096f757f8ecc1d71c8aad 3 days ago CONTAINER_RUNNING kubedns 0
> sudo crictl rm 238b1f3c26127 96c75d30afb19 0f63129507118
238b1f3c26127
96c75d30afb19
0f63129507118
> sudo crictl images | grep -E '(bananaboat.local|k8s)'
bananaboat.local:5000/buildah latest 16fbdc0e86049 855MB
bananaboat.local:5000/farts/controller-12e96bc762f32867760d3b6f51bdae1d <none> dcebeb2301cc9 56.1MB
bananaboat.local:5000/farts/creds-init-deae2f97eba24d722ddbbb5256fdab85 <none> cd8972d53d70c 2.91GB
bananaboat.local:5000/farts/git-init-edc40519d94eade2cc2c40d754b84067 <none> 2f3944a7597fd 2.9GB
bananaboat.local:5000/farts/webhook-98f665fe94975e40fa69f4b2dd6f58b4 <none> 6ea689a043fe3 55.6MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.10 6816817d9dce0 40.6MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.10 55ffe31ac5789 49.8MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.10 8a7739f672b49 41.9MB
> sudo crictl rmi 16fbdc0e86049 dcebeb2301cc9 cd8972d53d70c 2f3944a7597fd 6ea689a043fe3 6816817d9dce0 55ffe31ac5789 8a7739f672b49
The last mile of cleanup is removing directories. A dist-clean step would look like:
rm -rf /var/lib/containers
rm -rf /var/lib/kubelet
There are many moving parts here, and this is obviously not suited for just running a single build locally. But if your infrastructure already has cluster resources like kubernetes/openshift, this provides a new avenue for using those resources to schedule container builds.
A future kubectl will let you follow the output of all the containers in a pod (see the sketch below), but not in kubectl 1.10.7 ...
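For reference, newer kubectl releases grow an --all-containers flag on kubectl logs; a sketch using the pod name from the earlier example:
kubectl logs -f crio-build-ghmbg --all-containers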