https://github.com/ii/kubeadm-dind-cluster/tree/audit-policy
kubernetes-retired/kubeadm-dind-cluster#204
#git clone [email protected]:kubernetes-sigs/kubeadm-dind-cluster.git ~/dind
git clone [email protected]:ii/kubeadm-dind-cluster.git ~/dind
git checkout -b audit-policy origin/audit-policy
https://github.com/kubernetes-sigs/kubeadm-dind-cluster#using-with-kubernetes-source
docker rm -f $(docker ps -a -q)
docker rmi $(docker images -q)
cd ~/go/src/k8s.io/kubernetes
time ~/dind/dind-cluster.sh clean
cd ~/dind/
time ./build/build-local.sh
# cd ~/dind/
# ./build/build-local.sh
cd ~/go/src/k8s.io/kubernetes
export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
export BUILD_KUBEADM=y
export BUILD_HYPERKUBE=y
time ~/dind/dind-cluster.sh up
cd ~/go/src/k8s.io/kubernetes
time ~/dind/dind-cluster.sh e2e
#'[Conformance]'
# '[Slow]|[Serial]|[Disruptive]|[Flaky]|[Feature:.+]'
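The wrapper ends up invoking roughly the following (see the kubetest log further down); a sketch of running it directly from the kubernetes tree, assuming the same --host port this particular run reported:
cd ~/go/src/k8s.io/kubernetes
./hack/ginkgo-e2e.sh --ginkgo.noColor --num-nodes=2 \
  --ginkgo.skip='\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]' \
  --ginkgo.focus='\[Conformance\]' \
  --host=http://127.0.0.1:32882   # port comes from this run; use whatever dind-cluster.sh prints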
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#permissive-rbac-permissions
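For reference, the "permissive RBAC permissions" shortcut on that page boils down to a single binding (it grants cluster-admin very broadly, so only for throwaway dind clusters):
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts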
[init] using Kubernetes version: v1.13.0
[preflight] running pre-flight checks
[WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.13.0. Kubeadm version: 1.12.x
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I0822 07:25:44.908039 669 kernel_validator.go:81] Validating kernel version
I0822 07:25:44.908172 669 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Adding extra host path mount "audit-mount" to "kube-apiserver"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.002864 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: bz9yiz.0s2ofw0d6zhg00yq
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
*** 'kubeadm join --ignore-preflight-errors=all 172.18.0.2:6443 --token bz9yiz.0s2ofw0d6zhg00yq --discovery-token-ca-cert-hash sha256:608746551b2863ebfb865a4bc55d0305a99d3c614fbdf36fb81592242ff274a3' failed, doing kubeadm reset ***
'/etc/cni' -> '/etc/cni.bak'
'/etc/cni/net.d' -> '/etc/cni.bak/net.d'
'/etc/cni/net.d/cni.conf' -> '/etc/cni.bak/net.d/cni.conf'
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] no etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd
[reset] please manually reset etcd to prevent further issues
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] no etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd
[reset] please manually reset etcd to prevent further issues
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[preflight] running pre-flight checks
[preflight] running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I0822 07:28:20.286245 2856 kernel_validator.go:81] Validating kernel version
I0822 07:28:20.286365 2856 kernel_validator.go:96] Validating kernel config
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I0822 07:28:20.292405 2847 kernel_validator.go:81] Validating kernel version
I0822 07:28:20.292559 2847 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "172.18.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.0.2:6443"
[discovery] Trying to connect to API Server "172.18.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.0.2:6443"
[discovery] Requesting info from "https://172.18.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://172.18.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.0.2:6443"
[discovery] Successfully established connection with API Server "172.18.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.0.2:6443"
[discovery] Successfully established connection with API Server "172.18.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:bz9yiz" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no RBAC policy matched
Why does running the e2e tests with a focus on Conformance and a skip for all the slow/disruptive bits still print "Will run 1032 specs"? Looking at the run below, the filters do take effect: only 177 of the 1032 specs actually ran and 855 were skipped, so 1032 is just the size of the full suite.
*** Running e2e tests with args: --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:\.+\] --ginkgo.focus=\[Conformance\] --host=http://127.0.0.1:32882
+++ [0822 06:12:49] Verifying Prerequisites....
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
2018/08/22 06:12:51 e2e.go:158: Updating kubetest binary...
2018/08/22 06:13:27 e2e.go:79: Calling kubetest --verbose-commands=true --v 6 --test --check-version-skew=false --test_args=--ginkgo.noColor --num-nodes=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:\.+\] --ginkgo.focus=\[Conformance\] --host=http://127.0.0.1:32882...
2018/08/22 06:13:27 util.go:132: Please use kubetest --provider=dind (instead of deprecated KUBERNETES_PROVIDER=dind)
2018/08/22 06:13:27 main.go:1041: Please use kubetest --ginkgo-parallel (instead of deprecated GINKGO_PARALLEL=y)
2018/08/22 06:13:27 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.0-alpha.0.383+229ecedac5084e", GitCommit:"229ecedac5084eba6e93973095cc7846893288da", GitTreeState:"clean", BuildDate:"2018-08-22T06:12:15Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.0-alpha.0.383+229ecedac5084e", GitCommit:"229ecedac5084eba6e93973095cc7846893288da", GitTreeState:"clean", BuildDate:"2018-08-22T05:56:34Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
2018/08/22 06:13:27 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 147.661919ms
2018/08/22 06:13:27 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2018/08/22 06:13:27 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 134.763439ms
2018/08/22 06:13:27 process.go:153: Running: ./hack/ginkgo-e2e.sh --ginkgo.noColor --num-nodes=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:\.+\] --ginkgo.focus=\[Conformance\] --host=http://127.0.0.1:32882
Conformance test: not doing test setup.
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1534918408 - Will randomize all specs
Will run 1032 specs
Running in parallel across 25 nodes
Ran 177 of 1032 Specs in 450.760 seconds
SUCCESS! -- 177 Passed | 0 Failed | 0 Pending | 855 Skipped
Ginkgo ran 1 suite in 7m31.369138928s
Test Suite Passed
2018/08/22 06:20:59 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.noColor --num-nodes=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:\.+\] --ginkgo.focus=\[Conformance\] --host=http://127.0.0.1:32882' finished in 7m31.859376975s
docker rm $(docker ps -a -q) ; docker rmi $(docker images -q)
~/dind/dind-cluster.sh clean
cd ~/dind/
./build/build-local.sh
cd ~/go/src/k8s.io/kubernetes
~/dind/dind-cluster.sh up
It’s not doing what I would expect:
It might also make sense to embed the audit policy YAML inline within kubeadm.yaml, so that only the one external file is needed and the policy file doesn't have to be copied around separately.
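For context, the external file the kubeadm config points at (path: /etc/kubernetes/audit-policy.yaml in the config dump further down) can be as small as the following; the single Metadata rule here is an illustrative stand-in, not the policy from the branch:
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF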
kubekins - it's possible to run the tests in it - https://gist.github.com/dims/033cffa467107bcac8df21e7db72d528 (that example uses local-up-cluster, but it can be run without local-up-cluster too)
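A very rough sketch of starting that image for an interactive test run; the image path is the kubernetes test-infra one, but the tag and mounts here are guesses, so see the gist above for the real invocation:
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/go/src/k8s.io/kubernetes:/go/src/k8s.io/kubernetes \
  gcr.io/k8s-testimages/kubekins-e2e:latest-master /bin/bash   # tag is a guess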
journalctl -xeu kubelet | grep kube-apiserver
Aug 21 20:31:24 kube-master hyperkube[3100]: I0821 20:31:24.197218 3100 file.go:200] Reading config file "/etc/kubernetes/manifests/kube-apiserver.yaml"
Aug 21 20:31:24 kube-master hyperkube[3100]: E0821 20:31:24.199095 3100 file.go:187] Can't process manifest file "/etc/kubernetes/manifests/kube-apiserver.yaml": invalid pod: [spec.volumes[5].name: Invalid value: "auditMount": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?') spec.containers[0].volumeMounts[5].name: Not found: "auditMount"]
Aug 21 20:31:44 kube-master hyperkube[3100]: I0821 20:31:44.196965 3100 file.go:200] Reading config file "/etc/kubernetes/manifests/kube-apiserver.yaml"
Aug 21 20:31:44 kube-master hyperkube[3100]: E0821 20:31:44.199154 3100 file.go:187] Can't process manifest file "/etc/kubernetes/manifests/kube-apiserver.yaml": invalid pod: [spec.volumes[5].name: Invalid value: "auditMount": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?') spec.containers[0].volumeMounts[5].name: Not found: "auditMount"]
kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
kubeadm reset && kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all
docker exec kube-master cat /etc/kubernetes/manifests/kube-apiserver.yaml
docker exec kube-master docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b206593db042 b8df3b177be2 "etcd --advertise-..." 3 minutes ago Up 3 minutes k8s_etcd_etcd-kube-master_kube-system_78263d83ff9d8e4fa24f4ff1b321f5b4_0
03b2a5e2b035 23b6e5d23516 "kube-controller-m..." 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-kube-master_kube-system_49c60401cce7c9fefaa5362cd4a90d56_0
de97d38fa194 23b6e5d23516 "kube-scheduler --..." 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-kube-master_kube-system_3b695f958ffb31926f9f96a9389c1ef2_0
30c6a51b746f k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-kube-master_kube-system_49c60401cce7c9fefaa5362cd4a90d56_0
a6b6b07e1239 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-kube-master_kube-system_3b695f958ffb31926f9f96a9389c1ef2_0
aa40eb4b363e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-kube-master_kube-system_78263d83ff9d8e4fa24f4ff1b321f5b4_0
docker exec kube-master ps -auxwwwww
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.0 56740 6604 ? Ss 19:33 0:01 /sbin/dind_init systemd.setenv=CNI_PLUGIN=bridge systemd.setenv=IP_MODE=ipv4 systemd.setenv=POD_NET_PREFIX=10.244.1 systemd.setenv=POD_NET_SIZE=24 systemd.setenv=USE_HAIRPIN=false systemd.setenv=DNS_SVC_IP=10.96.0.10 systemd.setenv=DNS_SERVICE=kube-dns
root 19 0.6 0.0 87048 40424 ? Ss 19:33 0:06 /lib/systemd/systemd-journald
root 54 0.0 0.0 18040 3056 ? Ss 19:33 0:00 /bin/bash /usr/local/bin/dindnet
root 105 0.0 0.0 24560 3116 ? S 19:33 0:00 socat udp4-recvfrom:53,reuseaddr,fork,bind=172.18.0.2 UDP:127.0.0.11:53
root 256 2.9 0.0 2286508 66824 ? Ssl 19:33 0:30 /usr/bin/dockerd -H fd:// --storage-driver=overlay2 --storage-opt overlay2.override_kernel_check=true -g /dind/docker
root 279 0.2 0.0 1889144 15596 ? Ssl 19:33 0:02 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 230 0.0 0.0 18188 3112 ? Ss 19:33 0:00 /bin/bash /usr/local/bin/wrapkubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all
root 7930 23.2 0.0 45380 30428 ? Sl 19:50 0:05 kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all
root 8403 1.1 0.0 10514488 16788 ? Ssl 19:51 0:00 etcd --advertise-client-urls=https://127.0.0.1:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://127.0.0.1:2380 --initial-cluster=kube-master=https://127.0.0.1:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379 --listen-peer-urls=https://127.0.0.1:2380 --name=kube-master --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root 8194 10.0 0.0 2231064 104248 ? Ssl 19:50 0:01 /k8s/hyperkube kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --eviction-hard=memory.available<100Mi,nodefs.available<100Mi,nodefs.inodesFree<1000 --fail-swap-on=false --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --feature-gates=MountPropagation=true,DynamicKubeletConfig=true --v=4
root 8427 2.0 0.0 1064904 85836 ? Ssl 19:51 0:00 kube-controller-manager --feature-gates=MountPropagation=true,AdvancedAuditing=true --address=127.0.0.1 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
root 8451 3.0 0.0 1174336 85748 ? Ssl 19:51 0:00 kube-scheduler --feature-gates=MountPropagation=true,AdvancedAuditing=true --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root 8287 0.0 0.0 347840 3572 ? Sl 19:51 0:00 docker-containerd-shim fed63ec2b0cd8d3b24c490c3145efe293347b77e46b6db33da589886a532b969 /var/run/docker/libcontainerd/fed63ec2b0cd8d3b24c490c3145efe293347b77e46b6db33da589886a532b969 docker-runc
root 8310 0.0 0.0 478912 3556 ? Sl 19:51 0:00 docker-containerd-shim 1ae9336514f45307e6efb714a9fc661833791c5b4c76eb4f8d39cf63fa8d5651 /var/run/docker/libcontainerd/1ae9336514f45307e6efb714a9fc661833791c5b4c76eb4f8d39cf63fa8d5651 docker-runc
root 8320 0.0 0.0 282304 3680 ? Sl 19:51 0:00 docker-containerd-shim b75d6981e4f3136943110497b8f3152007093791efa1482b779a60bb468e1b3d /var/run/docker/libcontainerd/b75d6981e4f3136943110497b8f3152007093791efa1482b779a60bb468e1b3d docker-runc
root 8386 0.0 0.0 413376 3620 ? Sl 19:51 0:00 docker-containerd-shim e5a200824f3d7626c35e9542b676a36d40b91fe50ab02f23fef1329469d2aa73 /var/run/docker/libcontainerd/e5a200824f3d7626c35e9542b676a36d40b91fe50ab02f23fef1329469d2aa73 docker-runc
root 8409 0.0 0.0 282304 3808 ? Sl 19:51 0:00 docker-containerd-shim 577a958ddf532c3fd61e96d078d1ad687d8e6db74699773a0b568e4b1f28d077 /var/run/docker/libcontainerd/577a958ddf532c3fd61e96d078d1ad687d8e6db74699773a0b568e4b1f28d077 docker-runc
root 8433 0.1 0.0 348096 3676 ? Sl 19:51 0:00 docker-containerd-shim 2d26e9e4e0cecc62adb2c55362ce61449ce049847101b754a091236994a3cb5d /var/run/docker/libcontainerd/2d26e9e4e0cecc62adb2c55362ce61449ce049847101b754a091236994a3cb5d docker-runc
root 8304 0.0 0.0 1020 4 ? Ss 19:51 0:00 /pause
root 8338 0.1 0.0 1020 4 ? Ss 19:51 0:00 /pause
root 8352 0.0 0.0 1020 4 ? Ss 19:51 0:00 /pause
docker exec kube-master kubeadm config view --kubeconfig /etc/kubernetes/admin.conf
api:
advertiseAddress: 172.18.0.2
bindPort: 6443
controlPlaneEndpoint: ""
apiServerExtraArgs:
authorization-mode: Node,RBAC
feature-gates: MountPropagation=true,AdvancedAuditing=true
insecure-bind-address: 0.0.0.0
insecure-port: "8080"
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
logDir: /etc/kubernetes/audit/
logMaxAge: 2
path: /etc/kubernetes/audit-policy.yaml
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManagerExtraArgs:
feature-gates: MountPropagation=true,AdvancedAuditing=true
etcd:
local:
dataDir: /var/lib/etcd
image: ""
featureGates:
CoreDNS: false
imageRepository: k8s.gcr.io
kind: InitConfiguration
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
nodeRegistration: {}
schedulerExtraArgs:
feature-gates: MountPropagation=true,AdvancedAuditing=true
unifiedControlPlaneImage: mirantis/hypokube:final
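The same configuration is also stored in the kubeadm-config ConfigMap that init uploaded (see the uploadconfig step in the init log above); assuming the host kubectl has the dind context the scripts set up:
kubectl --context dind -n kube-system get configmap kubeadm-config -o yaml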
APISERVER=$(docker exec kube-master \
docker ps --format '{{.Names}}' \
--filter label=io.kubernetes.container.name=kube-apiserver)
docker exec kube-master \
docker inspect $APISERVER \
| jq .[0].Args
docker exec kube-master kubeadm config print-defaults
@hh You run build/build-local.sh and then set DIND_IMAGE to use that locally built docker image for k-d-c (export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local).
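In other words, roughly:
cd ~/dind && ./build/build-local.sh
export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
export BUILD_KUBEADM=y BUILD_HYPERKUBE=y
cd ~/go/src/k8s.io/kubernetes && time ~/dind/dind-cluster.sh up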
Leigh Capili <[email protected]> @hh, use `apiServerExtraVolumes` for kubeadm section of the volume mounts it’s an array of HostPathMounts which you can specify as writeable: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#HostPathMount logging some fixes:
- add `pathType: DirectoryOrCreate` to the kubeadm config
- change `name: auditMount` to `name: audit-mount` (kubelet journal shows volume was failing DNS name validation)
note: kubeadm config does not properly validate volume names – we should fix this
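A sketch of how the relevant pieces of /etc/kubeadm.conf inside kube-master might look after those fixes; field names follow the v1alpha3 HostPathMount godoc linked above, and this is a minimal illustration rather than the actual file from the audit-policy branch:
cat <<'EOF' > /etc/kubeadm.conf
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
kubernetesVersion: v1.13.0
auditPolicy:
  path: /etc/kubernetes/audit-policy.yaml
  logDir: /etc/kubernetes/audit/
  logMaxAge: 2
apiServerExtraVolumes:
- name: audit-mount                 # was auditMount, which fails DNS-1123 validation
  hostPath: /etc/kubernetes/audit
  mountPath: /etc/kubernetes/audit
  writable: true
  pathType: DirectoryOrCreate       # creates the host directory if it is missing
EOF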
docker exec -ti kube-master /bin/bash
export APISERVER=$(docker ps --filter label=io.kubernetes.container.name=kube-apiserver --format '{{.Names}}')
export PS1='# MASTER \$ '
export APISERVER=$(docker ps -a --filter label=io.kubernetes.container.name=kube-apiserver --format '{{.Names}}')
docker logs $APISERVER
# cat /etc/kubeadm.conf
journalctl -xeu kubelet | grep kube-apiserver
#docker ps | grep -v 'pause\|dns\|etcd'
#docker inspect $APISERVER | jq .[0].Args
#MASTER=$(docker ps --filter label=mirantis.kubeadm_dind_cluster --format "{{.Names}}")
docker exec -ti kube-master /bin/bash
APISERVER=$(docker ps --filter label=io.kubernetes.container.name=kube-apiserver --format '{{.Names}}')
docker exec -ti $APISERVER /bin/bash
export PS1='# APISERVER \$ '
#docker logs $APISERVER
clear
ps axuwww | grep apiserver
# from docker logs on the apiserver:
invalid argument "MountPropagation=true,Auditing=true" for "--feature-gates" flag: unrecognized key: Auditing
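To see where that value is coming from, grep the generated manifest and the kubeadm config (both paths appear earlier in these notes):
docker exec kube-master grep -n feature-gates /etc/kubernetes/manifests/kube-apiserver.yaml
docker exec kube-master grep -n feature-gates /etc/kubeadm.conf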