jay vyas (jayunit100)
E0507 17:09:21.380711 1 controller.go:230] cert-manager/controller/webhook-bootstrap/webhook-bootstrap/ca-secret "msg"="error decoding CA private key" "error"="error decoding private key PEM block" "resource_kind"="Secret" "resource_name"="cert-manager-webhook-tls" "resource_namespace"="cert-manager"
E0507 17:09:21.380738 1 controller.go:131] cert-manager/controller/webhook-bootstrap "msg"="re-queuing item due to error processing" "error"="error decoding private key PEM block" "key"="cert-manager/cert-manager-webhook-tls"
E0507 17:09:43.615523 1 pki.go:128] cert-manager/controller/certificates "msg"="error decoding x509 certificate" "error"="error decoding cert PEM block" "related_resource_kind"="Secret" "related_resource_name"="capi-webhook-service-cert" "related_resource_namespace"="capi-webhook-system" "resource_kind"="Certificate" "resource_name"="capi-serving-cert" "resource_namespace"="capi-webhook-system" "secret_key"="tls.crt"
E0507 17:09:43.790546 1 pki.go:128] cert-manage
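The "error decoding private key PEM block" errors above are what cert-manager reports when pem.Decode returns nil for the secret's key bytes, i.e. the tls.key data in the cert-manager-webhook-tls Secret is empty or mangled, so the item keeps getting re-queued. A minimal Go sketch of that failure mode (illustrative only, not cert-manager's actual code):

package main

import (
	"encoding/pem"
	"errors"
	"fmt"
)

// decodePrivateKeyPEM mimics the check that fails above: if the secret's
// key bytes are empty or not valid PEM, pem.Decode returns a nil block.
func decodePrivateKeyPEM(data []byte) (*pem.Block, error) {
	block, _ := pem.Decode(data)
	if block == nil {
		return nil, errors.New("error decoding private key PEM block")
	}
	return block, nil
}

func main() {
	// An empty Secret value reproduces the error seen in the logs.
	if _, err := decodePrivateKeyPEM([]byte{}); err != nil {
		fmt.Println(err)
	}
}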
Credentials of workload cluster smoke-test-1 have been saved
You can now access the cluster by switching the context to smoke-test-1-admin@smoke-test-1 under /home/ubuntu/.kube/config
Switched to context "smoke-test-1-admin@smoke-test-1".
PLUGIN         STATUS     RESULT   COUNT
e2e            failed     failed   1
systemd-logs   complete   passed   6
Sonobuoy has completed. Use `sonobuoy retrieve` to get results.
Credentials of workload cluster smoke-test-10 have been saved
You can now access the cluster by switching the context to smoke-test-10-admin@smoke-test-10 under /home/ubuntu/.kube/config
~ » kubectl get machines | grep "smoke-test-1-"
smoke-test-1-28ztk vsphere://4230c2a0-32a6-3a03-13a8-ad27cc01ffef Failed
smoke-test-1-md-0-6ddbcf577b-4mvr4 vsphere://42308e20-d8b1-c603-df06-d6f0005367d7 Running
smoke-test-1-md-0-6ddbcf577b-dbh9n vsphere://4230d5f3-240e-ee7d-eef1-45fd3ffcee50 Running
smoke-test-1-md-0-6ddbcf577b-r5mmk vsphere://42308c5c-8a09-15bb-277c-e772b060a266 Running
smoke-test-1-rgwj8 vsphere://42302040-4cc1-f1ee-35b2-2fcd144b420d Running
smoke-test-1-v2ppc vsphere://4230700e-e736-9663-a0c3-f02a208fb1da Running
--------------------------------------------------------------------------------
(file truncated)
I0403 16:22:32.747311 1 vspherecluster_controller.go:243] capv-controller-manager/vspherecluster-controller/default/smoke-test-19 "msg"="Reconciling VSphereCluster"
I0403 16:22:32.747519 1 vspherecluster_controller.go:309] capv-controller-manager/vspherecluster-controller/default/smoke-test-19 "msg"="skipping load balancer reconciliation" "controlPlaneEndpoint"="192.168.3.227:6443" "reason"="Cluster.Spec.ControlPlaneEndpoint is already set"
I0403 16:22:32.747594 1 vspherecluster_controller.go:453] capv-controller-manager/vspherecluster-controller/default/smoke-test-19 "msg"="skipping control plane endpoint reconciliation" "controlPlaneEndpoint"="192.168.3.227:6443" "reason"="ControlPlaneEndpoint already set on Cluster"
I0403 16:22:32.747748 1 vspherecluster_controller.go:531] capv-controller-manager/vspherecluster-controller/default/smoke-test-19 "msg"="skipping reconcile when API server is online" "reason"="controlPlaneInitialized"
I0403 16:22:32.747315 1 vspherecluster_con
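The "skipping ..." lines above come from early-return guards in the reconciler: once Cluster.Spec.ControlPlaneEndpoint is populated, the load-balancer and endpoint steps become no-ops. A rough Go sketch of that guard pattern (illustrative only, not CAPV's actual code; clusterv1 here is cluster-api's v1alpha3 types):

package controllers

import (
	"fmt"

	"github.com/go-logr/logr"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// reconcileControlPlaneEndpoint sketches the early-return guard behind the
// "skipping control plane endpoint reconciliation" log lines above.
func reconcileControlPlaneEndpoint(log logr.Logger, cluster *clusterv1.Cluster) error {
	ep := cluster.Spec.ControlPlaneEndpoint
	if ep.Host != "" && ep.Port != 0 {
		log.Info("skipping control plane endpoint reconciliation",
			"controlPlaneEndpoint", fmt.Sprintf("%s:%d", ep.Host, ep.Port),
			"reason", "ControlPlaneEndpoint already set on Cluster")
		return nil
	}
	// Otherwise the controller would discover an endpoint and set it here.
	return nil
}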
2020-04-01T09:18:15.047165537Z stderr F [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-01T09:18:15.047217521Z stderr F 2020-04-01 09:18:15.046377 I | etcdmain: etcd Version: 3.4.3
2020-04-01T09:18:15.047225123Z stderr F 2020-04-01 09:18:15.046452 I | etcdmain: Git SHA: GitNotFound
2020-04-01T09:18:15.047230068Z stderr F 2020-04-01 09:18:15.046455 I | etcdmain: Go Version: go1.13.6
2020-04-01T09:18:15.047234779Z stderr F 2020-04-01 09:18:15.046458 I | etcdmain: Go OS/Arch: linux/amd64
2020-04-01T09:18:15.04723933Z stderr F 2020-04-01 09:18:15.046462 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-04-01T09:18:15.047247007Z stderr F 2020-04-01 09:18:15.046592 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2020-04-01T09:18:15.047360345Z stderr F [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-01T09:18:15.048031815Z stderr F 2020-04-0
jayunit100 / gist:694d18fdd03837ee98d592af9c170e9a (last active August 6, 2020)
hacking metrics into kubeadm
# 1) import component-base into the kubeadm package
# 2) add a few metrics as is done in pkg/kubelet/metrics/metrics.go (see the sketch after this list)
# 3) run "make kubeadm"
# 4) run "kubeadm init" on your mac
# 5) like this: ./_output/local/go/bin/kubeadm init
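Step 2 might look roughly like the sketch below, modeled on the counters in pkg/kubelet/metrics/metrics.go; the metric name and subsystem are invented for illustration:

package metrics

import (
	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

// InitInvocations is a hypothetical counter, registered once at startup
// and bumped from cmd/kubeadm wherever "kubeadm init" runs.
var InitInvocations = metrics.NewCounter(
	&metrics.CounterOpts{
		Subsystem:      "kubeadm",
		Name:           "init_invocations_total",
		Help:           "Number of times kubeadm init has been run.",
		StabilityLevel: metrics.ALPHA,
	},
)

func init() {
	legacyregistry.MustRegister(InitInvocations)
}

After that, bumping the counter is just InitInvocations.Inc() at the relevant call site in cmd/kubeadm.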
diff --git a/cmd/kubeadm/app/kubeadm.go b/cmd/kubeadm/app/kubeadm.go
index 1842cfd8cb4..0ab66a1cadb 100644
--- a/cmd/kubeadm/app/kubeadm.go
I0326 20:05:42.135640 1 main.go:197] entrypoint "msg"="creating controller manager"
I0326 20:05:42.995782 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0326 20:05:42.997105 1 main.go:208] entrypoint "msg"="starting controller manager"
I0326 20:05:42.997156 1 leaderelection.go:242] attempting to acquire leader lease capv-system/capv-controller-manager-runtime...
I0326 20:05:42.997243 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0326 20:05:43.010971 1 leaderelection.go:252] successfully acquired lease capv-system/capv-controller-manager-runtime
I0326 20:05:43.011387 1 recorder.go:52] controller-runtime/manager/events "msg"="Normal" "message"="capv-controller-manager-9c7f47c89-swzws_430f773a-a43a-44fa-89a1-970b19fc4e67 became leader" "object"={"kind":"ConfigMap","namespace":"capv-system","name":"capv-controller-manager-runtime","uid":"dbc8622c-
I0325 18:18:50.446446 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0325 18:18:50.447413 1 main.go:231] setup "msg"="starting manager"
I0325 18:18:50.447784 1 leaderelection.go:242] attempting to acquire leader lease capa-system/controller-leader-election-capa...
I0325 18:18:50.447900 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0325 18:18:50.460051 1 leaderelection.go:252] successfully acquired lease capa-system/controller-leader-election-capa
I0325 18:18:50.461612 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="awscluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"networkSpec":{"vpc":{}},"controlPlaneEndpoint":{"host":"","port":0},"bastion":{"enabled":false}},"status":{"ready":false,"network":{"apiServerElb":{"attributes":{}}}}}}
I0325 18:18:50.461471 1 controller.go:164] c
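Both startup sequences above (CAPV and CAPA) follow the stock controller-runtime pattern: build a manager with a metrics bind address and a leader-election ID, then start it. A hedged sketch, with the address and names taken from the CAPV log lines rather than its real main.go:

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Options mirror the log lines above: metrics on 127.0.0.1:8080 and a
	// leader-election lock named capv-controller-manager-runtime.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		MetricsBindAddress:      "127.0.0.1:8080",
		LeaderElection:          true,
		LeaderElectionID:        "capv-controller-manager-runtime",
		LeaderElectionNamespace: "capv-system",
	})
	if err != nil {
		panic(err)
	}
	// Blocks until the signal handler fires; leader election and the
	// metrics server start here, producing the log lines shown above.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}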
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
spec:
  selector:
message: 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
message: Certificate fetched from issuer successfully
message: Certificate issued successfully
message: Container image "gcr.io/k8s-prow-builds/cluster-api-aws-controller-amd64:dev" already present on machine
message: Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1" already present on machine
message: Container image "k8s.gcr.io/coredns:1.6.2" already present on machine
message: Container image "k8s.gcr.io/etcd:3.3.15-0" already present on machine
message: Container image "k8s.gcr.io/kube-apiserver:v1.16.4" already present on machine
message: Container image "k8s.gcr.io/kube-controller-manager:v1.16.4" already present on machine
message: Container image "k8s.gcr.io/kube-proxy:v1.16.4" already present on machine