==> Docker <==
-- Logs begin at Tue 2020-01-14 17:41:32 UTC, end at Tue 2020-01-14 17:44:33 UTC. -- | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.097921453Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d6fbda74600f34c8f7c8bfa384a659e8a369aa4c00cf650ae89acaa262e7d00/shim.sock" debug=false pid=6980 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.314972238Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0adb09986eb7bc84c540bcc948cb0cc891a7f63b1ca93c99810401c85f1a2cb1/shim.sock" debug=false pid=7114 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.322412868Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3a3eb1723c45efa042e707be766b87322620c47ecba3c1043d615f55a1c346cb/shim.sock" debug=false pid=7145 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.446113394Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f4ed9a00d42eebf37c48a492fa62270cd03eb9907202cfcd100e41d8f454eac7/shim.sock" debug=false pid=7278 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.448597369Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b75a3e808f55e9e9d88640979be294174cc50a723dce4c9ac2ada74fe1de031e/shim.sock" debug=false pid=7262 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.508455008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/76d14f40ccf57e48ae575c0aa184f10a91dd7d3ea6af9d40eb06aa82b8634c45/shim.sock" debug=false pid=7306 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.602382963Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/010ab3de3d8f17589af005763515c35f59ba9da873cb284ac8dd4ab6742a9f50/shim.sock" debug=false pid=7487 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.613106400Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6c7519833bc6349d0b570526b6c1d13c8e1b4010d42fe028cc522b8ad8f41169/shim.sock" debug=false pid=7495 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.667605168Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea47f3adadd713cc471f3f97c3cbdd1bbac6c266fc1276770109c40dc1724070/shim.sock" debug=false pid=7533 | |
Jan 14 17:42:20 knative dockerd[2006]: time="2020-01-14T17:42:20.891921682Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a9f865f50a5b5570d3949d0be2a77ddbdfb7a04c4e97db15c7211a89744c5fb0/shim.sock" debug=false pid=7723 | |
Jan 14 17:42:21 knative dockerd[2006]: time="2020-01-14T17:42:21.278978936Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4f26d55f827b0f60c5004f8a8c632797c02c55ede6637efcaa807e6f62c48a7e/shim.sock" debug=false pid=7883 | |
Jan 14 17:42:21 knative dockerd[2006]: time="2020-01-14T17:42:21.298711098Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7a499190b530b0ff838ebb5dcd9a9174b1d741936c90c4cacd9a899fdf07c551/shim.sock" debug=false pid=7893 | |
Jan 14 17:42:21 knative dockerd[2006]: time="2020-01-14T17:42:21.361991128Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e066213f08a3018ab4639b07fdc49ff2dc079e092583d84fb44904bac5bb6fc6/shim.sock" debug=false pid=7922 | |
Jan 14 17:42:21 knative dockerd[2006]: time="2020-01-14T17:42:21.498714604Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a15a88ccbe907fdef78d557730457ce588b6887efaa6eafbd5eb3b62312f2f23/shim.sock" debug=false pid=7973 | |
Jan 14 17:42:22 knative dockerd[2006]: time="2020-01-14T17:42:22.829733958Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a589755cd8334b9439a9112cfa3a2fb8b0eacd6e122ca1a2aebe802ac6f63a3c/shim.sock" debug=false pid=8494 | |
Jan 14 17:42:23 knative dockerd[2006]: time="2020-01-14T17:42:23.266890130Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9c35513307ffb3865eb7799d4a9c0d6ec61f9457a278e91fc3f98cdd201ff635/shim.sock" debug=false pid=8653 | |
Jan 14 17:42:39 knative dockerd[2006]: time="2020-01-14T17:42:39.211763411Z" level=info msg="shim reaped" id=ea47f3adadd713cc471f3f97c3cbdd1bbac6c266fc1276770109c40dc1724070 | |
Jan 14 17:42:39 knative dockerd[2006]: time="2020-01-14T17:42:39.222545518Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:39 knative dockerd[2006]: time="2020-01-14T17:42:39.222757328Z" level=warning msg="ea47f3adadd713cc471f3f97c3cbdd1bbac6c266fc1276770109c40dc1724070 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ea47f3adadd713cc471f3f97c3cbdd1bbac6c266fc1276770109c40dc1724070/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:40 knative dockerd[2006]: time="2020-01-14T17:42:40.956066920Z" level=info msg="shim reaped" id=8d96b82185da6e4476dc51a87da5b39840ffe946e5423bab36d671abe2b1716f | |
Jan 14 17:42:40 knative dockerd[2006]: time="2020-01-14T17:42:40.966387313Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:40 knative dockerd[2006]: time="2020-01-14T17:42:40.966513759Z" level=warning msg="8d96b82185da6e4476dc51a87da5b39840ffe946e5423bab36d671abe2b1716f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8d96b82185da6e4476dc51a87da5b39840ffe946e5423bab36d671abe2b1716f/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:48 knative dockerd[2006]: time="2020-01-14T17:42:48.319754806Z" level=info msg="shim reaped" id=be8626d312e349d0f80c6232cedbd56637c1919756a80ca84ae00784330c18cb | |
Jan 14 17:42:48 knative dockerd[2006]: time="2020-01-14T17:42:48.329824105Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:48 knative dockerd[2006]: time="2020-01-14T17:42:48.331535284Z" level=warning msg="be8626d312e349d0f80c6232cedbd56637c1919756a80ca84ae00784330c18cb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/be8626d312e349d0f80c6232cedbd56637c1919756a80ca84ae00784330c18cb/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:48 knative dockerd[2006]: time="2020-01-14T17:42:48.430035966Z" level=info msg="shim reaped" id=2357d1a66d0dc984bf027792b746e9130a75179db970de2e64f32867d0ed391e | |
Jan 14 17:42:48 knative dockerd[2006]: time="2020-01-14T17:42:48.440603520Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:48 knative dockerd[2006]: time="2020-01-14T17:42:48.440803723Z" level=warning msg="2357d1a66d0dc984bf027792b746e9130a75179db970de2e64f32867d0ed391e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2357d1a66d0dc984bf027792b746e9130a75179db970de2e64f32867d0ed391e/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:50 knative dockerd[2006]: time="2020-01-14T17:42:50.996707242Z" level=info msg="shim reaped" id=bae6be4055702b8c869619c64e884707beb4c1091ebb2233ea235c5b5d48e4fd | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.006933430Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.007030522Z" level=warning msg="bae6be4055702b8c869619c64e884707beb4c1091ebb2233ea235c5b5d48e4fd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bae6be4055702b8c869619c64e884707beb4c1091ebb2233ea235c5b5d48e4fd/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.437637268Z" level=info msg="shim reaped" id=7d6c844255f8c00528c6ff3fe93c13e53ebef1502bc43603fbba2121b5144fee | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.448369747Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.448596095Z" level=warning msg="7d6c844255f8c00528c6ff3fe93c13e53ebef1502bc43603fbba2121b5144fee cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7d6c844255f8c00528c6ff3fe93c13e53ebef1502bc43603fbba2121b5144fee/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.450153314Z" level=info msg="shim reaped" id=4f26d55f827b0f60c5004f8a8c632797c02c55ede6637efcaa807e6f62c48a7e | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.458482920Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.458599682Z" level=warning msg="4f26d55f827b0f60c5004f8a8c632797c02c55ede6637efcaa807e6f62c48a7e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4f26d55f827b0f60c5004f8a8c632797c02c55ede6637efcaa807e6f62c48a7e/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.540129419Z" level=info msg="shim reaped" id=f82d7a6688532d046ece986b28fa0adcb2f813269d9098688279db659909d506 | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.550375851Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:51 knative dockerd[2006]: time="2020-01-14T17:42:51.550485669Z" level=warning msg="f82d7a6688532d046ece986b28fa0adcb2f813269d9098688279db659909d506 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f82d7a6688532d046ece986b28fa0adcb2f813269d9098688279db659909d506/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:52 knative dockerd[2006]: time="2020-01-14T17:42:52.610194076Z" level=info msg="shim reaped" id=18cb83393b3ec31c041de13242147f0f55a1afb9b39672daa8117eef1d655ace | |
Jan 14 17:42:52 knative dockerd[2006]: time="2020-01-14T17:42:52.618702547Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:52 knative dockerd[2006]: time="2020-01-14T17:42:52.618858376Z" level=warning msg="18cb83393b3ec31c041de13242147f0f55a1afb9b39672daa8117eef1d655ace cleanup: failed to unmount IPC: umount /var/lib/docker/containers/18cb83393b3ec31c041de13242147f0f55a1afb9b39672daa8117eef1d655ace/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:52 knative dockerd[2006]: time="2020-01-14T17:42:52.704774485Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/42a67545add649236d851bd7597e4133c5f264d815dcd4d22fc26a5886fc53e5/shim.sock" debug=false pid=11645 | |
Jan 14 17:42:58 knative dockerd[2006]: time="2020-01-14T17:42:58.049495007Z" level=info msg="shim reaped" id=3b45ec1ecfb157ec90484a7fee2580a8be6c9519b150acc6b03ffe9274087dde | |
Jan 14 17:42:58 knative dockerd[2006]: time="2020-01-14T17:42:58.059849610Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:42:58 knative dockerd[2006]: time="2020-01-14T17:42:58.060176099Z" level=warning msg="3b45ec1ecfb157ec90484a7fee2580a8be6c9519b150acc6b03ffe9274087dde cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3b45ec1ecfb157ec90484a7fee2580a8be6c9519b150acc6b03ffe9274087dde/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:42:58 knative dockerd[2006]: time="2020-01-14T17:42:58.150280461Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a2f7aceade69504e0e2920a6bd210e2c546ed74b0996e87da2e7c562de496c9c/shim.sock" debug=false pid=12224 | |
Jan 14 17:42:58 knative dockerd[2006]: time="2020-01-14T17:42:58.343835695Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6f8f5ee1561fa6c8507c34add13633a08c313cdbb46db63fcd2520a892812eab/shim.sock" debug=false pid=12282 | |
Jan 14 17:42:59 knative dockerd[2006]: time="2020-01-14T17:42:59.894972077Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/714560dcc9c05b0983e6c52818f0fdcc8a3f65adfb2c117daf2eeca8540ec10a/shim.sock" debug=false pid=12378 | |
Jan 14 17:43:00 knative dockerd[2006]: time="2020-01-14T17:43:00.957783429Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/563ca75161576886b081f39c0303cf9507ff7053dcb03be71234aad95c72541d/shim.sock" debug=false pid=12610 | |
Jan 14 17:43:02 knative dockerd[2006]: time="2020-01-14T17:43:02.847899443Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0699480a0da36b33d595b7a1e891a332de7bbd704ab797c57e34b02cc832c17/shim.sock" debug=false pid=13412 | |
Jan 14 17:43:03 knative dockerd[2006]: time="2020-01-14T17:43:03.875867685Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c126cd1ff8e50ab9d3bf7b977047356ad9499840059aa327da45865aa83f4806/shim.sock" debug=false pid=13661 | |
Jan 14 17:43:04 knative dockerd[2006]: time="2020-01-14T17:43:04.823177380Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d6c9473626bffdcad6032c937ab87810978696d126c5d5459e1614f0b4151afe/shim.sock" debug=false pid=13725 | |
Jan 14 17:43:05 knative dockerd[2006]: time="2020-01-14T17:43:05.872379952Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9e544ec3e58bc484b251902fa9df3560102736439bc92004166d2ba367fd5709/shim.sock" debug=false pid=13793 | |
Jan 14 17:43:07 knative dockerd[2006]: time="2020-01-14T17:43:07.839093949Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9253397be11b9b7d475420c44c8e5d782cc8bfbd73d9e69139fc086a59525237/shim.sock" debug=false pid=14144 | |
Jan 14 17:43:13 knative dockerd[2006]: time="2020-01-14T17:43:13.104241248Z" level=info msg="shim reaped" id=9253397be11b9b7d475420c44c8e5d782cc8bfbd73d9e69139fc086a59525237 | |
Jan 14 17:43:13 knative dockerd[2006]: time="2020-01-14T17:43:13.112307267Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jan 14 17:43:13 knative dockerd[2006]: time="2020-01-14T17:43:13.112422292Z" level=warning msg="9253397be11b9b7d475420c44c8e5d782cc8bfbd73d9e69139fc086a59525237 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9253397be11b9b7d475420c44c8e5d782cc8bfbd73d9e69139fc086a59525237/mounts/shm, flags: 0x2: no such file or directory" | |
Jan 14 17:43:39 knative dockerd[2006]: time="2020-01-14T17:43:39.823708589Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a386a7c5c7ee06311bd9baa9dc883300fe3dcd3b9a9e85f9cef499d4859f7877/shim.sock" debug=false pid=17252 | |
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
a386a7c5c7ee0 497f3a46b9958 54 seconds ago Running dispatcher 3 e63b4407cca9d | |
9253397be11b9 497f3a46b9958 About a minute ago Exited dispatcher 2 e63b4407cca9d | |
9e544ec3e58bc 250137949d942 About a minute ago Running dispatcher 2 ed7e40f79dd79 | |
d6c9473626bff 989a6a677c198 About a minute ago Running autoscaler-hpa 2 a938b2ee9af7d | |
c126cd1ff8e50 200e25c692324 About a minute ago Running eventing-controller 2 8986a3c07c715 | |
f0699480a0da3 4689081edb103 About a minute ago Running storage-provisioner 3 5fd2d5135019a | |
563ca75161576 cd8e75902f715 About a minute ago Running controller 2 0b25249c9f9f2 | |
714560dcc9c05 5387eae023d64 About a minute ago Running kafka 2 edcdc83b6c3ba | |
6f8f5ee1561fa 0ce50ba614f4c About a minute ago Running user-operator 2 62edbd6d96ec8 | |
a2f7aceade695 0ce50ba614f4c About a minute ago Running topic-operator 2 62edbd6d96ec8 | |
42a67545add64 fb198be4d6134 About a minute ago Running activator 3 a9ab90c159d3d | |
9c35513307ffb 5387eae023d64 2 minutes ago Running tls-sidecar 1 62edbd6d96ec8 | |
a589755cd8334 5387eae023d64 2 minutes ago Running tls-sidecar 1 edcdc83b6c3ba | |
a15a88ccbe907 434cf27801372 2 minutes ago Running networking-istio 1 aabf4b916cfbc | |
e066213f08a30 559ee0fd2eeed 2 minutes ago Running kafka-webhook 1 adec3c6384f1f | |
7a499190b530b c963f785564ed 2 minutes ago Running filter 1 e6e07c4b4ebd4 | |
6d6fbda74600f 40efaaa99e4c8 2 minutes ago Running webhook 1 6ab340dfd5666 | |
ea47f3adadd71 0ce50ba614f4c 2 minutes ago Exited user-operator 1 62edbd6d96ec8 | |
6c7519833bc63 7d4ef46d90cc9 2 minutes ago Running ingress 1 9fbbda61a9ee6 | |
010ab3de3d8f1 eb516548c180f 2 minutes ago Running coredns 1 f11e19568f049 | |
a9f865f50a5b5 7deacf5dfe8ce 2 minutes ago Running autoscaler 1 3676036283fd0 | |
76d14f40ccf57 9b2002b97cf25 2 minutes ago Running controller 1 5d2b5c7eb1d0f | |
f4ed9a00d42ee 34c6d612f402e 2 minutes ago Running manager 1 bf76a74637963 | |
3a3eb1723c45e c2a449c9f8344 2 minutes ago Running registry 1 fa5db17d02c0b | |
0adb09986eb7b 5e6171c97cac3 2 minutes ago Running manager 1 ab08166c1935d | |
b75a3e808f55e 5387eae023d64 2 minutes ago Running tls-sidecar 1 cf01733f5b4b6 | |
b4f8699af7f90 fc7708872568d 2 minutes ago Running eventing-webhook 1 6b537968149ad | |
77c6e7f8013b2 7b1ea899c41ff 2 minutes ago Running discovery 1 2cf59b1abe3ee | |
e45394dc14017 867903962419b 2 minutes ago Running controller 1 5b9ff3fd5284a | |
18cb83393b3ec fb198be4d6134 2 minutes ago Exited activator 2 a9ab90c159d3d | |
e12b7aa588437 3a40dfc6c327e 2 minutes ago Running controller 1 7ceed88d144a9 | |
224a1f61cf72a 0ce50ba614f4c 2 minutes ago Running strimzi-cluster-operator 1 beea8dad274cf | |
76244d25b5902 50b86c1a22337 2 minutes ago Running istio-proxy 1 ca786bc681035 | |
bae6be4055702 200e25c692324 2 minutes ago Exited eventing-controller 1 8986a3c07c715 | |
99d2afdb3987d 230ef35b6bb8e 2 minutes ago Running kube-proxy 1 155539dc7cebc | |
f82d7a6688532 250137949d942 2 minutes ago Exited dispatcher 1 ed7e40f79dd79 | |
7d6c844255f8c 989a6a677c198 2 minutes ago Exited autoscaler-hpa 1 a938b2ee9af7d | |
e7543d5fdf014 50b86c1a22337 2 minutes ago Running istio-proxy 1 f566c83cd63e4 | |
2ccc1d297f621 5387eae023d64 2 minutes ago Running zookeeper 1 cf01733f5b4b6 | |
3b45ec1ecfb15 0ce50ba614f4c 2 minutes ago Exited topic-operator 1 62edbd6d96ec8 | |
8aad0227042e2 60dc18151daf8 2 minutes ago Running registry-proxy 1 a32e0607310b8 | |
47d2a93292e68 eb516548c180f 2 minutes ago Running coredns 1 985bd761368db | |
be8626d312e34 cd8e75902f715 2 minutes ago Exited controller 1 0b25249c9f9f2 | |
2357d1a66d0dc 4689081edb103 2 minutes ago Exited storage-provisioner 2 5fd2d5135019a | |
7cb119b26b564 2c4adeb21b4ff 2 minutes ago Running etcd 1 b50ea4201d3fa | |
f1034fa99854c 02d90e9441623 2 minutes ago Running kube-controller-manager 1 cbcfabf04594d | |
7d3fd053187fa 72c01550199f8 2 minutes ago Running kube-scheduler 1 60eeea43b949b | |
06771fafbf2cb 364c383af37c1 2 minutes ago Running kube-apiserver 0 7208aa1ba87f0 | |
40e89b325ff8e bd12a212f9dcb 2 minutes ago Running kube-addon-manager 1 81ccbfc8b0419 | |
20cf2aadacf47 gcr.io/knative-releases/knative.dev/eventing/cmd/broker/ingress@sha256:0f671b2c3f6ea952cb314b7e7d7ec929702c41c47f59cce1044cf7daa6212d2c 24 minutes ago Exited ingress 0 274ce6238e1ca | |
6b48399bcf38d gcr.io/knative-releases/knative.dev/eventing/cmd/broker/filter@sha256:4cde6893d8763c1c8c52625338d698d5bf6857cf2c37e8e187c5d5a84d75514d 25 minutes ago Exited filter 0 d357131a8831c | |
793f3bb8145d3 gcr.io/knative-releases/knative.dev/eventing-contrib/kafka/channel/cmd/webhook@sha256:f5147ff4ecb0b3feaf04afcd923257548b12ce9550fcfef52268cff3d0fdcd9f 25 minutes ago Exited kafka-webhook 0 b1f4da843d9ae | |
2c1fb5a02d53b gcr.io/knative-releases/knative.dev/eventing-contrib/camel/source/cmd/controller@sha256:6830c4e6623acdf73b786f983f6ddc423ab0f290291d4573ad72ca2df3dec567 25 minutes ago Exited manager 0 5605e974c5470 | |
4df090c9876cd gcr.io/knative-releases/knative.dev/eventing-contrib/kafka/source/cmd/controller@sha256:ffc6ed14e766c2e74d0ba6362c6b30e3f64e06198958b22aee7d815870a56fd5 25 minutes ago Exited manager 0 0b3ef6546c2bc | |
a65365be03e4a gcr.io/knative-releases/knative.dev/eventing/cmd/in_memory/channel_controller@sha256:67cf35921e6ba4d8d5027637bdb9f0bec328e0ba5706fb0ea4eb32187a77bd0b 26 minutes ago Exited controller 0 9ea68f3de1460 | |
1b2b2cbb6587a gcr.io/knative-releases/knative.dev/eventing/cmd/webhook@sha256:75b2dfaaf279b98c2e90b02414b2255aebbc58b23beeba838feba57b09da12b6 26 minutes ago Exited eventing-webhook 0 3c4f1ff8d448c | |
5b8ba935b21a7 gcr.io/knative-releases/knative.dev/eventing/cmd/sources_controller@sha256:0df4cfcf82998eccf687a08a456f60578190e68175a441bcd3c26de7a4869739 26 minutes ago Exited controller 0 cad39c186c53b | |
d915d19cee617 gcr.io/knative-releases/knative.dev/serving/cmd/webhook@sha256:b1f72e974058576faf5f62d984d2ce4edb18b12ae4a4cd673d3fffe7d7706837 27 minutes ago Exited webhook 0 ca6cc199b6c97 | |
3260ea1788dd7 gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler@sha256:e54439449a4c90934f5c9025914e798c21808237688d2b7edb961ca96942ad73 27 minutes ago Exited autoscaler 0 260380e776bb4 | |
5b79d09477348 gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:1fefee2a41f33969ba0e2f398afcf0e308a0397b6b7d65d4b3630ab3229f764c 27 minutes ago Exited controller 0 f609cd5382100 | |
53404f7c76fa0 gcr.io/knative-releases/knative.dev/serving/cmd/networking/istio@sha256:0c4565fe9f49fb5e91fd745ebbc3f1f75dfc78b74c34e0f462a9c89651b86fe2 27 minutes ago Exited networking-istio 0 8e62e36adade1 | |
e46de18418730 istio/proxyv2@sha256:245b1b40003654e9a9e3757196fa3cb506439cf8c98792c1552300528d8aea14 28 minutes ago Exited istio-proxy 0 dd9c97a5a6f48 | |
4d4687900416d istio/proxyv2@sha256:245b1b40003654e9a9e3757196fa3cb506439cf8c98792c1552300528d8aea14 28 minutes ago Exited istio-proxy 0 33ca0b332ba51 | |
9a0c2dec393c3 5387eae023d64 28 minutes ago Exited tls-sidecar 0 847f298c93520 | |
f5f9b30fc6f04 istio/pilot@sha256:17aefbe996d67e9fdf5dbba90bdcf030333a0832a34bc66469472ce61d2eed76 28 minutes ago Exited discovery 0 5d790eae41a4d | |
37c8faf5a4af4 5387eae023d64 28 minutes ago Exited tls-sidecar 0 5d8238b0624bc | |
03ea7ff044d1e 5387eae023d64 28 minutes ago Exited tls-sidecar 0 903f991cf6a3f | |
040509d9b2f91 strimzi/kafka@sha256:179118d46c45cb37681176cbbf9c548d17e08152ad619548ec6ca532f578a1a3 28 minutes ago Exited zookeeper 0 903f991cf6a3f | |
283aa803112f7 strimzi/operator@sha256:c4e6c47444e45cef133aa7b34ef29fe2ebf9d3edc09c946c78db6a4359f4312d 29 minutes ago Exited strimzi-cluster-operator 0 d5df3c9d5e055 | |
51657d1b8d386 registry.hub.docker.com/library/registry@sha256:5eaafa2318aa0c4c52f95077c2a68bed0b13f6d2b464835723d4de1484052299 29 minutes ago Exited registry 0 f05edfc83475c | |
17f9910cd0c81 gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 29 minutes ago Exited registry-proxy 0 b7c4fa8a4d37a | |
eacd7cb4b0d8e eb516548c180f 30 minutes ago Exited coredns 0 50870500d9e0c | |
56fd4323d8f5f eb516548c180f 30 minutes ago Exited coredns 0 763e9d39c2d22 | |
d18541f5719e8 230ef35b6bb8e 30 minutes ago Exited kube-proxy 0 c31a53078be9d | |
047c3020ad3db 72c01550199f8 30 minutes ago Exited kube-scheduler 0 1760c0d4b8b1b | |
917c1135818ef 02d90e9441623 30 minutes ago Exited kube-controller-manager 0 05bca94d4b8c8 | |
eb0cc6975448a 2c4adeb21b4ff 30 minutes ago Exited etcd 0 31958c14db4ad | |
c3cb25d5c11eb bd12a212f9dcb 30 minutes ago Exited kube-addon-manager 0 387360f461b8e | |
==> coredns ["010ab3de3d8f"] <==
.:53 | |
2020-01-14T17:42:23.053Z [INFO] CoreDNS-1.3.1 | |
2020-01-14T17:42:23.054Z [INFO] linux/amd64, go1.11.4, 6b56a9c | |
CoreDNS-1.3.1 | |
linux/amd64, go1.11.4, 6b56a9c | |
2020-01-14T17:42:23.054Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669 | |
==> coredns ["47d2a93292e6"] <==
.:53 | |
2020-01-14T17:42:24.392Z [INFO] CoreDNS-1.3.1 | |
2020-01-14T17:42:24.392Z [INFO] linux/amd64, go1.11.4, 6b56a9c | |
CoreDNS-1.3.1 | |
linux/amd64, go1.11.4, 6b56a9c | |
2020-01-14T17:42:24.392Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669 | |
E0114 17:42:49.393815 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout | |
E0114 17:42:49.393819 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout | |
E0114 17:42:49.393815 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout | |
==> coredns ["56fd4323d8f5"] <==
.:53 | |
2020-01-14T17:14:29.720Z [INFO] CoreDNS-1.3.1 | |
2020-01-14T17:14:29.720Z [INFO] linux/amd64, go1.11.4, 6b56a9c | |
CoreDNS-1.3.1 | |
linux/amd64, go1.11.4, 6b56a9c | |
2020-01-14T17:14:29.720Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669 | |
[INFO] SIGTERM: Shutting down servers then terminating | |
==> coredns ["eacd7cb4b0d8"] <==
.:53 | |
2020-01-14T17:14:29.776Z [INFO] CoreDNS-1.3.1 | |
2020-01-14T17:14:29.776Z [INFO] linux/amd64, go1.11.4, 6b56a9c | |
CoreDNS-1.3.1 | |
linux/amd64, go1.11.4, 6b56a9c | |
2020-01-14T17:14:29.776Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669 | |
E0114 17:30:09.567832 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug="" | |
E0114 17:30:09.569574 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug="" | |
E0114 17:30:09.569715 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug="" | |
E0114 17:30:09.573034 1 reflector.go:251] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=2123&timeout=6m30s&timeoutSeconds=390&watch=true: dial tcp 10.96.0.1:443: connect: connection refused | |
E0114 17:30:09.572770 1 reflector.go:251] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2059&timeout=7m34s&timeoutSeconds=454&watch=true: dial tcp 10.96.0.1:443: connect: connection refused | |
E0114 17:30:09.573081 1 reflector.go:251] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=3023&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp 10.96.0.1:443: connect: connection refused | |
[INFO] SIGTERM: Shutting down servers then terminating | |
==> dmesg <==
[Jan14 17:41] ERROR: earlyprintk= earlyser already used | |
[ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED | |
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly | |
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it | |
[ +0.098146] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177) | |
[ +18.480659] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) | |
[ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) | |
[ +0.008954] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 | |
[ +1.909849] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument | |
[ +0.003877] systemd-fstab-generator[1118]: Ignoring "noauto" for root device | |
[ +0.003667] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. | |
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) | |
[ +0.698382] vboxguest: loading out-of-tree module taints kernel. | |
[ +0.003278] vboxguest: PCI device not found, probably running on physical hardware. | |
[ +0.120247] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. | |
[ +14.813723] systemd-fstab-generator[1919]: Ignoring "noauto" for root device | |
[ +8.729082] systemd-fstab-generator[2327]: Ignoring "noauto" for root device | |
[ +2.464947] kauditd_printk_skb: 110 callbacks suppressed | |
[Jan14 17:42] kauditd_printk_skb: 20 callbacks suppressed | |
[ +6.076214] kauditd_printk_skb: 86 callbacks suppressed | |
[ +14.315156] kauditd_printk_skb: 35 callbacks suppressed | |
[ +7.886108] kauditd_printk_skb: 32 callbacks suppressed | |
[ +8.121147] kauditd_printk_skb: 32 callbacks suppressed | |
[Jan14 17:43] kauditd_printk_skb: 8 callbacks suppressed | |
[ +12.593284] kauditd_printk_skb: 2 callbacks suppressed | |
[ +20.660774] NFSD: Unable to end grace period: -110 | |
[ +6.098043] kauditd_printk_skb: 8 callbacks suppressed | |
==> kernel <==
17:44:33 up 3 min, 0 users, load average: 3.95, 2.47, 1.01 | |
Linux knative 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux | |
PRETTY_NAME="Buildroot 2019.02.7" | |
==> kube-addon-manager ["40e89b325ff8"] <==
error: no objects passed to apply | |
error: no objects passed to apply | |
error: no objects passed to apply | |
error: no objects passed to apply | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:44:04+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:44:07+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:44:08+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:44:13+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:44:14+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:44:17+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:44:19+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:44:22+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
error: no objects passed to apply | |
error: no objects passed to apply | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:44:23+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:44:28+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:44:29+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:44:32+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
==> kube-addon-manager ["c3cb25d5c11e"] <==
error: no objects passed to apply | |
error: no objects passed to apply | |
error: no objects passed to apply | |
error: no objects passed to apply | |
error: no objects passed to apply | |
error: no objects passed to apply | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:29:43+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:29:44+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:29:47+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:29:49+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:29:52+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:29:53+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:29:57+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:29:59+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:30:02+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:30:04+00:00 == | |
INFO: Leader election disabled. | |
INFO: == Kubernetes addon ensure completed at 2020-01-14T17:30:07+00:00 == | |
INFO: == Reconciling with deprecated label == | |
INFO: == Reconciling with addon-manager label == | |
daemonset.apps/registry-proxy unchanged | |
replicationcontroller/registry unchanged | |
service/registry unchanged | |
serviceaccount/storage-provisioner unchanged | |
INFO: == Kubernetes addon reconcile completed at 2020-01-14T17:30:08+00:00 == | |
==> kube-apiserver ["06771fafbf2c"] <==
W0114 17:42:36.271848 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.313990 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.321247 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.333950 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.337594 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.341786 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.343902 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.346173 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.347560 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.354750 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.356634 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.363842 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
W0114 17:42:36.365750 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: dial tcp 10.105.116.151:443: connect: connection refused | |
I0114 17:42:36.593390 1 client.go:352] parsed scheme: "" | |
I0114 17:42:36.593421 1 client.go:352] scheme "" not registered, fallback to default scheme | |
I0114 17:42:36.593462 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
I0114 17:42:36.593501 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.600106 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.600904 1 client.go:352] parsed scheme: "" | |
I0114 17:42:36.600913 1 client.go:352] scheme "" not registered, fallback to default scheme | |
I0114 17:42:36.600932 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
I0114 17:42:36.601106 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.608702 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.743146 1 client.go:352] parsed scheme: "" | |
I0114 17:42:36.743180 1 client.go:352] scheme "" not registered, fallback to default scheme | |
I0114 17:42:36.743201 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
I0114 17:42:36.743254 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.750747 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.793447 1 client.go:352] parsed scheme: "" | |
I0114 17:42:36.793479 1 client.go:352] scheme "" not registered, fallback to default scheme | |
I0114 17:42:36.793510 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
I0114 17:42:36.793535 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
E0114 17:42:36.796350 1 controller.go:114] loading OpenAPI spec for "v1beta1.custom.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error: 'dial tcp 10.108.207.105:443: i/o timeout' | |
Trying to reach: 'https://10.108.207.105:443/openapi/v2', Header: map[] | |
I0114 17:42:36.796383 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.custom.metrics.k8s.io: Rate Limited Requeue. | |
I0114 17:42:36.802395 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.893430 1 client.go:352] parsed scheme: "" | |
I0114 17:42:36.893461 1 client.go:352] scheme "" not registered, fallback to default scheme | |
I0114 17:42:36.893517 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
I0114 17:42:36.893585 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.906711 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:36.994297 1 client.go:352] parsed scheme: "" | |
I0114 17:42:36.994326 1 client.go:352] scheme "" not registered, fallback to default scheme | |
I0114 17:42:36.994355 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
I0114 17:42:36.994463 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
I0114 17:42:37.009246 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
W0114 17:42:37.588898 1 dispatcher.go:74] Failed calling webhook, failing closed sinkbindings.webhook.sources.knative.dev: failed calling webhook "sinkbindings.webhook.sources.knative.dev": Post https://eventing-webhook.knative-eventing.svc:443/sinkbindings?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
I0114 17:42:37.589339 1 trace.go:81] Trace[62971283]: "Create /apis/apps/v1/namespaces/kube-system/deployments" (started: 2020-01-14 17:42:07.582030757 +0000 UTC m=+6.453601656) (total time: 30.007293043s): | |
Trace[62971283]: [30.007293043s] [30.001427106s] END | |
I0114 17:42:46.431134 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.custom.metrics.k8s.io | |
E0114 17:42:46.432856 1 controller.go:114] loading OpenAPI spec for "v1beta1.custom.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 415, Body: invalid Content-Type, want `application/json` | |
, Header: map[Content-Length:[46] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 14 Jan 2020 17:42:46 GMT] X-Content-Type-Options:[nosniff]] | |
I0114 17:42:46.432899 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.custom.metrics.k8s.io: Rate Limited Requeue. | |
I0114 17:43:05.926067 1 controller.go:606] quota admission added evaluator for: deployments.apps | |
I0114 17:43:46.433188 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.custom.metrics.k8s.io | |
E0114 17:43:46.434060 1 controller.go:114] loading OpenAPI spec for "v1beta1.custom.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 415, Body: invalid Content-Type, want `application/json` | |
, Header: map[Content-Length:[46] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 14 Jan 2020 17:43:46 GMT] X-Content-Type-Options:[nosniff]] | |
I0114 17:43:46.434087 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.custom.metrics.k8s.io: Rate Limited Requeue. | |
I0114 17:44:25.821545 1 controller.go:606] quota admission added evaluator for: statefulsets.apps | |
I0114 17:44:26.580643 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io | |
==> kube-controller-manager ["917c1135818e"] <==
I0114 17:28:25.116068 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:25.116558 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:25.119031 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:25.119280 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:25.119568 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:40.117202 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:40.117764 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:40.117795 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:40.119518 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:40.119561 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:40.119591 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:40.122286 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:40.122325 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:40.122341 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:56.226278 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:56.226312 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:56.226330 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:56.228581 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:56.228776 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:56.228796 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:28:56.230872 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:56.230895 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:28:56.230905 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:11.231425 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:11.231659 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:11.231690 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:11.233719 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:11.233930 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:11.233982 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:11.236766 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:11.236813 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:11.236824 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:21.384136 1 pvc_protection_controller.go:138] PVC kafka/data-my-cluster-zookeeper-0 failed with : Operation cannot be fulfilled on persistentvolumeclaims "data-my-cluster-zookeeper-0": the object has been modified; please apply your changes to the latest version and try again | |
E0114 17:29:27.340958 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:27.341117 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:27.341266 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:27.343585 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:27.343761 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:27.343883 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:27.345723 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:27.345953 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:27.345983 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:42.344962 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:42.345246 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:42.345326 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:42.347003 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:42.347387 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:42.347415 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:42.348920 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:42.348955 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:42.348967 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:58.453293 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:58.453430 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:58.453486 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:58.456225 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:58.456339 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:58.456400 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:29:58.458949 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:58.459073 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:29:58.459165 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
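The repeated FailedGetResourceMetric / FailedComputeMetricsReplicas warnings above show the HorizontalPodAutoscalers for istio-ingressgateway, istio-pilot and the Knative activator failing because nothing in the cluster serves the metrics.k8s.io API (no metrics-server), so CPU utilization can never be read. A minimal way to confirm and address this, assuming this is a stock minikube profile with the bundled metrics-server addon available:

  # confirm that no resource-metrics APIService is registered/available
  $ kubectl get apiservices | grep metrics.k8s.io

  # enable metrics-server, then watch the HPAs recover
  $ minikube addons enable metrics-server
  $ kubectl get hpa -n istio-system -w

Until a metrics backend is present, these warnings simply repeat on every HPA sync interval and are otherwise harmless.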
==> kube-controller-manager ["f1034fa99854"] <== | |
I0114 17:43:07.075705 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:07.075874 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:07.079919 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:07.080000 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:07.080035 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:09.091803 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: invalid Content-Type, want `application/json` | |
W0114 17:43:13.297235 1 garbagecollector.go:644] failed to discover some groups: map[custom.metrics.k8s.io/v1beta1:invalid Content-Type, want `application/json`] | |
E0114 17:43:20.953051 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:20.953069 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:20.953109 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:20.957810 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:20.958218 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:20.958255 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:20.960979 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:20.961153 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:20.961277 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:37.050534 1 memcache.go:199] couldn't get resource list for custom.metrics.k8s.io/v1beta1: invalid Content-Type, want `application/json` | |
E0114 17:43:37.054316 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:37.054479 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:37.054506 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:37.057787 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:37.057837 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:37.057858 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:37.063030 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:37.063063 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:37.063077 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:40.193960 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: invalid Content-Type, want `application/json` | |
W0114 17:43:46.999274 1 garbagecollector.go:644] failed to discover some groups: map[custom.metrics.k8s.io/v1beta1:invalid Content-Type, want `application/json`] | |
E0114 17:43:50.952766 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:50.952906 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:50.952946 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:52.061344 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:52.061409 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:52.061430 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:43:52.068913 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:52.068952 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:43:52.068964 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:07.051466 1 memcache.go:199] couldn't get resource list for custom.metrics.k8s.io/v1beta1: invalid Content-Type, want `application/json` | |
E0114 17:44:07.055887 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:07.055947 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:07.055974 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:07.065571 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:07.065979 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:07.066011 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:07.078626 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:07.078731 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:07.078777 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:11.296730 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: invalid Content-Type, want `application/json` | |
W0114 17:44:20.701877 1 garbagecollector.go:644] failed to discover some groups: map[custom.metrics.k8s.io/v1beta1:invalid Content-Type, want `application/json`] | |
E0114 17:44:20.953750 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-pilot: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:20.953819 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:20.953841 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-pilot", UID:"8b738f49-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"981", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:22.070513 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:22.070705 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:22.070794 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"knative-serving", Name:"activator", UID:"a4b3dcaf-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1351", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:22.081156 1 horizontal.go:214] failed to compute desired number of replicas based on listed metrics for Deployment/istio-system/istio-ingressgateway: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:22.081195 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
I0114 17:44:22.081207 1 event.go:209] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"istio-system", Name:"istio-ingressgateway", UID:"8b62e1c3-36f1-11ea-be34-5e653e2cb35e", APIVersion:"autoscaling/v2beta2", ResourceVersion:"979", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) | |
E0114 17:44:26.110815 1 pvc_protection_controller.go:138] PVC kafka/data-0-my-cluster-kafka-0 failed with : Operation cannot be fulfilled on persistentvolumeclaims "data-0-my-cluster-kafka-0": the object has been modified; please apply your changes to the latest version and try again | |
E0114 17:44:26.981240 1 pvc_protection_controller.go:138] PVC kafka/data-my-cluster-zookeeper-0 failed with : Operation cannot be fulfilled on persistentvolumeclaims "data-my-cluster-zookeeper-0": the object has been modified; please apply your changes to the latest version and try again | |
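Two further error families appear in this newer controller-manager instance. The custom.metrics.k8s.io/v1beta1 discovery failures ("invalid Content-Type, want `application/json`") mean an APIService for that group is registered but its backing service is not replying with JSON, which also trips the resource-quota controller and the garbage collector during API discovery. The PVC protection errors ("the object has been modified") are ordinary optimistic-concurrency conflicts while the Kafka/ZooKeeper claims in the kafka namespace are being updated, and they are retried automatically. To inspect the unhealthy aggregated API, assuming the conventional APIService object name of <version>.<group>:

  $ kubectl get apiservices | grep custom.metrics
  $ kubectl get apiservice v1beta1.custom.metrics.k8s.io -o yaml   # check .status.conditions and the backing service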
==> kube-proxy ["99d2afdb3987"] <== | |
W0114 17:42:21.630191 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy | |
I0114 17:42:21.678027 1 server_others.go:146] Using iptables Proxier. | |
W0114 17:42:21.678715 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic | |
I0114 17:42:21.679939 1 server.go:562] Version: v1.14.7 | |
I0114 17:42:21.920379 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 | |
I0114 17:42:21.920420 1 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
I0114 17:42:21.920472 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
I0114 17:42:21.920498 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
I0114 17:42:21.923757 1 config.go:202] Starting service config controller | |
I0114 17:42:21.923793 1 controller_utils.go:1027] Waiting for caches to sync for service config controller | |
I0114 17:42:21.923829 1 config.go:102] Starting endpoints config controller | |
I0114 17:42:21.923839 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller | |
I0114 17:42:22.030836 1 controller_utils.go:1034] Caches are synced for service config controller | |
I0114 17:42:22.031042 1 controller_utils.go:1034] Caches are synced for endpoints config controller | |
==> kube-proxy ["d18541f5719e"] <== | |
W0114 17:14:28.211774 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy | |
I0114 17:14:28.223198 1 server_others.go:146] Using iptables Proxier. | |
W0114 17:14:28.223283 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic | |
I0114 17:14:28.223402 1 server.go:562] Version: v1.14.7 | |
I0114 17:14:28.228966 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 | |
I0114 17:14:28.229022 1 conntrack.go:52] Setting nf_conntrack_max to 131072 | |
I0114 17:14:28.229073 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 | |
I0114 17:14:28.229102 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 | |
I0114 17:14:28.229212 1 config.go:102] Starting endpoints config controller | |
I0114 17:14:28.229274 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller | |
I0114 17:14:28.229309 1 config.go:202] Starting service config controller | |
I0114 17:14:28.229318 1 controller_utils.go:1027] Waiting for caches to sync for service config controller | |
I0114 17:14:28.329563 1 controller_utils.go:1034] Caches are synced for endpoints config controller | |
I0114 17:14:28.329516 1 controller_utils.go:1034] Caches are synced for service config controller | |
E0114 17:30:09.564187 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3023&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused | |
E0114 17:30:09.564449 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2123&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused | |
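The two kube-proxy containers tell the restart story: the older instance ("d18541f5719e", up since 17:14) lost its Service/Endpoints watches at 17:30:09 with connection refused to 127.0.0.1:8443, i.e. the API server went away, which matches the node restart visible elsewhere in these logs (journal entries only begin again at 17:41:32); the newer instance ("99d2afdb3987") then started and synced its caches cleanly at 17:42. A quick post-restart sanity check, assuming a minikube-managed control plane:

  $ minikube status
  $ kubectl get componentstatuses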
==> kube-scheduler ["047c3020ad3d"] <== | |
I0114 17:14:14.220419 1 serving.go:319] Generated self-signed cert in-memory | |
W0114 17:14:14.914469 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work. | |
W0114 17:14:14.914500 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work. | |
W0114 17:14:14.914510 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work. | |
I0114 17:14:14.917245 1 server.go:142] Version: v1.14.7 | |
I0114 17:14:14.917297 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory | |
W0114 17:14:14.918279 1 authorization.go:47] Authorization is disabled | |
W0114 17:14:14.918359 1 authentication.go:55] Authentication is disabled | |
I0114 17:14:14.918387 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251 | |
I0114 17:14:14.919166 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259 | |
E0114 17:14:17.025067 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
E0114 17:14:17.025215 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope | |
E0114 17:14:17.029981 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope | |
E0114 17:14:17.034751 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
E0114 17:14:17.045937 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
E0114 17:14:17.048303 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
E0114 17:14:17.048449 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
E0114 17:14:17.048512 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
E0114 17:14:17.050078 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
E0114 17:14:17.050281 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope | |
E0114 17:14:18.027864 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope | |
E0114 17:14:18.030913 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope | |
E0114 17:14:18.033409 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
E0114 17:14:18.035493 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
E0114 17:14:18.046761 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
E0114 17:14:18.049433 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
E0114 17:14:18.051096 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
E0114 17:14:18.052216 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
E0114 17:14:18.053451 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
E0114 17:14:18.055747 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope | |
I0114 17:14:19.922625 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller | |
I0114 17:14:20.023547 1 controller_utils.go:1034] Caches are synced for scheduler controller | |
I0114 17:14:20.023884 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler... | |
I0114 17:14:20.029881 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler | |
E0114 17:15:23.807294 1 factory.go:660] Error scheduling kafka/my-cluster-zookeeper-0: pod has unbound immediate PersistentVolumeClaims; retrying | |
E0114 17:15:23.820889 1 scheduler.go:481] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims | |
E0114 17:15:23.821568 1 factory.go:660] Error scheduling kafka/my-cluster-zookeeper-0: pod has unbound immediate PersistentVolumeClaims; retrying | |
E0114 17:15:23.821602 1 scheduler.go:481] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims | |
E0114 17:15:25.930628 1 factory.go:660] Error scheduling kafka/my-cluster-zookeeper-0: pod has unbound immediate PersistentVolumeClaims; retrying | |
E0114 17:15:25.931610 1 scheduler.go:481] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims | |
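The burst of "forbidden" list errors at 17:14:17-18 is the usual scheduler startup race before its RBAC bindings are served; it clears once the caches sync and the leader lease is acquired at 17:14:20. The later "pod has unbound immediate PersistentVolumeClaims" errors just mean kafka/my-cluster-zookeeper-0 could not be placed until its claim bound. If that condition were to persist, the claim and its storage class are the first things to check, for example:

  $ kubectl get pvc -n kafka
  $ kubectl get storageclass
  $ kubectl describe pvc data-my-cluster-zookeeper-0 -n kafka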
==> kube-scheduler ["7d3fd053187f"] <== | |
I0114 17:42:02.258894 1 serving.go:319] Generated self-signed cert in-memory | |
W0114 17:42:03.089820 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work. | |
W0114 17:42:03.089847 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work. | |
W0114 17:42:03.089867 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work. | |
I0114 17:42:03.100630 1 server.go:142] Version: v1.14.7 | |
I0114 17:42:03.101307 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory | |
W0114 17:42:03.103789 1 authorization.go:47] Authorization is disabled | |
W0114 17:42:03.103912 1 authentication.go:55] Authentication is disabled | |
I0114 17:42:03.103984 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251 | |
I0114 17:42:03.104462 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259 | |
E0114 17:42:05.203701 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope | |
E0114 17:42:05.216453 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope | |
E0114 17:42:05.216529 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope | |
E0114 17:42:05.216600 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope | |
E0114 17:42:05.216639 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope | |
E0114 17:42:05.222440 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope | |
E0114 17:42:05.222586 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope | |
E0114 17:42:05.222637 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope | |
E0114 17:42:05.222713 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope | |
E0114 17:42:05.222765 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope | |
I0114 17:42:07.109835 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller | |
I0114 17:42:07.210166 1 controller_utils.go:1034] Caches are synced for scheduler controller | |
I0114 17:42:07.210338 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler... | |
I0114 17:42:25.702309 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler | |
==> kubelet <== | |
-- Logs begin at Tue 2020-01-14 17:41:32 UTC, end at Tue 2020-01-14 17:44:33 UTC. -- | |
Jan 14 17:42:16 knative kubelet[2395]: W0114 17:42:16.660261 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rab3d01034de048c2aa6429f803dedd56.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-rab3d01034de048c2aa6429f803dedd56.scope: no such file or directory | |
Jan 14 17:42:16 knative kubelet[2395]: W0114 17:42:16.660049 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r63610792a485442297c0f42dc221ab14.scope": none of the resources are being tracked. | |
Jan 14 17:42:16 knative kubelet[2395]: W0114 17:42:16.660569 2395 container.go:409] Failed to create summary reader for "/system.slice/run-rb78a9202ad7d418ea874cf5397bfe9ac.scope": none of the resources are being tracked. | |
Jan 14 17:42:16 knative kubelet[2395]: W0114 17:42:16.660724 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r5e97e1cb89884f8697be14c986e29cb6.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310042 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-rd7da2721a44940f397102d8673277052.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-rd7da2721a44940f397102d8673277052.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310104 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rd7da2721a44940f397102d8673277052.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-rd7da2721a44940f397102d8673277052.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310123 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rd7da2721a44940f397102d8673277052.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-rd7da2721a44940f397102d8673277052.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310137 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310147 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310158 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310176 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r0c4cfb5fea724662aca8daae532efea9.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310190 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310201 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310213 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.310224 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r58b3456c240e4b2abc10ca7021112b01.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.409461 2395 pod_container_deletor.go:75] Container "cf01733f5b4b656b5dabe383bae5372051d1c25be2db9c7ed21d326521ac03ec" not found in pod's containers | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.572474 2395 container.go:409] Failed to create summary reader for "/system.slice/run-raf9728ce34f44b46a53774a693de1325.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.572703 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r4194df061b934e16b8bc622ffb50ed65.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.572842 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r049c248b85c44130a992da3bff4e238e.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.572979 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r7e36fa5f206749b48976370736a56e2c.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573128 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r20a8e8630e41451a9e5fc3ae13bc74a4.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573332 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r51f06b8b84e54d7790d4c7e120927835.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573472 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r0f35deb4c0ef4f7089d22e1e79861b5f.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573610 2395 container.go:409] Failed to create summary reader for "/system.slice/run-r85e83555fad34d4fba608a7086ec1982.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573792 2395 container.go:409] Failed to create summary reader for "/system.slice/run-rd7da2721a44940f397102d8673277052.scope": none of the resources are being tracked. | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573942 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573972 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.573995 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope: no such file or directory | |
Jan 14 17:42:18 knative kubelet[2395]: W0114 17:42:18.574084 2395 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r9cb08534fd384d12841398d9d47d429f.scope: no such file or directory | |
Jan 14 17:42:19 knative kubelet[2395]: W0114 17:42:19.064212 2395 pod_container_deletor.go:75] Container "a938b2ee9af7d84821e8bcb58f04486423b5890a98b6b50e4a824eb1d1ce5e0e" not found in pod's containers | |
Jan 14 17:42:19 knative kubelet[2395]: W0114 17:42:19.996476 2395 pod_container_deletor.go:75] Container "6ab340dfd566643c6cfbb0a33f756d2232a32e889094b6aa65918e8b5042b7b1" not found in pod's containers | |
Jan 14 17:42:20 knative kubelet[2395]: E0114 17:42:20.013806 2395 remote_runtime.go:321] ContainerStatus "b4f8699af7f90cca380d657a753497ab3e981d33c657edb39d2049efe611800f" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: b4f8699af7f90cca380d657a753497ab3e981d33c657edb39d2049efe611800f | |
Jan 14 17:42:20 knative kubelet[2395]: E0114 17:42:20.013853 2395 kuberuntime_manager.go:921] getPodContainerStatuses for pod "eventing-webhook-5867c98d9b-b8kxc_knative-eventing(c4e5ad41-36f1-11ea-be34-5e653e2cb35e)" failed: rpc error: code = Unknown desc = Error: No such container: b4f8699af7f90cca380d657a753497ab3e981d33c657edb39d2049efe611800f | |
Jan 14 17:42:22 knative kubelet[2395]: W0114 17:42:22.117272 2395 pod_container_deletor.go:75] Container "beea8dad274cf4fc6057b8fff1d4961ccc2e337b5e2cd5418b50b4c3d216032c" not found in pod's containers | |
Jan 14 17:42:22 knative kubelet[2395]: W0114 17:42:22.189472 2395 pod_container_deletor.go:75] Container "a32e0607310b87b216474f8fdbfba543b228d98d8c922d27816b696260fbb214" not found in pod's containers | |
Jan 14 17:42:22 knative kubelet[2395]: W0114 17:42:22.220321 2395 pod_container_deletor.go:75] Container "a9ab90c159d3d2445c3c906499272d14929d53bccfbcabbee0e46ba44d35b6bb" not found in pod's containers | |
Jan 14 17:42:22 knative kubelet[2395]: W0114 17:42:22.272050 2395 pod_container_deletor.go:75] Container "ca786bc68103544704aadabe0d6c255097527ff72b1a744308474c87f1bc37eb" not found in pod's containers | |
Jan 14 17:42:22 knative kubelet[2395]: W0114 17:42:22.710757 2395 pod_container_deletor.go:75] Container "5d2b5c7eb1d0f215ecd4af382f2f5517514fabcaad43ac77d654d07fe49c391e" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.086958 2395 pod_container_deletor.go:75] Container "62edbd6d96ec868540136a0c2e76853fc6fea9525a158e4f98ec204900f2a3a1" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.140808 2395 pod_container_deletor.go:75] Container "2cf59b1abe3ee17022a953696463daabb5a0392447893365e55cc1320d8ab80d" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.217810 2395 pod_container_deletor.go:75] Container "5b9ff3fd5284af2b3e772ca6c72fab61a6c1e6cc11d02e7c19d4dc274adb4cd0" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.369873 2395 pod_container_deletor.go:75] Container "8986a3c07c71576a0620579dfaadca2fd843d88136aae72513913975c70c6293" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.396672 2395 pod_container_deletor.go:75] Container "7ceed88d144a9fff95dd5bb27fe5f11047663547f8b09e685159c4796fd21935" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.471914 2395 pod_container_deletor.go:75] Container "f566c83cd63e40ff67bc309d0f73f51bc7d415d601bf15b88af42dbe0177a83a" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.494465 2395 pod_container_deletor.go:75] Container "3676036283fd04aabdfc563eea37f0bc8a916b90b97f6f0efbf671af16a96b45" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.520051 2395 pod_container_deletor.go:75] Container "ed7e40f79dd793060638ce881a552bb1b02ddca716db43fbf46c9ebd6c41bc2b" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.542912 2395 pod_container_deletor.go:75] Container "985bd761368db3afba3b97ae812093f99603e550bb590517e9ea32518d654d7e" not found in pod's containers | |
Jan 14 17:42:23 knative kubelet[2395]: W0114 17:42:23.565243 2395 pod_container_deletor.go:75] Container "0b25249c9f9f2fdbe370da6dc257533510f9f9e7784bf3f9e1466e2f17dd001a" not found in pod's containers | |
Jan 14 17:42:39 knative kubelet[2395]: E0114 17:42:39.727005 2395 pod_workers.go:190] Error syncing pod 962bce43-36f1-11ea-be34-5e653e2cb35e ("my-cluster-entity-operator-7d677bdf7b-mpg5f_kafka(962bce43-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "user-operator" with CrashLoopBackOff: "Back-off 10s restarting failed container=user-operator pod=my-cluster-entity-operator-7d677bdf7b-mpg5f_kafka(962bce43-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:41 knative kubelet[2395]: E0114 17:42:41.864902 2395 pod_workers.go:190] Error syncing pod 87eefc57-36f1-11ea-be34-5e653e2cb35e ("my-cluster-kafka-0_kafka(87eefc57-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "kafka" with CrashLoopBackOff: "Back-off 10s restarting failed container=kafka pod=my-cluster-kafka-0_kafka(87eefc57-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:45 knative kubelet[2395]: E0114 17:42:45.210268 2395 pod_workers.go:190] Error syncing pod 87eefc57-36f1-11ea-be34-5e653e2cb35e ("my-cluster-kafka-0_kafka(87eefc57-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "kafka" with CrashLoopBackOff: "Back-off 10s restarting failed container=kafka pod=my-cluster-kafka-0_kafka(87eefc57-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:47 knative kubelet[2395]: E0114 17:42:47.613100 2395 pod_workers.go:190] Error syncing pod 962bce43-36f1-11ea-be34-5e653e2cb35e ("my-cluster-entity-operator-7d677bdf7b-mpg5f_kafka(962bce43-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "user-operator" with CrashLoopBackOff: "Back-off 10s restarting failed container=user-operator pod=my-cluster-entity-operator-7d677bdf7b-mpg5f_kafka(962bce43-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:49 knative kubelet[2395]: E0114 17:42:49.143187 2395 pod_workers.go:190] Error syncing pod 521e93a9-36f1-11ea-be34-5e653e2cb35e ("storage-provisioner_kube-system(521e93a9-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(521e93a9-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:49 knative kubelet[2395]: E0114 17:42:49.157407 2395 pod_workers.go:190] Error syncing pod f1aa3f16-36f1-11ea-be34-5e653e2cb35e ("kafka-ch-controller-7db7474c98-sqkm2_knative-eventing(f1aa3f16-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "controller" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller pod=kafka-ch-controller-7db7474c98-sqkm2_knative-eventing(f1aa3f16-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:51 knative kubelet[2395]: E0114 17:42:51.246937 2395 pod_workers.go:190] Error syncing pod c4d0533e-36f1-11ea-be34-5e653e2cb35e ("eventing-controller-666b79d867-w25wb_knative-eventing(c4d0533e-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "eventing-controller" with CrashLoopBackOff: "Back-off 10s restarting failed container=eventing-controller pod=eventing-controller-666b79d867-w25wb_knative-eventing(c4d0533e-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:52 knative kubelet[2395]: E0114 17:42:52.286593 2395 pod_workers.go:190] Error syncing pod c5169396-36f1-11ea-be34-5e653e2cb35e ("imc-dispatcher-7b55b86649-tcpbr_knative-eventing(c5169396-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "dispatcher" with CrashLoopBackOff: "Back-off 10s restarting failed container=dispatcher pod=imc-dispatcher-7b55b86649-tcpbr_knative-eventing(c5169396-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:52 knative kubelet[2395]: E0114 17:42:52.312394 2395 pod_workers.go:190] Error syncing pod f1b82fd2-36f1-11ea-be34-5e653e2cb35e ("kafka-ch-dispatcher-64fc4db47-g6sh5_knative-eventing(f1b82fd2-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "dispatcher" with CrashLoopBackOff: "Back-off 10s restarting failed container=dispatcher pod=kafka-ch-dispatcher-64fc4db47-g6sh5_knative-eventing(f1b82fd2-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:42:52 knative kubelet[2395]: E0114 17:42:52.335656 2395 pod_workers.go:190] Error syncing pod a4b8a3d1-36f1-11ea-be34-5e653e2cb35e ("autoscaler-hpa-cfc55cc88-pzgqd_knative-serving(a4b8a3d1-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "autoscaler-hpa" with CrashLoopBackOff: "Back-off 10s restarting failed container=autoscaler-hpa pod=autoscaler-hpa-cfc55cc88-pzgqd_knative-serving(a4b8a3d1-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:43:13 knative kubelet[2395]: E0114 17:43:13.597062 2395 pod_workers.go:190] Error syncing pod f1b82fd2-36f1-11ea-be34-5e653e2cb35e ("kafka-ch-dispatcher-64fc4db47-g6sh5_knative-eventing(f1b82fd2-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "dispatcher" with CrashLoopBackOff: "Back-off 20s restarting failed container=dispatcher pod=kafka-ch-dispatcher-64fc4db47-g6sh5_knative-eventing(f1b82fd2-36f1-11ea-be34-5e653e2cb35e)" | |
Jan 14 17:43:24 knative kubelet[2395]: E0114 17:43:24.751433 2395 pod_workers.go:190] Error syncing pod f1b82fd2-36f1-11ea-be34-5e653e2cb35e ("kafka-ch-dispatcher-64fc4db47-g6sh5_knative-eventing(f1b82fd2-36f1-11ea-be34-5e653e2cb35e)"), skipping: failed to "StartContainer" for "dispatcher" with CrashLoopBackOff: "Back-off 20s restarting failed container=dispatcher pod=kafka-ch-dispatcher-64fc4db47-g6sh5_knative-eventing(f1b82fd2-36f1-11ea-be34-5e653e2cb35e)" | |
==> storage-provisioner ["2357d1a66d0d"] <== | |
F0114 17:42:48.340120 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout | |
==> storage-provisioner ["f0699480a0da"] <== |