
@SanSan-
Last active June 20, 2021 19:49
crc start --log-level debug output + console-ring log: CRC 1.28.0 / OpenShift 4.7.13 fails waiting for kube-apiserver (macOS, HyperKit)
$ crc start -p ~/Downloads/pull-secret.txt --log-level debug
DEBU CodeReady Containers version: 1.28.0+08de64bd
DEBU OpenShift version: 4.7.13 (not embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 17179869184 bytes
DEBU No new version available. The latest version is 1.28.0
INFO Checking if running as non-root
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
DEBU Checking if an older admin-helper executable is installed
DEBU No older admin-helper executable found
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
DEBU Total memory of system is 17179869184 bytes
INFO Checking if running emulated on a M1 CPU
INFO Checking if HyperKit is installed
INFO Checking if crc-driver-hyperkit is installed
DEBU Checking file: /Users/b/.crc/machines/crc/.crc-exist
DEBU Using secret from configuration
INFO Loading bundle: crc_hyperkit_4.7.13...
INFO Creating CodeReady Containers VM for OpenShift 4.7.13...
DEBU Found binary path at /Applications/CodeReady Containers.app/Contents/Resources/crc-driver-hyperkit
DEBU Launching plugin server for driver hyperkit
DEBU Plugin server listening at address 127.0.0.1:50018
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetMachineName
DEBU (crc) Calling .DriverName
DEBU Running pre-create checks...
DEBU (crc) Calling .PreCreateCheck
DEBU (crc) Calling .GetConfigRaw
DEBU Creating machine...
DEBU (crc) Calling .Create
INFO Generating new SSH Key pair...
INFO Generating new password for the kubeadmin user
INFO Copying kubeconfig file to instance dir...
DEBU Copying '/Users/b/.crc/cache/crc_hyperkit_4.7.13/kubeconfig' to '/Users/b/.crc/machines/crc/kubeconfig'
DEBU Created /Users/b/.crc/machines/crc/.crc-exist
DEBU Machine successfully created
DEBU Found binary path at /Applications/CodeReady Containers.app/Contents/Resources/crc-driver-hyperkit
DEBU Launching plugin server for driver hyperkit
DEBU Plugin server listening at address 127.0.0.1:51367
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetBundleName
DEBU (crc) Calling .GetState
INFO Starting CodeReady Containers VM for OpenShift 4.7.13...
DEBU Updating CRC VM configuration
DEBU (crc) Calling .GetConfigRaw
DEBU (crc) Calling .Start
DEBU (crc) DBG | time="2021-06-20T22:12:34+03:00" level=debug msg="Using hyperkit binary from /Applications/CodeReady Containers.app/Contents/Resources/hyperkit"
DEBU (crc) DBG | time="2021-06-20T22:12:34+03:00" level=debug msg="Starting with cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-3d116c3f18e7d4dc21dcc5953f042877c035e09e54b687268aefc4db876faf32/vmlinuz-4.18.0-240.22.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/3d116c3f18e7d4dc21dcc5953f042877c035e09e54b687268aefc4db876faf32/0 root=UUID=1157d507-2596-4188-a8cc-adb244bf7be1 rw rootflags=prjquota"
DEBU (crc) DBG | time="2021-06-20T22:12:34+03:00" level=debug msg="Trying to execute /Applications/CodeReady Containers.app/Contents/Resources/hyperkit -A -u -F /Users/b/.crc/machines/crc/hyperkit.pid -c 4 -m 9216M -s 0:0,hostbridge -s 31,lpc -U c3d68012-0208-11ea-9fd7-f2189899ab08 -s 1:0,virtio-blk,file:///Users/b/.crc/machines/crc/crc.qcow2,format=qcow -s 2,virtio-sock,guest_cid=3,path=/Users/b/.crc/machines/crc -s 3,virtio-rnd -l com1,autopty=/Users/b/.crc/machines/crc/tty,log=/Users/b/.crc/machines/crc/console-ring -f kexec,/Users/b/.crc/cache/crc_hyperkit_4.7.13/vmlinuz-4.18.0-240.22.1.el8_3.x86_64,/Users/b/.crc/cache/crc_hyperkit_4.7.13/initramfs-4.18.0-240.22.1.el8_3.x86_64.img,earlyprintk=serial BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-3d116c3f18e7d4dc21dcc5953f042877c035e09e54b687268aefc4db876faf32/vmlinuz-4.18.0-240.22.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/3d116c3f18e7d4dc21dcc5953f042877c035e09e54b687268aefc4db876faf32/0 root=UUID=1157d507-2596-4188-a8cc-adb244bf7be1 rw rootflags=prjquota"
DEBU (crc) DBG | time="2021-06-20T22:12:34+03:00" level=debug msg="error: Temporary Error: hyperkit not running yet - sleeping 1s"
DEBU (crc) DBG | time="2021-06-20T22:12:35+03:00" level=debug msg="retry loop 1"
DEBU (crc) Calling .GetConfigRaw
DEBU Waiting for machine to be running, this may take a few minutes...
DEBU retry loop: attempt 0
DEBU (crc) Calling .GetState
DEBU Machine is up and running!
DEBU (crc) Calling .GetState
INFO CodeReady Containers instance is running with IP 127.0.0.1
DEBU Waiting until ssh is available
DEBU retry loop: attempt 0
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51369->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51369->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51386->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51386->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51403->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51403->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 3
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51416->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51416->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 4
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51428->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51428->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 5
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51440->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51440->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 6
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51452->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51452->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 7
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51469->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51469->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 8
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51481->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51481->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 9
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51495->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51495->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 10
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51501->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51501->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 11
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51522->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51522->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 12
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51536->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51536->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 13
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51543->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51543->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 14
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51557->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51557->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 15
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51572->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51572->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 16
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51592->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51592->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 17
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51604->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51604->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 18
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51616->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51616->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 19
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51626->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51626->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 20
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51638->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51638->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 21
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51657->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51657->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 22
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51667->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51667->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 23
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:51680->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err : ssh: handshake failed: read tcp 127.0.0.1:51680->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 24
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/Users/b/.crc/cache/crc_hyperkit_4.7.13/id_ecdsa_crc /Users/b/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: <nil>, output:
INFO CodeReady Containers VM is running
DEBU Running SSH command: cat /home/core/.ssh/authorized_keys
DEBU SSH command results: err: <nil>, output: ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAF2o2ZsO2mYe6oPpn3E/QyiiSvzvrZxbvsETRe4SDrFKhrk3VmoSzl3jT8gJDMpHRNFivOOq3tQVn3DQ4cxmEyfIwHL7aE5ee1N8dk2UCLzHwFo0uxVjsacVEBFNn0VehzVIdYlVCUN92lAEx4e30YJGjvnlsA7dWh5G59andQyzGO7zg== core
INFO Updating authorized keys...
DEBU Running SSH command: echo 'ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACVjipbi8uShYzgFEDaDbZ8FEMs7jKmfZYJ4u78CACYPLtBJgHoIrMaipv61SJf2CGP5UvCiFgZPAfCFBaxxxpF2QHgAnQDH5aCPJRn0fvLHn10ntrkuyAKXgoo4+QqGwkRp4kA77XmI0BmlPTk1rebrAFE8jB6f/AB+Oq00pbITU1qMQ==
' > /home/core/.ssh/authorized_keys; chmod 644 /home/core/.ssh/authorized_keys
DEBU SSH command results: err: <nil>, output:
DEBU Running SSH command: realpath /dev/disk/by-label/root
DEBU SSH command results: err: <nil>, output: /dev/vda4
DEBU Using root access: Growing /dev/vda4 partition
DEBU Running SSH command: sudo /usr/bin/growpart /dev/vda 4
DEBU SSH command results: err: Process exited with status 1, output: NOCHANGE: partition 4 is size 63961055. it cannot be grown
DEBU No free space after /dev/vda4, nothing to do
DEBU Using root access: make root Podman socket accessible
DEBU Running SSH command: sudo chmod 777 /run/podman/ /run/podman/podman.sock
DEBU SSH command results: err: <nil>, output:
DEBU Running '/Applications/CodeReady Containers.app/Contents/Resources/crc-admin-helper-darwin rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing'
DEBU Running '/Applications/CodeReady Containers.app/Contents/Resources/crc-admin-helper-darwin add 127.0.0.1 api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing'
DEBU Creating /etc/resolv.conf with permissions 0644 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU retry loop: attempt 0
DEBU Running SSH command: host -R 3 foo.apps-crc.testing
DEBU SSH command results: err: <nil>, output: foo.apps-crc.testing has address 192.168.127.2
INFO Check internal and public DNS query...
DEBU Running SSH command: host -R 3 quay.io
DEBU SSH command results: err: <nil>, output: quay.io has address 54.156.10.58
quay.io has address 3.233.133.41
quay.io has address 54.197.99.84
quay.io has address 34.197.63.98
quay.io has address 34.224.196.162
quay.io has address 50.16.140.223
quay.io has address 52.4.104.248
quay.io has address 3.213.173.170
INFO Check DNS query from host...
DEBU api.crc.testing resolved to [::1 127.0.0.1]
DEBU Running SSH command: test -e /var/lib/kubelet/config.json
DEBU SSH command results: err: Process exited with status 1, output:
INFO Adding user's pull secret to instance disk...
DEBU Creating /var/lib/kubelet/config.json with permissions 0600 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
INFO Verifying validity of the kubelet certificates...
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-07-11T06:48:43+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-07-11T06:49:33+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-07-11T06:52:45+00:00
INFO Starting OpenShift kubelet service
DEBU Using root access: Executing systemctl daemon-reload command
DEBU Running SSH command: sudo systemctl daemon-reload
DEBU SSH command results: err: <nil>, output:
DEBU Using root access: Executing systemctl start kubelet
DEBU Running SSH command: sudo systemctl start kubelet
DEBU SSH command results: err: <nil>, output:
INFO Waiting for kube-apiserver availability... [takes around 2min]
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 3
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 4
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 5
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 6
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 7
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 8
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 9
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 10
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 11
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 12
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 13
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 14
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 15
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 16
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 17
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 18
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 19
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 20
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 21
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 22
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 23
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 24
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 25
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 26
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 27
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 28
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 29
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 30
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 31
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 32
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 33
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 34
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 35
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 36
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 37
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 38
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 39
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 40
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 41
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 42
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 43
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 44
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 45
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 46
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 47
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 48
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 49
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 50
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 51
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 52
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 53
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 54
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 55
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 56
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 57
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 58
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n - sleeping 1s
DEBU RetryAfter timeout after 59 tries
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU (crc) DBG | time="2021-06-20T22:23:59+03:00" level=debug msg="Closing plugin on server side"
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU Running 'sw_vers -productVersion'
Error waiting for apiserver: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 124\n (x34)
Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1\n (x25)
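
The start aborts here: "oc get nodes" inside the VM never succeeds within the retry budget (34 timeouts, then 25 "connection refused" errors against api.crc.testing:6443), so crc gives up waiting for the kube-apiserver. The serial console that HyperKit writes to ~/.crc/machines/crc/console-ring is dumped below. Beyond that, a possible next step (not part of the original run; the key path and port 2222 are taken from the debug output above, so adjust them if your machine differs) is to check the instance state and shell into the VM to look at the kubelet directly:

$ crc status
$ ssh -i ~/.crc/machines/crc/id_ecdsa -p 2222 core@127.0.0.1
# inside the VM: is the kubelet actually running, and what is it logging?
$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet --no-pager | tail -n 50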
$ cat ~/.crc/machines/crc/console-ring
Stopped target Initrd File Systems.
[ 94.767142] systemd[1]: Listening on Device-mapper event daemon FIFOs.
[ OK ] Listening on Device-mapper event daemon FIFOs.
[ 94.769516] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 94.772848] systemd[1]: Mounting Temporary Directory (/tmp)...
Mounting Temporary Directory (/tmp)...
[ 94.775823] systemd[1]: Created slice system-getty.slice.
[ OK ] Created slice system-getty.slice.
[ 94.898803] systemd[1]: Listening on Process Core Dump Socket.
[ OK ] Listening on Process Core Dump Socket.
[ 94.901226] systemd[1]: Listening on initctl Compatibility Named Pipe.
[ OK ] Listening on initctl Compatibility Named Pipe.
[ 94.905462] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st…ce nodes for the current kernel...
[ 94.912508] systemd[1]: Listening on LVM2 poll daemon socket.
[ OK ] Listening on LVM2 poll daemon socket.
[ 94.916880] systemd[1]: Mounting Huge Pages File System...
Mounting Huge Pages File System...
[ 94.920840] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Starting Monitoring of LVM2 mirrors…ng dmeventd or progress polling...
[ 94.924213] systemd[1]: Stopped target Initrd Root File System.
[ OK ] Stopped target Initrd Root File System.
[ 94.926673] systemd[1]: ostree-prepare-root.service: Succeeded.
[ 94.928578] systemd[1]: Stopped OSTree Prepare OS/.
[ OK ] Stopped OSTree Prepare OS/.
[ 94.930771] systemd[1]: ostree-prepare-root.service: Consumed 0 CPU time
[ 94.932734] systemd[1]: systemd-fsck-root.service: Succeeded.
[ 94.935343] systemd[1]: Stopped File System Check on Root Device.
[ OK ] Stopped File System Check on Root Device.
[ 94.938204] systemd[1]: systemd-fsck-root.service: Consumed 0 CPU time
[ 94.940422] systemd[1]: Listening on multipathd control socket.
[ OK ] Listening on multipathd control socket.
[ 94.947050] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 94.949121] systemd[1]: Reached target Remote File Systems.
[ OK ] Reached target Remote File Systems.
[ 94.953029] systemd[1]: Starting udev Coldplug all Devices...
Starting udev Coldplug all Devices...
[ 94.957129] systemd[1]: Mounting POSIX Message Queue File System...
Mounting POSIX Message Queue File System...
[ 94.960593] systemd[1]: Started Forward Password Requests to Clevis Directory Watch.
[ OK ] Started Forward Password Requests to Clevis Directory Watch.
[ 94.963757] systemd[1]: Reached target Local Encrypted Volumes (Pre).
[ OK ] Reached target Local Encrypted Volumes (Pre).
[ 94.966321] systemd[1]: Reached target Local Encrypted Volumes.
[ OK ] Reached target Local Encrypted Volumes.
[ 94.968540] systemd[1]: Reached target Remote Encrypted Volumes.
[ OK ] Reached target Remote Encrypted Volumes.
[ 94.976292] systemd[1]: sysroot-usr.mount: Succeeded.
[ 94.977751] systemd[1]: sysroot-usr.mount: Consumed 0 CPU time
[ 94.978750] systemd[1]: sysroot-etc.mount: Succeeded.
[ 94.980134] systemd[1]: sysroot-etc.mount: Consumed 0 CPU time
[ 94.981121] systemd[1]: sysroot-sysroot.mount: Succeeded.
[ 94.982614] systemd[1]: sysroot-sysroot.mount: Consumed 0 CPU time
[ 95.024337] systemd[1]: Started udev Coldplug all Devices.
[ OK ] Started udev Coldplug all Devices.
[ 95.027820] systemd[1]: Starting udev Wait for Complete Device Initialization...
Starting udev Wait for Complete Device Initialization...
[ 95.367238] systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
[ OK ] Started Monitoring of LVM2 mirrors,…sing dmeventd or progress polling.
[ 95.429047] systemd[1]: Mounted Kernel Debug File System.
[ OK ] Mounted Kernel Debug File System.
[ 95.431389] systemd[1]: Mounted Temporary Directory (/tmp).
[ OK ] Mounted Temporary Directory (/tmp).
[ 95.433670] systemd[1]: Mounted Huge Pages File System.
[ OK ] Mounted Huge Pages File System.
[ 95.435991] systemd[1]: Mounted POSIX Message Queue File System.
[ OK ] Mounted POSIX Message Queue File System.
[ 95.602648] systemd[1]: Started Load Kernel Modules.
[ OK ] Started Load Kernel Modules.
[ 95.606571] systemd[1]: Mounting FUSE Control File System...
Mounting FUSE Control File System...
[ 95.609965] systemd[1]: Starting Apply Kernel Variables...
Starting Apply Kernel Variables...
[ 95.616277] systemd[1]: Mounted FUSE Control File System.
[ OK ] Mounted FUSE Control File System.
[ 95.862017] systemd[1]: Started Create list of required static device nodes for the current kernel.
[ OK ] Started Create list of required sta…vice nodes for the current kernel.
[ 95.872456] systemd[1]: Starting Create Static Device Nodes in /dev...
Starting Create Static Device Nodes in /dev...
[ 96.076808] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
[ OK ] Started Apply Kernel Variables.
[ OK ] Started Create Static Device Nodes in /dev.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting Initialize the iWARP/InfiniBand/RDMA stack in the kernel...
[ 98.840604] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 163840 ms ovfl timer
[ 99.692180] input: PC Speaker as /devices/platform/pcspkr/input/input1
[ 100.158928] NET: Registered protocol family 40
[ OK ] Started udev Wait for Complete Device Initialization.
[ OK ] Reached target Local File Systems (Pre).
Starting File System Check on /dev/disk/by-label/boot...
Mounting /var...
[ OK ] Mounted /var.
Starting OSTree Remount OS/ Bind Mounts...
[ OK ] Started File System Check on /dev/disk/by-label/boot.
Mounting CoreOS Dynamic Mount for /boot...
[ 114.674280] Rounding down aligned max_sectors from 4294967295 to 4294967288
[ 114.675802] db_root: cannot open: /etc/target
[ 115.203990] iscsi: registered transport (iser)
[ 118.582409] EXT4-fs (vda3): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted CoreOS Dynamic Mount for /boot.
[ 119.118651] RPC: Registered named UNIX socket transport module.
[ 119.119765] RPC: Registered udp transport module.
[ 119.120688] RPC: Registered tcp transport module.
[ 119.121572] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 119.140347] RPC: Registered rdma transport module.
[ 119.141215] RPC: Registered rdma backchannel transport module.
[ OK ] Started Initialize the iWARP/InfiniBand/RDMA stack in the kernel.
[ OK ] Started OSTree Remount OS/ Bind Mounts.
Starting Flush Journal to Persistent Storage...
Starting Load/Save Random Seed...
[ OK ] Reached target Local File Systems.
Starting Restore /run/initramfs on shutdown...
Starting Run update-ca-trust...
[ 124.557838] systemd-journald[818]: Received request to flush runtime journal from PID 1
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Restore /run/initramfs on shutdown.
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ OK ] Started Run update-ca-trust.
[ OK ] Started Create Volatile Files and Directories.
Mounting /etc/NetworkManager/system-connections-merged...
Starting Security Auditing Service...
[ OK ] Mounted /etc/NetworkManager/system-connections-merged.
[ OK ] Started Security Auditing Service.
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Started daily update of the root trust anchor for DNSSEC.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Started Monitor console-login-helpe…ue snippets directory for changes.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Listening on bootupd.socket.
[ OK ] Started OSTree Monitor Staged Deployment.
[ OK ] Reached target Paths.
[ OK ] Listening on Podman API Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting NTP client/server...
[ OK ] Started irqbalance daemon.
Starting Open vSwitch Database Unit...
[ OK ] Started D-Bus System Message Bus.
[ OK ] Reached target sshd-keygen.target.
Starting Generate SSH keys snippet …a console-login-helper-messages...
Starting System Security Services Daemon...
Starting update of the root trust a…or DNSSEC validation in unbound...
[ OK ] Started Generate SSH keys snippet f…via console-login-helper-messages.
Starting Generate console-login-helper-messages issue snippet...
[ OK ] Started Generate console-login-helper-messages issue snippet.
[ OK ] Started NTP client/server.
[ OK ] Started update of the root trust an… for DNSSEC validation in unbound.
[ OK ] Started System Security Services Daemon.
[ OK ] Reached target User and Group Name Lookups.
Starting Login Service...
[ OK ] Started Login Service.
[ OK ] Started Open vSwitch Database Unit.
Starting Open vSwitch Delete Transient Ports...
[ OK ] Started Open vSwitch Delete Transient Ports.
Starting Open vSwitch Forwarding Unit...
[ 197.755001] openvswitch: Open vSwitch switching datapath
[ 200.103060] device ovs-system entered promiscuous mode
[ 200.107456] Timeout policy base is empty
[ 200.108204] Failed to associated timeout policy `ovs_test_tp'
[ 200.243326] device tun0 entered promiscuous mode
[ 200.543025] device vxlan_sys_4789 entered promiscuous mode
[ 200.617551] device br0 entered promiscuous mode
[ OK ] Started Open vSwitch Forwarding Unit.
Starting Open vSwitch...
[ OK ] Started Open vSwitch.
Starting Network Manager...
[ OK ] Started Network Manager.
Starting Network Manager Wait Online...
[ OK ] Reached target Network.
Starting OpenSSH server daemon...
Starting Permit User Sessions...
Starting Create dummy network...
[ OK ] Started Permit User Sessions.
[ OK ] Started Serial Getty on ttyS0.
[ OK ] Started Getty on tty1.
[ OK ] Reached target Login Prompts.
Starting Hostname Service...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started Hostname Service.
Starting Network Manager Script Dispatcher Service...
[ OK ] Started Network Manager Script Dispatcher Service.
[ OK ] Started Network Manager Wait Online.
Starting Configures OVS with proper host networking configuration...
[ 207.748945] configure-ovs.sh[1543]: + touch /var/run/ovs-config-executed
[ 208.058732] IPv6: ADDRCONF(NETDEV_UP): eth10: link is not ready
[ OK ] Started Create dummy network.
[ 208.098410] configure-ovs.sh[1543]: + grep -q openvswitch
[ 208.100057] configure-ovs.sh[1543]: + rpm -qa
Red Hat Enterprise Linux CoreOS 47.83.202105220305-0 (Ootpa) 4.7
SSH host key: SHA256:Y3eJlcC9byTGAA**C2Jg (ED25519)
SSH host key: SHA256:R1pJ5d3FSY03K1**qHVcyQ (ECDSA)
SSH host key: SHA256:jWxFDiqjxTWh+*8qiD3JJ8 (RSA)
crc-pkjt4-master-0 login:
[  225.870614] configure-ovs.sh[1543]: + '[' OpenShiftSDN == OVNKubernetes ']'
[ 225.872254] configure-ovs.sh[1543]: + '[' OpenShiftSDN == OpenShiftSDN ']'
[ 225.873767] configure-ovs.sh[1543]: + iface=
[ 225.875111] configure-ovs.sh[1543]: + nmcli connection show ovs-port-phys0
[ 225.902313] configure-ovs.sh[1543]: + nmcli connection show ovs-if-phys0
[ 225.930031] configure-ovs.sh[1543]: + nmcli connection show ovs-port-br-ex
[ 225.959963] configure-ovs.sh[1543]: + nmcli connection show ovs-if-br-ex
[ 225.991421] configure-ovs.sh[1543]: + nmcli connection show br-ex
[ 226.020914] configure-ovs.sh[1543]: + rm -f /etc/NetworkManager/system-connections/br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-port-br-ex.nmconnection /etc/NetworkManager/system-connections/ovs-if-phys0.nmconnection /etc/NetworkManager/system-connections/ovs-port-phys0.nmconnection
[ 226.025934] configure-ovs.sh[1543]: + ovs-vsctl --timeout=30 --if-exists del-br br-int -- --if-exists del-br br-local -- --if-exists del-br br-ex
[ 226.038062] configure-ovs.sh[1543]: + [[ -n '' ]]
[ 261.914091] tun: Universal TUN/TAP device driver, 1.6
[ 280.127824] coreos-boot-mount-generator: using dev-disk-by\x2dlabel-boot for boot mount to /boot
[ 1101.334522] systemd[1]: [email protected]: Consumed 546ms CPU time
[ 1101.382782] systemd[1]: user-1000.slice: Consumed 610ms CPU time
[ 1101.425322] systemd[1]: run-user-1000.mount: Consumed 0 CPU time
[ 1101.471648] systemd[1]: [email protected]: Consumed 7ms CPU time
[ 1108.729046] systemd[1]: crio-542ab1d373c3913a7a64ffc69d53d9cb51b8e30f081df6baed77aaad84e19194.scope: Consumed 103ms CPU time
[ 1113.603500] systemd[1]: crio-conmon-542ab1d373c3913a7a64ffc69d53d9cb51b8e30f081df6baed77aaad84e19194.scope: Consumed 67ms CPU time
[ 1118.426284] systemd[1]: crio-7932f700d28a8f7470206ea865eb38443dd5ab84151608bb9a2850afb8aea1dd.scope: Consumed 990ms CPU time
[ 1118.461900] systemd[1]: crio-6aa9e7043d18f26bd60e0741b5c3ae0a49cfa23f7cd45d2c409174e7e4a1ffb3.scope: Consumed 1.731s CPU time
[ 1130.928140] systemd[1]: crio-conmon-7932f700d28a8f7470206ea865eb38443dd5ab84151608bb9a2850afb8aea1dd.scope: Consumed 79ms CPU time
[ 1133.282420] systemd[1]: crio-conmon-6aa9e7043d18f26bd60e0741b5c3ae0a49cfa23f7cd45d2c409174e7e4a1ffb3.scope: Consumed 74ms CPU time
[ 1135.454516] systemd[1]: crio-1187f0394fc53b639be9f85d3d9d9bc50675bbaf52648151a62dafb3204ca457.scope: Consumed 3.007s CPU time
[ 1139.784699] systemd[1]: crio-conmon-1187f0394fc53b639be9f85d3d9d9bc50675bbaf52648151a62dafb3204ca457.scope: Consumed 72ms CPU time
[ 1158.951169] device br0 left promiscuous mode
[ 1158.960649] device tun0 left promiscuous mode
[ 1158.971621] device vxlan_sys_4789 left promiscuous mode
[ 1158.980623] device ovs-system left promiscuous mode
[ 1158.998998] device ovs-system entered promiscuous mode
[ 1159.002424] No such timeout policy "ovs_test_tp"
[ 1159.003171] Failed to associated timeout policy `ovs_test_tp'
[ 1159.008635] device br0 entered promiscuous mode
[ 1162.108018] device vxlan_sys_4789 entered promiscuous mode
[ 1162.149383] device tun0 entered promiscuous mode
[ 1303.419561] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1303.434520] IPv6: ADDRCONF(NETDEV_UP): veth2616e6e0: link is not ready
[ 1303.436268] IPv6: ADDRCONF(NETDEV_CHANGE): veth2616e6e0: link becomes ready
[ 1303.437756] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1320.926419] device veth2616e6e0 entered promiscuous mode
[ 1321.978197] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1321.994197] IPv6: ADDRCONF(NETDEV_UP): veth66e160c2: link is not ready
[ 1321.995987] IPv6: ADDRCONF(NETDEV_CHANGE): veth66e160c2: link becomes ready
[ 1321.997372] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1322.312450] device veth66e160c2 entered promiscuous mode
[ 1324.303778] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1324.342921] IPv6: ADDRCONF(NETDEV_UP): veth39f472e0: link is not ready
[ 1324.344606] IPv6: ADDRCONF(NETDEV_CHANGE): veth39f472e0: link becomes ready
[ 1324.346281] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1324.413607] device veth39f472e0 entered promiscuous mode
[ 1327.334556] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1327.356392] IPv6: ADDRCONF(NETDEV_UP): veth17a173cc: link is not ready
[ 1327.357658] IPv6: ADDRCONF(NETDEV_CHANGE): veth17a173cc: link becomes ready
[ 1327.359131] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1327.404421] device veth17a173cc entered promiscuous mode
[ 1329.846169] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1329.855277] IPv6: ADDRCONF(NETDEV_UP): veth90ff1809: link is not ready
[ 1329.856654] IPv6: ADDRCONF(NETDEV_CHANGE): veth90ff1809: link becomes ready
[ 1329.857919] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1330.071490] device veth90ff1809 entered promiscuous mode
[ 1331.779373] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1331.792344] IPv6: ADDRCONF(NETDEV_UP): veth4f4efdab: link is not ready
[ 1331.812528] IPv6: ADDRCONF(NETDEV_CHANGE): veth4f4efdab: link becomes ready
[ 1331.814965] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1333.674146] device veth4f4efdab entered promiscuous mode
[ 1347.804902] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1347.855320] IPv6: ADDRCONF(NETDEV_UP): vetha5961711: link is not ready
[ 1347.870484] IPv6: ADDRCONF(NETDEV_CHANGE): vetha5961711: link becomes ready
[ 1347.872360] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1349.684576] device vetha5961711 entered promiscuous mode
[ 1364.271978] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1364.319427] IPv6: ADDRCONF(NETDEV_UP): veth3d0ce92a: link is not ready
[ 1364.326556] IPv6: ADDRCONF(NETDEV_CHANGE): veth3d0ce92a: link becomes ready
[ 1364.333227] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1390.189588] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1390.253682] IPv6: ADDRCONF(NETDEV_UP): veth3c82e9be: link is not ready
[ 1390.282564] IPv6: ADDRCONF(NETDEV_CHANGE): veth3c82e9be: link becomes ready
[ 1390.284047] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1394.475901] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1394.497309] IPv6: ADDRCONF(NETDEV_UP): vethc118cd66: link is not ready
[ 1394.502419] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1394.504898] IPv6: ADDRCONF(NETDEV_CHANGE): vethc118cd66: link becomes ready
[ 1394.507200] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1394.533041] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1397.634411] device veth3c82e9be entered promiscuous mode
[ 1400.949650] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1400.976178] IPv6: ADDRCONF(NETDEV_UP): vethfeb2d1dd: link is not ready
[ 1400.984985] IPv6: ADDRCONF(NETDEV_CHANGE): vethfeb2d1dd: link becomes ready
[ 1400.987309] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1402.003094] device vethc118cd66 entered promiscuous mode
[ 1403.587691] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1403.608878] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1403.622392] IPv6: ADDRCONF(NETDEV_UP): vethce77ab6a: link is not ready
[ 1403.624905] IPv6: ADDRCONF(NETDEV_CHANGE): vethce77ab6a: link becomes ready
[ 1403.626694] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1403.645657] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1403.967023] device vethf3b925c5 entered promiscuous mode
[ 1409.405847] device vethfeb2d1dd entered promiscuous mode
[ 1412.648548] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1412.651110] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1412.664437] IPv6: ADDRCONF(NETDEV_UP): veth7b3984ed: link is not ready
[ 1412.666641] IPv6: ADDRCONF(NETDEV_CHANGE): veth7b3984ed: link becomes ready
[ 1412.668367] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1412.690328] device vethce77ab6a entered promiscuous mode
[ 1412.722879] IPv6: ADDRCONF(NETDEV_UP): vethc804061b: link is not ready
[ 1412.724496] IPv6: ADDRCONF(NETDEV_CHANGE): vethc804061b: link becomes ready
[ 1412.726404] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1413.819733] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1413.842571] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1413.850756] IPv6: ADDRCONF(NETDEV_UP): veth19bbb730: link is not ready
[ 1413.852981] IPv6: ADDRCONF(NETDEV_CHANGE): veth19bbb730: link becomes ready
[ 1413.854524] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1413.873358] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1414.683884] device veth4f5285d9 entered promiscuous mode
[ 1416.219618] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1416.249004] IPv6: ADDRCONF(NETDEV_UP): veth9bc34325: link is not ready
[ 1416.250667] IPv6: ADDRCONF(NETDEV_CHANGE): veth9bc34325: link becomes ready
[ 1416.252392] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1416.268397] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1416.294913] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1416.300342] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1416.323771] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1420.004539] device veth7b3984ed entered promiscuous mode
[ 1426.964621] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1426.980710] IPv6: ADDRCONF(NETDEV_UP): veth54a303ea: link is not ready
[ 1426.983157] IPv6: ADDRCONF(NETDEV_CHANGE): veth54a303ea: link becomes ready
[ 1426.984688] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1429.742341] device vethc804061b entered promiscuous mode
[ 1435.136033] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1435.168394] IPv6: ADDRCONF(NETDEV_UP): veth70b0f090: link is not ready
[ 1435.169602] IPv6: ADDRCONF(NETDEV_CHANGE): veth70b0f090: link becomes ready
[ 1435.171084] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1442.132003] device veth7b3984ed left promiscuous mode
[ 1442.457939] device veth145b959b entered promiscuous mode
[ 1460.655430] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1460.657705] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1460.682453] IPv6: ADDRCONF(NETDEV_UP): veth0bb30377: link is not ready
[ 1460.685986] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1460.687534] IPv6: ADDRCONF(NETDEV_CHANGE): veth0bb30377: link becomes ready
[ 1460.689147] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1460.723116] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1460.730769] IPv6: ADDRCONF(NETDEV_UP): veth510339d6: link is not ready
[ 1460.737073] IPv6: ADDRCONF(NETDEV_CHANGE): veth510339d6: link becomes ready
[ 1460.738812] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1461.773854] device veth19bbb730 entered promiscuous mode
[ 1469.093901] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1469.101456] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1469.131787] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1469.135313] IPv6: ADDRCONF(NETDEV_UP): veth659ef19d: link is not ready
[ 1469.139781] IPv6: ADDRCONF(NETDEV_CHANGE): veth659ef19d: link becomes ready
[ 1469.141518] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1472.918232] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1472.931028] IPv6: ADDRCONF(NETDEV_UP): veth49c2ada8: link is not ready
[ 1472.935848] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1472.938562] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1472.939643] IPv6: ADDRCONF(NETDEV_CHANGE): veth49c2ada8: link becomes ready
[ 1472.941074] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1472.987746] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1472.996443] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1473.142238] device veth9bc34325 entered promiscuous mode
[ 1475.861764] device veth6acd2c7e entered promiscuous mode
[ 1479.105241] device veth0b85618b entered promiscuous mode
[ 1481.715703] device veth54a303ea entered promiscuous mode
[ 1482.806908] device veth70b0f090 entered promiscuous mode
[ 1483.055464] device veth4f5285d9 left promiscuous mode
[ 1483.972977] device veth0bb30377 entered promiscuous mode
[ 1484.942250] device veth3da09ceb entered promiscuous mode
[ 1486.700683] device veth510339d6 entered promiscuous mode
[ 1487.808397] device veth558601d3 entered promiscuous mode
[ 1490.269032] device veth659ef19d entered promiscuous mode
[ 1490.562941] device vethc804061b left promiscuous mode
[ 1492.921476] device veth49c2ada8 entered promiscuous mode
[ 1494.794953] device veth3177c197 entered promiscuous mode
[ 1495.343002] device veth1b14baed entered promiscuous mode
[ 1495.740123] device veth145b959b left promiscuous mode
[ 1498.083009] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1498.108635] IPv6: ADDRCONF(NETDEV_UP): vethacc024ff: link is not ready
[ 1498.110379] IPv6: ADDRCONF(NETDEV_CHANGE): vethacc024ff: link becomes ready
[ 1498.112107] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1498.415050] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1498.450193] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1498.452186] device vethacc024ff entered promiscuous mode
[ 1498.818220] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1498.834863] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1498.836325] device veth90e176b9 entered promiscuous mode
[ 1498.876298] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1498.887346] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1499.032056] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1499.033796] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1499.065315] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1499.070022] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1499.956600] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1499.972346] IPv6: ADDRCONF(NETDEV_UP): vethb7fc84ae: link is not ready
[ 1499.973657] IPv6: ADDRCONF(NETDEV_CHANGE): vethb7fc84ae: link becomes ready
[ 1499.975225] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1500.127587] device vethd9b861ff entered promiscuous mode
[ 1501.171411] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1501.194007] IPv6: ADDRCONF(NETDEV_UP): veth1e3df8c9: link is not ready
[ 1501.197030] IPv6: ADDRCONF(NETDEV_CHANGE): veth1e3df8c9: link becomes ready
[ 1501.198604] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1501.242657] device veth87b5f9ad entered promiscuous mode
[ 1501.578138] device veth238e6446 entered promiscuous mode
[ 1502.269712] device veth8c1c104a entered promiscuous mode
[ 1502.889143] device vethb7fc84ae entered promiscuous mode
[ 1503.957689] device veth1e3df8c9 entered promiscuous mode
or.
[ 4.901740] systemd[1]: Reached target Ignition Subsequent Boot Disk Setup.
[ OK ] Reached target Ignition Subsequent Boot Disk Setup.
[ 4.904010] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 4.905962] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ OK ] Reached target Slices.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Listening on Open-iSCSI iscsid Socket.
[ OK ] Reached target Timers.
[ OK ] Listening on Journal Socket.
[ OK ] Started Memstrack Anylazing Service.
Starting Setup Virtual Console...
Starting Load Kernel Modules...
Starting Journal Service...
Starting Create list of required st…ce nodes for the current kernel...
[ OK ] Started Forward Password Requests to Clevis Directory Watch.
[ OK ] Listening on Open-iSCSI iscsiuio Socket.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Reached target Paths.
[ OK ] Listening on udev Kernel Socket.
[ OK ] Reached target Sockets.
Starting iSCSI UserSpace I/O driver...
[ OK ] Started Create list of required sta…vice nodes for the current kernel.
Starting Create Static Device Nodes in /dev...
[ OK ] Started Setup Virtual Console.
Starting dracut ask for additional cmdline parameters...
[ OK ] Started iSCSI UserSpace I/O driver.
[ OK ] Started dracut ask for additional cmdline parameters.
Starting dracut cmdline hook...
[ 5.181319] fuse: init (API version 7.31)
[ OK ] Started Load Kernel Modules.
Starting Apply Kernel Variables...
[ 5.236625] Loading iSCSI transport class v2.0-870.
[ OK ] Started Create Static Device Nodes in /dev.
[ 5.342917] systemd[1]: Started Apply Kernel Variables.
[ OK ] Started Apply Kernel Variables.
[ 5.528234] iscsi: registered transport (tcp)
[ 5.554075] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
[ 5.555997] systemd-vconsole-setup[321]: KD_FONT_OP_GET failed while trying to get the font metadata: Function not implemented
[ 5.558287] systemd-vconsole-setup[321]: Fonts will not be copied to remaining consoles
[ 5.560241] systemd-modules-load[322]: Inserted module 'fuse'
[ 5.562292] dracut-cmdline[357]: dracut-47.83.202105220305-0 dracut-049-95.git20200804.el8_3.4
[ 5.563772] dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=dm_multipath earlyprintk=serial BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-3d116c3f18e7d4dc21dcc5953f042877c035e09e54b687268aefc4db876faf32/vmlinuz-4.18.0-240.22.1.el8_3.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/3d116c3f18e7d4dc21dcc5953f042877c035e09e54b687268aefc4db876faf32/0 root=UUID=1157d507-2596-4188-a8cc-adb244bf7be1 rw rootflags=prjquota
[ 5.582967] iscsi: registered transport (qla4xxx)
[ 5.583778] QLogic iSCSI HBA Driver
[ 5.591176] libcxgbi:libcxgbi_init_module: Chelsio iSCSI driver library libcxgbi v0.9.1-ko (Apr. 2015)
[ 5.625957] Chelsio T4-T6 iSCSI Driver cxgb4i v0.9.5-ko (Apr. 2015)
[ 5.627070] iscsi: registered transport (cxgb4i)
[    5.634681] cnic: QLogic cnic Driver v2.5.22 (July 20, 2015)
[ 5.640244] QLogic NetXtreme II iSCSI Driver bnx2i v2.7.10.1 (Jul 16, 2014)
[ 5.641396] iscsi: registered transport (bnx2i)
[ 5.651402] iscsi: registered transport (be2iscsi)
[ 5.652168] In beiscsi_module_init, tt=000000002f1e5544
[ OK ] Started dracut cmdline hook.
[ 5.851773] systemd[1]: Started dracut cmdline hook.
[ 5.854019] systemd[1]: Starting dracut pre-udev hook...
Starting dracut pre-udev hook...
[ 5.912917] device-mapper: uevent: version 1.0.3
[ 5.913847] device-mapper: ioctl: 4.42.0-ioctl (2020-02-27) initialised: [email protected]
[    5.990712] systemd[1]: Started dracut pre-udev hook.
[  OK  ] Started dracut pre-udev hook.
[ 5.993963] systemd[1]: Starting udev Kernel Device Manager...
Starting udev Kernel Device Manager...
[    6.265832] systemd[1]: Started udev Kernel Device Manager.
[  OK  ] Started udev Kernel Device Manager.
[ 6.269813] systemd[1]: Starting dracut pre-trigger hook...
Starting dracut pre-trigger hook...
[ 6.303402] dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation
[    6.523699] systemd[1]: Started dracut pre-trigger hook.
[  OK  ] Started dracut pre-trigger hook.
[ 6.527110] systemd[1]: Starting udev Coldplug all Devices...
Starting udev Coldplug all Devices...
[ 6.665019] systemd[1]: Mounting Kernel Configuration File System...
Mounting Kernel Configuration File System...
[    6.672980] systemd[1]: Mounted Kernel Configuration File System.
[  OK  ] Mounted Kernel Configuration File System.
[ 6.759545] systemd[1]: Started udev Coldplug all Devices.
[ OK ] Started udev Coldplug all Devices.
[ 6.762905] systemd[1]: Starting udev Wait for Complete Device Initialization...
Starting udev Wait for Complete Device Initialization...
[ 6.788755] virtio_blk virtio0: [vda] 65011712 512-byte logical blocks (33.3 GB/31.0 GiB)
[ 6.836754] systemd-udevd[598]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
[ 7.065823] vda: vda1 vda2 vda3 vda4
[ 7.577018] systemd[1]: Found device /dev/disk/by-uuid/1157d507-2596-4188-a8cc-adb244bf7be1.
[ OK ] Found device /dev/disk/by-uuid/1157d507-2596-4188-a8cc-adb244bf7be1.
[    7.579883] systemd[1]: Found device /dev/disk/by-label/root.
[  OK  ] Found device /dev/disk/by-label/root.
[    7.582328] systemd[1]: Started udev Wait for Complete Device Initialization.
[  OK  ] Started udev Wait for Complete Device Initialization.
[ 7.598295] systemd[1]: Starting Device-Mapper Multipath Device Controller...
Starting Device-Mapper Multipath Device Controller...
[ 7.602299] systemd[1]: Reached target Initrd Root Device.
[ OK ] Reached target Initrd Root Device.
[ 7.625946] systemd[1]: Started Device-Mapper Multipath Device Controller.
[ OK ] Started Device-Mapper Multipath Device Controller.
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Reached target Local File Systems.
[ 7.632672] systemd[1]: Reached target Local File Systems (Pre).
[ 7.634042] systemd[1]: Reached target Local File Systems.
[ 7.635362] systemd[1]: Starting Create Volatile Files and Directories...
Starting Create Volatile Files and Directories...
Starting Open-iSCSI...
[ 7.639365] systemd[1]: Starting Open-iSCSI...
[ 7.710045] multipathd[620]: --------start up--------
[ 7.711197] multipathd[620]: read /etc/multipath.conf
[ 7.712446] multipathd[620]: /etc/multipath.conf does not exist, blacklisting all devices.
[ 7.714088] multipathd[620]: You can run "/sbin/mpathconf --enable" to create
[ 7.715508] multipathd[620]: /etc/multipath.conf. See man mpathconf(8) for more details
[ 7.721534] multipathd[620]: path checkers start up
[ 7.739326] multipathd[620]: /etc/multipath.conf does not exist, blacklisting all devices.
[ 7.741005] multipathd[620]: You can run "/sbin/mpathconf --enable" to create
[ 7.742682] multipathd[620]: /etc/multipath.conf. See man mpathconf(8) for more details
[ OK ] Started Create Volatile Files and Directories.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Basic System.
[ 7.749031] systemd[1]: Started Create Volatile Files and Directories.
[ 7.750570] systemd[1]: Reached target System Initialization.
[ 7.751777] systemd[1]: Reached target Basic System.
[ 7.767174] iscsid[622]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
[ OK ] Started Open-iSCSI.
[ 7.770337] iscsid[622]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
[ 7.777740] iscsid[622]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Starting dracut initqueue hook...
[ 7.780152] iscsid[622]: If using hardware iscsi like qla4xxx this message can be ignored.
[ 7.781760] systemd[1]: Started Open-iSCSI.
[ 7.782703] iscsid[622]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
[ 7.784562] systemd[1]: Starting dracut initqueue hook...
[ OK ] Started dracut initqueue hook.
[ 7.796699] systemd[1]: Started dracut initqueue hook.
Starting dracut pre-mount hook...
[ 7.799674] systemd[1]: Starting dracut pre-mount hook...
[ 7.799756] systemd[1]: Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
[ 7.804272] systemd[1]: Reached target Remote File Systems.
[ OK ] Started dracut pre-mount hook.
[ 7.827076] systemd[1]: Started dracut pre-mount hook.
Starting File System Check on /dev/…507-2596-4188-a8cc-adb244bf7be1...
[ 7.829529] systemd[1]: Starting File System Check on /dev/disk/by-uuid/1157d507-2596-4188-a8cc-adb244bf7be1...
[ 7.927390] systemd-fsck[645]: /usr/sbin/fsck.xfs: XFS file system.
[ OK ] Started File System Check on /dev/d…7d507-2596-4188-a8cc-adb244bf7be1.
[ 7.929977] systemd[1]: Started File System Check on /dev/disk/by-uuid/1157d507-2596-4188-a8cc-adb244bf7be1.
[ 7.931821] systemd[1]: Mounting /sysroot...
Mounting /sysroot...
[ 8.059315] SGI XFS with ACLs, security attributes, no debug enabled
[ 8.065270] XFS (vda4): Mounting V5 Filesystem
[ 8.774709] XFS (vda4): Ending clean mount
[ 8.846682] XFS (vda4): Quotacheck needed: Please wait.
[ ***] A start job is running for /sysroot (55s / 1min 33s)
[   60.356197] XFS (vda4): Quotacheck: Done.
[   63.151637] systemd[1]: Mounted /sysroot.
[  OK  ] Mounted /sysroot.
[ 63.154432] systemd[1]: Starting OSTree Prepare OS/...
Starting OSTree Prepare OS/...
[ 63.246262] ostree-prepare-root[666]: Resolved OSTree target to: /sysroot/ostree/deploy/rhcos/deploy/83a10ad5556f852d3d151abaec59dac20ba1af7aa2fcd33b702330660cb80f4d.0
[ 63.280098] ostree-prepare-root[666]: sysroot configured read-only: 1, currently writable: 1
[ 63.290954] systemd[1]: sysroot-ostree-deploy-rhcos-deploy-83a10ad5556f852d3d151abaec59dac20ba1af7aa2fcd33b702330660cb80f4d.0-etc.mount: Succeeded.
[   63.293436] systemd[1]: sysroot-ostree-deploy-rhcos-deploy-83a10ad5556f852d3d151abaec59dac20ba1af7aa2fcd33b702330660cb80f4d.0.mount: Succeeded.
[   63.295870] systemd[1]: Started OSTree Prepare OS/.
[  OK  ] Started OSTree Prepare OS/.
[   63.298145] systemd[1]: Reached target Initrd Root File System.
[  OK  ] Reached target Initrd Root File System.
[ 63.301048] systemd[1]: Starting Reload Configuration from the Real Root...
Starting Reload Configuration from the Real Root...
[   63.303765] systemd[1]: Reached target Subsequent (Not Ignition) boot complete.
[  OK  ] Reached target Subsequent (Not Ignition) boot complete.
[ 63.374743] systemd[1]: Reloading.
[ 63.869353] systemd[1]: initrd-parse-etc.service: Succeeded.
[   63.870589] systemd[1]: Started Reload Configuration from the Real Root.
[  OK  ] Started Reload Configuration from the Real Root.
[   63.872951] systemd[1]: Reached target Initrd File Systems.
[  OK  ] Reached target Initrd File Systems.
[   63.874920] systemd[1]: Reached target Initrd Default Target.
[  OK  ] Reached target Initrd Default Target.
[ 63.878609] systemd[1]: Starting dracut pre-pivot and cleanup hook...
Starting dracut pre-pivot and cleanup hook...
[ 64.054925] dracut-pre-pivot[736]: Jun 20 19:13:45 | /etc/multipath.conf does not exist, blacklisting all devices.
[ 64.056698] dracut-pre-pivot[736]: Jun 20 19:13:45 | /etc/multipath.conf does not exist, blacklisting all devices.
[ 64.060482] dracut-pre-pivot[736]: Jun 20 19:13:45 | You can run "/sbin/mpathconf --enable" to create
[ 64.061989] dracut-pre-pivot[736]: Jun 20 19:13:45 | You can run "/sbin/mpathconf --enable" to create
[ 64.063512] dracut-pre-pivot[736]: Jun 20 19:13:45 | /etc/multipath.conf. See man mpathconf(8) for more details
[ 64.065100] dracut-pre-pivot[736]: Jun 20 19:13:45 | /etc/multipath.conf. See man mpathconf(8) for more details
[   64.073176] systemd[1]: Started dracut pre-pivot and cleanup hook.
[  OK  ] Started dracut pre-pivot and cleanup hook.
[ 64.076283] systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Starting Cleaning Up and Shutting Down Daemons...
[   64.161013] systemd[1]: clevis-luks-askpass.path: Succeeded.
[   64.162316] systemd[1]: Stopped Forward Password Requests to Clevis Directory Watch.
[  OK  ] Stopped Forward Password Requests to Clevis Directory Watch.
[ 64.164766] systemd[1]: dracut-pre-pivot.service: Succeeded.
[   64.165889] systemd[1]: Stopped dracut pre-pivot and cleanup hook.
[  OK  ] Stopped dracut pre-pivot and cleanup hook.
[ 64.168649] systemd[1]: dracut-pre-mount.service: Succeeded.
[   64.169876] systemd[1]: Stopped dracut pre-mount hook.
[  OK  ] Stopped dracut pre-mount hook.
[   64.173492] systemd[1]: Stopped target Initrd Default Target.
[  OK  ] Stopped target Initrd Default Target.
[   64.177324] systemd[1]: Stopped target Initrd Root Device.
[  OK  ] Stopped target Initrd Root Device.
[   64.179261] systemd[1]: Stopped target Basic System.
[  OK  ] Stopped target Basic System.
[   64.181196] systemd[1]: Stopped target Slices.
[  OK  ] Stopped target Slices.
[   64.182922] systemd[1]: Stopped target Subsequent (Not Ignition) boot complete.
[  OK  ] Stopped target Subsequent (Not Ignition) boot complete.
[   64.185176] systemd[1]: Stopped target Paths.
[  OK  ] Stopped target Paths.
[   64.186692] systemd[1]: Stopped target Sockets.
[  OK  ] Stopped target Sockets.
[   64.188414] systemd[1]: Stopped target System Initialization.
[  OK  ] Stopped target System Initialization.
[   64.190242] systemd[1]: Stopped target Local Encrypted Volumes.
[  OK  ] Stopped target Local Encrypted Volumes.
[ 64.192526] systemd[1]: systemd-ask-password-console.path: Succeeded.
[   64.195007] systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
[  OK  ] Stopped Dispatch Password Requests to Console Directory Watch.
[ 64.197636] systemd[1]: systemd-tmpfiles-setup.service: Succeeded.
[   64.198844] systemd[1]: Stopped Create Volatile Files and Directories.
[  OK  ] Stopped Create Volatile Files and Directories.
[   64.201031] systemd[1]: Stopped target Local File Systems.
[  OK  ] Stopped target Local File Systems.
[   64.203003] systemd[1]: Stopped target Local File Systems (Pre).
[  OK  ] Stopped target Local File Systems (Pre).
[   64.204927] systemd[1]: Stopped target Ignition Subsequent Boot Disk Setup.
[  OK  ] Stopped target Ignition Subsequent Boot Disk Setup.
[   64.207270] systemd[1]: Stopped target Swap.
[  OK  ] Stopped target Swap.
[   64.296952] systemd[1]: Stopped target Timers.
[  OK  ] Stopped target Timers.
[   64.299295] systemd[1]: systemd-sysctl.service: Succeeded.
[   64.300651] systemd[1]: Stopped Apply Kernel Variables.
[  OK  ] Stopped Apply Kernel Variables.
[ 64.303044] systemd[1]: systemd-modules-load.service: Succeeded.
[ 64.304313] systemd[1]: Stopped Load Kernel Modules.
[ OK ] Stopped Load Kernel Modules.
[   64.306307] systemd[1]: Stopped target Remote File Systems.
[  OK  ] Stopped target Remote File Systems.
[   64.308578] systemd[1]: Stopped target Remote File Systems (Pre).
[  OK  ] Stopped target Remote File Systems (Pre).
[ 64.310541] systemd[1]: dracut-initqueue.service: Succeeded.
[   64.311602] systemd[1]: Stopped dracut initqueue hook.
[  OK  ] Stopped dracut initqueue hook.
Stopping Open-iSCSI...
[   64.314187] systemd[1]: Stopping Open-iSCSI...
[ 64.315478] iscsid[622]: iscsid: iscsid shutting down.
[ 64.318275] systemd[1]: iscsid.service: Succeeded.
[ 64.319431] systemd[1]: Stopped Open-iSCSI.
[ OK ] Stopped Open-iSCSI.
[   64.322676] systemd[1]: initrd-cleanup.service: Succeeded.
[   64.324167] systemd[1]: Started Cleaning Up and Shutting Down Daemons.
[  OK  ] Started Cleaning Up and Shutting Down Daemons.
[ 64.327923] systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Stopping Device-Mapper Multipath Device Controller...
[   64.331186] systemd[1]: Stopping iSCSI UserSpace I/O driver...
Stopping iSCSI UserSpace I/O driver...
[ 64.336313] systemd[1]: iscsid.socket: Succeeded.
[   64.340955] systemd[1]: Closed Open-iSCSI iscsid Socket.
[  OK  ] Closed Open-iSCSI iscsid Socket.
[ 64.345213] systemd[1]: iscsiuio.service: Succeeded.
[   64.347846] systemd[1]: Stopped iSCSI UserSpace I/O driver.
[  OK  ] Stopped iSCSI UserSpace I/O driver.
[   64.350281] systemd[1]: iscsiuio.socket: Succeeded.
[  OK  ] Closed Open-iSCSI iscsiuio Socket.
[   64.358077] systemd[1]: Closed Open-iSCSI iscsiuio Socket.
[ 64.402877] multipathd[769]: Jun 20 19:13:45 | /etc/multipath.conf does not exist, blacklisting all devices.
[ 64.405544] multipathd[769]: Jun 20 19:13:45 | You can run "/sbin/mpathconf --enable" to create
[ 64.407331] multipathd[769]: Jun 20 19:13:45 | /etc/multipath.conf. See man mpathconf(8) for more details
[ 64.409603] multipathd[620]: --------shut down-------
[ 64.410674] multipathd[769]: ok
[   64.412981] systemd[1]: multipathd.service: Succeeded.
[  OK  ] Stopped Device-Mapper Multipath Device Controller.
[   64.415867] systemd[1]: Stopped Device-Mapper Multipath Device Controller.
[   64.417301] systemd[1]: systemd-udev-settle.service: Succeeded.
[   64.418678] systemd[1]: Stopped udev Wait for Complete Device Initialization.
[  OK  ] Stopped udev Wait for Complete Device Initialization.
[   64.421720] systemd[1]: systemd-udev-trigger.service: Succeeded.
[  OK  ] Stopped udev Coldplug all Devices.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ 64.426189] systemd[1]: Stopped udev Coldplug all Devices.
[ 64.427391] systemd[1]: dracut-pre-trigger.service: Succeeded.
[ 64.430554] systemd[1]: Stopped dracut pre-trigger hook.
[ 64.431790] systemd[1]: Stopping udev Kernel Device Manager...
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Stopped dracut ask for additional cmdline parameters.
[   64.449174] systemd[1]: systemd-udevd.service: Succeeded.
[  OK  ] Stopped Create Static Device Nodes in /dev.
[   64.454976] systemd[1]: Stopped udev Kernel Device Manager.
[  OK  ] Stopped Create list of required sta…vice nodes for the current kernel.
[ 64.456280] systemd[1]: dracut-pre-udev.service: Succeeded.
[ OK ] Closed udev Control Socket.
[  OK  ] Closed udev Kernel Socket.
[   64.461184] systemd[1]: Stopped dracut pre-udev hook.
[ 64.464555] systemd[1]: dracut-cmdline.service: Succeeded.
[ 64.465728] systemd[1]: Stopped dracut cmdline hook.
[ 64.466790] systemd[1]: dracut-cmdline-ask.service: Succeeded.
Starting Cleanup udevd DB...
[ 64.470444] systemd[1]: Stopped dracut ask for additional cmdline parameters.
[ 64.471976] systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded.
[ 64.473408] systemd[1]: Stopped Create Static Device Nodes in /dev.
[ 64.474746] systemd[1]: kmod-static-nodes.service: Succeeded.
[ 64.476045] systemd[1]: Stopped Create list of required static device nodes for the current kernel.
[ 64.477868] systemd[1]: systemd-udevd-control.socket: Succeeded.
[ 64.479146] systemd[1]: Closed udev Control Socket.
[ 64.480815] systemd[1]: systemd-udevd-kernel.socket: Succeeded.
[ 64.482392] systemd[1]: Closed udev Kernel Socket.
[ 64.483368] systemd[1]: Starting Cleanup udevd DB...
[ OK ] Started Cleanup udevd DB.
[   64.574392] systemd[1]: initrd-udevadm-cleanup-db.service: Succeeded.
[  OK  ] Reached target Switch Root.
[ 64.575852] systemd[1]: Started Cleanup udevd DB.
[ 64.576782] systemd[1]: Reached target Switch Root.
[ 64.578289] systemd[1]: Starting Switch Root...
Starting Switch Root...
[ 64.665487] systemd[1]: Switching root.
[ 64.682524] systemd-journald[323]: Received SIGTERM from PID 1 (systemd).
[ 65.074742] printk: systemd: 30 output lines suppressed due to ratelimiting
[ 74.947593] audit: type=1404 audit(1624216435.921:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
[ 76.824302] SELinux: policy capability network_peer_controls=1
[ 76.825243] SELinux: policy capability open_perms=1
[ 76.826028] SELinux: policy capability extended_socket_class=1
[ 76.826957] SELinux: policy capability always_check_network=0
[ 76.827872] SELinux: policy capability cgroup_seclabel=1
[ 76.828907] SELinux: policy capability nnp_nosuid_transition=1
[ 76.847701] audit: type=1403 audit(1624216437.822:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
[ 76.850067] systemd[1]: Successfully loaded SELinux policy in 2.060829s.
[ 79.534870] systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 31.021ms.
[ 79.708813] systemd[1]: systemd 239 (239-41.el8_3.2) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
[ 79.713715] systemd[1]: Detected virtualization bhyve.
[ 79.714729] systemd[1]: Detected architecture x86-64.
Welcome to Red Hat Enterprise Linux CoreOS 47.83.202105220305-0 (Ootpa)!
[ 79.811490] systemd[1]: Set hostname to <crc-pkjt4-master-0>.
[ 86.041898] coreos-boot-mount-generator: using dev-disk-by\x2dlabel-boot for boot mount to /boot
[ 94.637904] systemd[1]: /usr/lib/systemd/system/bootupd.service:22: Unknown lvalue 'ProtectHostname' in section 'Service'
[ 94.706999] systemd[1]: systemd-journald.service: Succeeded.
[ 94.708841] systemd[1]: systemd-journald.service: Consumed 0 CPU time
[ 94.710252] systemd[1]: initrd-switch-root.service: Succeeded.
[ 94.711804] systemd[1]: Stopped Switch Root.
[ OK ] Stopped Switch Root.
[ 94.713415] systemd[1]: initrd-switch-root.service: Consumed 0 CPU time
[ 94.714944] systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart.
[ 94.716659] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[ 94.718114] systemd[1]: Stopped Journal Service.
[ OK ] Stopped Journal Service.
[ 94.719585] systemd[1]: systemd-journald.service: Consumed 0 CPU time
[ 94.723068] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 94.725691] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ 94.729013] systemd[1]: Created slice system-serial\x2dgetty.slice.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ 94.731716] systemd[1]: Stopped target Switch Root.
[ OK ] Stopped target Switch Root.
[ 94.734571] systemd[1]: Created slice system-sshd\x2dkeygen.slice.
[ OK ] Created slice system-sshd\x2dkeygen.slice.
[ 94.738205] systemd[1]: Created slice User and Session Slice.
[ OK ] Created slice User and Session Slice.
[ 94.740580] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ 94.743787] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[ OK ] Created slice system-systemd\x2dfsck.slice.
[ 94.747022] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[ OK ] Set up automount Arbitrary Executab…rmats File System Automount Point.
[ 94.751600] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 94.753345] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 94.756407] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 94.760217] systemd[1]: Mounting Kernel Debug File System...
Mounting Kernel Debug File System...
[ 94.762425] systemd[1]: Stopped target Initrd File Systems.