last ok molecule-kubevirt action
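For reference, a capture like the one below can be produced by wrapping the virtctl console attach in script(1), which records everything printed to the terminal into a file named typescript. This is only a minimal sketch; the VMI name "instance" and the namespace "default" are placeholders, not values taken from this log.

# Hypothetical reproduction of this capture (VMI name and namespace are assumptions):
# script(1) records the session to ./typescript while virtctl streams the serial console.
script -c "virtctl console instance --namespace default" typescript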
Starting virtctl console | |
Script started, file is typescript | |
Successfully connected to instance console. The escape sequence is ^] | |
[ 0.000000] Linux version 5.6.6-300.fc32.x86_64 ([email protected]) (gcc version 10.0.1 20200328 (Red Hat 10.0.1-0.11) (GCC)) #1 SMP Tue Apr 21 13:44:19 UTC 2020 | |
[ 0.000000] Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.6.6-300.fc32.x86_64 root=UUID=d1b37ed4-3bbb-40b2-a6ba-f377f0c90217 ro no_timer_check net.ifnames=0 console=tty1 console=ttyS0,115200n8 | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' | |
[ 0.000000] x86/fpu: xstate_offset[3]: 960, xstate_sizes[3]: 64 | |
[ 0.000000] x86/fpu: xstate_offset[4]: 1024, xstate_sizes[4]: 64 | |
[ 0.000000] x86/fpu: xstate_offset[9]: 2688, xstate_sizes[9]: 8 | |
[ 0.000000] x86/fpu: Enabled xstate features 0x21b, context size is 2696 bytes, using 'standard' format. | |
[ 0.000000] BIOS-provided physical RAM map: | |
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable | |
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable | |
[ 0.000000] BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved | |
[ 0.000000] NX (Execute Disable) protection: active | |
[ 0.000000] SMBIOS 2.8 present. | |
[ 0.000000] DMI: KubeVirt None/RHEL-AV, BIOS 1.14.0-1.el8s 04/01/2014 | |
[ 0.000000] last_pfn = 0x7ffdd max_arch_pfn = 0x10000000000 | |
[ 0.000000] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT | |
[ 0.000000] found SMP MP-table at [mem 0x000f5c20-0x000f5c2f] | |
[ 0.000000] Using GB pages for direct mapping | |
[ 0.000000] RAMDISK: [mem 0x34f54000-0x367a1fff] | |
[ 0.000000] ACPI: Early table checksum verification disabled | |
[ 0.000000] ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS ) | |
[ 0.000000] ACPI: RSDT 0x000000007FFE1FF9 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) | |
[ 0.000000] ACPI: FACP 0x000000007FFE1E29 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) | |
[ 0.000000] ACPI: DSDT 0x000000007FFE0040 001DE9 (v01 BOCHS BXPC 00000001 BXPC 00000001) | |
[ 0.000000] ACPI: FACS 0x000000007FFE0000 000040 | |
[ 0.000000] ACPI: APIC 0x000000007FFE1F1D 000078 (v01 BOCHS BXPC 00000001 BXPC 00000001) | |
[ 0.000000] ACPI: MCFG 0x000000007FFE1F95 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) | |
[ 0.000000] ACPI: WAET 0x000000007FFE1FD1 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) | |
[ 0.000000] No NUMA configuration found | |
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] | |
[ 0.000000] NODE_DATA(0) allocated [mem 0x7ffb2000-0x7ffdcfff] | |
[ 0.000000] Zone ranges: | |
[ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff] | |
[ 0.000000] DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] | |
[ 0.000000] Normal empty | |
[ 0.000000] Device empty | |
[ 0.000000] Movable zone start for each node | |
[ 0.000000] Early memory node ranges | |
[ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009efff] | |
[ 0.000000] node 0: [mem 0x0000000000100000-0x000000007ffdcfff] | |
[ 0.000000] Zeroed struct page in unavailable ranges: 133 pages | |
[ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] | |
[ 0.000000] ACPI: PM-Timer IO Port: 0x608 | |
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) | |
[ 0.000000] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 | |
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) | |
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) | |
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) | |
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) | |
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) | |
[ 0.000000] Using ACPI (MADT) for SMP configuration information | |
[ 0.000000] smpboot: Allowing 1 CPUs, 0 hotplug CPUs | |
[ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] | |
[ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] | |
[ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] | |
[ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] | |
[ 0.000000] [mem 0xc0000000-0xfed1bfff] available for PCI devices | |
[ 0.000000] Booting paravirtualized kernel on bare hardware | |
[ 0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns | |
[ 0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:1 nr_cpu_ids:1 nr_node_ids:1 | |
[ 0.000000] percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u2097152 | |
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 515942 | |
[ 0.000000] Policy zone: DMA32 | |
[ 0.000000] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.6.6-300.fc32.x86_64 root=UUID=d1b37ed4-3bbb-40b2-a6ba-f377f0c90217 ro no_timer_check net.ifnames=0 console=tty1 console=ttyS0,115200n8 | |
[ 0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) | |
[ 0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) | |
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off | |
[ 0.000000] Memory: 1993340K/2096620K available (14339K kernel code, 2400K rwdata, 4868K rodata, 2452K init, 6136K bss, 103280K reserved, 0K cma-reserved) | |
[ 0.000000] random: get_random_u64 called from __kmem_cache_create+0x3e/0x620 with crng_init=0 | |
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 | |
[ 0.000000] ftrace: allocating 40698 entries in 159 pages | |
[ 0.000000] ftrace: allocated 159 pages with 6 groups | |
[ 0.000000] rcu: Hierarchical RCU implementation. | |
[ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=1. | |
[ 0.000000] Tasks RCU enabled. | |
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. | |
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1 | |
[ 0.000000] NR_IRQS: 524544, nr_irqs: 256, preallocated irqs: 16 | |
[ 0.000000] random: crng done (trusting CPU's manufacturer) | |
[ 0.000000] Console: colour VGA+ 80x25 | |
[ 0.000000] printk: console [tty1] enabled | |
[ 0.000000] printk: console [ttyS0] enabled | |
[ 0.000000] ACPI: Core revision 20200110 | |
[ 0.003000] APIC: Switch to symmetric I/O mode setup | |
[ 0.008000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 | |
[ 0.012000] tsc: Unable to calibrate against PIT | |
[ 0.013000] tsc: using PMTIMER reference calibration | |
[ 0.013000] tsc: Detected 2095.032 MHz processor | |
[ 0.001524] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e32db87ee8, max_idle_ns: 440795252272 ns | |
[ 0.002876] Calibrating delay loop (skipped), value calculated using timer frequency.. 4190.06 BogoMIPS (lpj=2095032) | |
[ 0.004142] pid_max: default: 32768 minimum: 301 | |
[ 0.006042] LSM: Security Framework initializing | |
[ 0.007905] Yama: becoming mindful. | |
[ 0.009428] SELinux: Initializing. | |
[ 0.010244] *** VALIDATE selinux *** | |
[ 0.011456] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) | |
[ 0.011683] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) | |
[ 0.015044] *** VALIDATE tmpfs *** | |
[ 0.027687] *** VALIDATE proc *** | |
[ 0.036078] *** VALIDATE cgroup *** | |
[ 0.036343] *** VALIDATE cgroup2 *** | |
[ 0.051583] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 | |
[ 0.053681] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 | |
[ 0.055155] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization | |
[ 0.056144] Spectre V2 : Mitigation: Full AMD retpoline | |
[ 0.056624] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch | |
[ 0.057164] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp | |
[ 0.519777] Freeing SMP alternatives memory: 36K | |
[ 0.669583] smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x1, stepping: 0x2) | |
[ 0.685021] Performance Events: PMU not available due to virtualization, using software events only. | |
[ 0.688857] rcu: Hierarchical SRCU implementation. | |
[ 0.701650] NMI watchdog: Perf NMI watchdog permanently disabled | |
[ 0.704287] smp: Bringing up secondary CPUs ... | |
[ 0.704583] smp: Brought up 1 node, 1 CPU | |
[ 0.704583] smpboot: Max logical packages: 1 | |
[ 0.704737] smpboot: Total of 1 processors activated (4190.06 BogoMIPS) | |
[ 0.733318] devtmpfs: initialized | |
[ 0.740627] x86/mm: Memory block size: 128MB | |
[ 0.755484] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns | |
[ 0.756937] futex hash table entries: 256 (order: 2, 16384 bytes, linear) | |
[ 0.764211] pinctrl core: initialized pinctrl subsystem | |
[ 0.775894] PM: RTC time: 00:18:59, date: 2022-06-20 | |
[ 0.776560] thermal_sys: Registered thermal governor 'fair_share' | |
[ 0.776622] thermal_sys: Registered thermal governor 'bang_bang' | |
[ 0.776921] thermal_sys: Registered thermal governor 'step_wise' | |
[ 0.777163] thermal_sys: Registered thermal governor 'user_space' | |
[ 0.787345] NET: Registered protocol family 16 | |
[ 0.791609] audit: initializing netlink subsys (disabled) | |
[ 0.797174] audit: type=2000 audit(1655684338.806:1): state=initialized audit_enabled=0 res=1 | |
[ 0.798177] cpuidle: using governor menu | |
[ 0.802260] ACPI: bus type PCI registered | |
[ 0.802651] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 | |
[ 0.807303] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) | |
[ 0.808167] PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 | |
[ 0.810896] PCI: Using configuration type 1 for base access | |
[ 0.843803] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages | |
[ 0.844155] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages | |
[ 1.355744] cryptd: max_cpu_qlen set to 1000 | |
[ 1.400141] alg: No test for 842 (842-generic) | |
[ 1.401374] alg: No test for 842 (842-scomp) | |
[ 1.462085] ACPI: Added _OSI(Module Device) | |
[ 1.462371] ACPI: Added _OSI(Processor Device) | |
[ 1.462565] ACPI: Added _OSI(3.0 _SCP Extensions) | |
[ 1.462609] ACPI: Added _OSI(Processor Aggregator Device) | |
[ 1.462953] ACPI: Added _OSI(Linux-Dell-Video) | |
[ 1.463149] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) | |
[ 1.463384] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) | |
[ 1.485583] ACPI: 1 ACPI AML tables successfully acquired and loaded | |
[ 1.506679] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc-early' as unstable because the skew is too large: | |
[ 1.507249] clocksource: 'refined-jiffies' wd_now: fffb7210 wd_last: fffb7020 mask: ffffffff | |
[ 1.507625] clocksource: 'tsc-early' cs_now: 5fb996a17 cs_last: 5b062e6d3 mask: ffffffffffffffff | |
[ 1.508202] tsc: Marking TSC unstable due to clocksource watchdog | |
[ 1.510108] ACPI: Interpreter enabled | |
[ 1.512334] ACPI: (supports S0 S5) | |
[ 1.512678] ACPI: Using IOAPIC for interrupt routing | |
[ 1.514583] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug | |
[ 1.515583] ACPI: Enabled 1 GPEs in block 00 to 3F | |
[ 1.546079] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) | |
[ 1.546891] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] | |
[ 1.550156] acpi PNP0A08:00: _OSC: platform does not support [LTR] | |
[ 1.552397] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME AER PCIeCapability] | |
[ 1.552923] PCI host bridge to bus 0000:00 | |
[ 1.554042] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] | |
[ 1.554358] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] | |
[ 1.554612] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] | |
[ 1.555635] pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] | |
[ 1.555983] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] | |
[ 1.556618] pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] | |
[ 1.557760] pci_bus 0000:00: root bus resource [bus 00-ff] | |
[ 1.559003] pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 | |
[ 1.565876] pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 | |
[ 1.568965] pci 0000:00:01.0: reg 0x10: [mem 0xfb000000-0xfbffffff pref] | |
[ 1.572659] pci 0000:00:01.0: reg 0x18: [mem 0xfea10000-0xfea10fff] | |
[ 1.580663] pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] | |
[ 1.582847] pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 | |
[ 1.583583] pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] | |
[ 1.584678] pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 | |
[ 1.587271] pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] | |
[ 1.592410] pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 | |
[ 1.596281] pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] | |
[ 1.600397] pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 | |
[ 1.602657] pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] | |
[ 1.610252] pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 | |
[ 1.612660] pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] | |
[ 1.618801] pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 | |
[ 1.621287] pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] | |
[ 1.625949] pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 | |
[ 1.628277] pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] | |
[ 1.633827] pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 | |
[ 1.634583] pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO | |
[ 1.635849] pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 | |
[ 1.643270] pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] | |
[ 1.644652] pci 0000:00:1f.2: reg 0x24: [mem 0xfea18000-0xfea18fff] | |
[ 1.647329] pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 | |
[ 1.650431] pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] | |
[ 1.656901] pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 | |
[ 1.659677] pci 0000:01:00.0: reg 0x14: [mem 0xfe800000-0xfe800fff] | |
[ 1.664766] pci 0000:01:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] | |
[ 1.670183] pci 0000:00:02.0: PCI bridge to [bus 01] | |
[ 1.670583] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] | |
[ 1.670754] pci 0000:00:02.0: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] | |
[ 1.672848] pci 0000:02:00.0: [1af4:1048] type 00 class 0x010000 | |
[ 1.675668] pci 0000:02:00.0: reg 0x14: [mem 0xfe600000-0xfe600fff] | |
[ 1.679662] pci 0000:02:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] | |
[ 1.684713] pci 0000:00:02.1: PCI bridge to [bus 02] | |
[ 1.685017] pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] | |
[ 1.685614] pci 0000:00:02.1: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] | |
[ 1.688638] pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 | |
[ 1.691643] pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] | |
[ 1.695671] pci 0000:03:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] | |
[ 1.700226] pci 0000:00:02.2: PCI bridge to [bus 03] | |
[ 1.700507] pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] | |
[ 1.700619] pci 0000:00:02.2: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] | |
[ 1.702749] pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 | |
[ 1.706662] pci 0000:04:00.0: reg 0x14: [mem 0xfe200000-0xfe200fff] | |
[ 1.712163] pci 0000:04:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] | |
[ 1.714949] pci 0000:00:02.3: PCI bridge to [bus 04] | |
[ 1.715251] pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] | |
[ 1.715583] pci 0000:00:02.3: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] | |
[ 1.717010] pci 0000:05:00.0: [1af4:1042] type 00 class 0x010000 | |
[ 1.720193] pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] | |
[ 1.725157] pci 0000:05:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] | |
[ 1.727968] pci 0000:00:02.4: PCI bridge to [bus 05] | |
[ 1.728277] pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] | |
[ 1.728628] pci 0000:00:02.4: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] | |
[ 1.730519] pci 0000:06:00.0: [1af4:1045] type 00 class 0x00ff00 | |
[ 1.735955] pci 0000:06:00.0: reg 0x20: [mem 0xfc200000-0xfc203fff 64bit pref] | |
[ 1.738267] pci 0000:00:02.5: PCI bridge to [bus 06] | |
[ 1.738633] pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] | |
[ 1.738944] pci 0000:00:02.5: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] | |
[ 1.740892] pci 0000:00:02.6: PCI bridge to [bus 07] | |
[ 1.741189] pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] | |
[ 1.741481] pci 0000:00:02.6: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] | |
[ 1.752616] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) | |
[ 1.754336] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11) | |
[ 1.755237] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) | |
[ 1.756256] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) | |
[ 1.758385] ACPI: PCI Interrupt Link [LNKE] (IRQs 5 *10 11) | |
[ 1.759283] ACPI: PCI Interrupt Link [LNKF] (IRQs 5 *10 11) | |
[ 1.760298] ACPI: PCI Interrupt Link [LNKG] (IRQs 5 10 *11) | |
[ 1.762441] ACPI: PCI Interrupt Link [LNKH] (IRQs 5 10 *11) | |
[ 1.763674] ACPI: PCI Interrupt Link [GSIA] (IRQs *16) | |
[ 1.764091] ACPI: PCI Interrupt Link [GSIB] (IRQs *17) | |
[ 1.764445] ACPI: PCI Interrupt Link [GSIC] (IRQs *18) | |
[ 1.764825] ACPI: PCI Interrupt Link [GSID] (IRQs *19) | |
[ 1.765810] ACPI: PCI Interrupt Link [GSIE] (IRQs *20) | |
[ 1.766296] ACPI: PCI Interrupt Link [GSIF] (IRQs *21) | |
[ 1.766767] ACPI: PCI Interrupt Link [GSIG] (IRQs *22) | |
[ 1.767614] ACPI: PCI Interrupt Link [GSIH] (IRQs *23) | |
[ 1.773221] iommu: Default domain type: Translated | |
[ 1.777827] pci 0000:00:01.0: vgaarb: setting as boot VGA device | |
[ 1.778284] pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none | |
[ 1.778753] pci 0000:00:01.0: vgaarb: bridge control possible | |
[ 1.779638] vgaarb: loaded | |
[ 1.783097] SCSI subsystem initialized | |
[ 1.785955] ACPI: bus type USB registered | |
[ 1.787008] usbcore: registered new interface driver usbfs | |
[ 1.787867] usbcore: registered new interface driver hub | |
[ 1.788932] usbcore: registered new device driver usb | |
[ 1.790397] pps_core: LinuxPPS API ver. 1 registered | |
[ 1.790627] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <[email protected]> | |
[ 1.791303] PTP clock support registered | |
[ 1.793725] EDAC MC: Ver: 3.0.0 | |
[ 1.797905] PCI: Using ACPI for IRQ routing | |
[ 1.820583] NetLabel: Initializing | |
[ 1.820583] NetLabel: domain hash size = 128 | |
[ 1.820583] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO | |
[ 1.821829] NetLabel: unlabeled traffic allowed by default | |
[ 1.829677] clocksource: Switched to clocksource refined-jiffies | |
[ 1.967326] *** VALIDATE bpf *** | |
[ 1.969384] VFS: Disk quotas dquot_6.6.0 | |
[ 1.969920] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) | |
[ 1.972235] *** VALIDATE ramfs *** | |
[ 1.972562] *** VALIDATE hugetlbfs *** | |
[ 1.974800] pnp: PnP ACPI init | |
[ 1.980331] system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved | |
[ 1.983087] pnp: PnP ACPI: found 5 devices | |
[ 2.002557] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns | |
[ 2.002557] clocksource: Switched to clocksource acpi_pm | |
[ 2.002557] pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 | |
[ 2.002557] pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 | |
[ 2.002557] pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 | |
[ 2.006995] pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 | |
[ 2.014770] pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 | |
[ 2.015126] pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 | |
[ 2.015512] pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 | |
[ 2.042525] pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] | |
[ 2.042969] pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] | |
[ 2.043223] pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] | |
[ 2.043468] pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] | |
[ 2.044150] pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] | |
[ 2.044404] pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] | |
[ 2.045416] pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] | |
[ 2.046929] pci 0000:00:02.0: PCI bridge to [bus 01] | |
[ 2.053457] pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] | |
[ 2.057205] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] | |
[ 2.059131] pci 0000:00:02.0: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] | |
[ 2.061736] pci 0000:00:02.1: PCI bridge to [bus 02] | |
[ 2.063849] pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] | |
[ 2.066161] pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] | |
[ 2.072015] pci 0000:00:02.1: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] | |
[ 2.076185] pci 0000:00:02.2: PCI bridge to [bus 03] | |
[ 2.076465] pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] | |
[ 2.078959] pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] | |
[ 2.080599] pci 0000:00:02.2: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] | |
[ 2.084032] pci 0000:00:02.3: PCI bridge to [bus 04] | |
[ 2.084318] pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] | |
[ 2.087977] pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] | |
[ 2.089981] pci 0000:00:02.3: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] | |
[ 2.092431] pci 0000:00:02.4: PCI bridge to [bus 05] | |
[ 2.093025] pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] | |
[ 2.095009] pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] | |
[ 2.096629] pci 0000:00:02.4: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] | |
[ 2.100137] pci 0000:00:02.5: PCI bridge to [bus 06] | |
[ 2.100447] pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] | |
[ 2.102288] pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] | |
[ 2.104069] pci 0000:00:02.5: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] | |
[ 2.106524] pci 0000:00:02.6: PCI bridge to [bus 07] | |
[ 2.107205] pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] | |
[ 2.109424] pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] | |
[ 2.112461] pci 0000:00:02.6: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] | |
[ 2.115121] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] | |
[ 2.115461] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] | |
[ 2.116133] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] | |
[ 2.116405] pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] | |
[ 2.116956] pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] | |
[ 2.117512] pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] | |
[ 2.118068] pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] | |
[ 2.118522] pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] | |
[ 2.118985] pci_bus 0000:01: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] | |
[ 2.119486] pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] | |
[ 2.120145] pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] | |
[ 2.120426] pci_bus 0000:02: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] | |
[ 2.121104] pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] | |
[ 2.121511] pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] | |
[ 2.122493] pci_bus 0000:03: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] | |
[ 2.123214] pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] | |
[ 2.123457] pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] | |
[ 2.124040] pci_bus 0000:04: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] | |
[ 2.124910] pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] | |
[ 2.125350] pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] | |
[ 2.125779] pci_bus 0000:05: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] | |
[ 2.126370] pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] | |
[ 2.126730] pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] | |
[ 2.127167] pci_bus 0000:06: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] | |
[ 2.127680] pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] | |
[ 2.128061] pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] | |
[ 2.128502] pci_bus 0000:07: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] | |
[ 2.131196] NET: Registered protocol family 2 | |
[ 2.146990] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) | |
[ 2.148054] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) | |
[ 2.149405] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) | |
[ 2.151881] TCP: Hash tables configured (established 16384 bind 16384) | |
[ 2.155134] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) | |
[ 2.156342] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) | |
[ 2.159870] NET: Registered protocol family 1 | |
[ 2.160780] NET: Registered protocol family 44 | |
[ 2.161983] pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] | |
[ 2.163097] PCI: CLS 0 bytes, default 64 | |
[ 2.169352] Trying to unpack rootfs image as initramfs... | |
[ 4.542382] Freeing initrd memory: 24888K | |
[ 4.553360] Initialise system trusted keyrings | |
[ 4.556745] Key type blacklist registered | |
[ 4.570830] workingset: timestamp_bits=36 max_order=19 bucket_order=0 | |
[ 4.590533] zbud: loaded | |
[ 4.607530] Platform Keyring initialized | |
[ 4.779329] NET: Registered protocol family 38 | |
[ 4.780051] Key type asymmetric registered | |
[ 4.780361] Asymmetric key parser 'x509' registered | |
[ 4.781031] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) | |
[ 4.783074] io scheduler mq-deadline registered | |
[ 4.783377] io scheduler kyber registered | |
[ 4.784312] io scheduler bfq registered | |
[ 4.790248] atomic64_test: passed for x86-64 platform with CX8 and with SSE | |
[ 4.799221] PCI Interrupt Link [GSIG] enabled at IRQ 22 | |
[ 4.810033] pcieport 0000:00:02.0: PME: Signaling with IRQ 24 | |
[ 4.814322] pcieport 0000:00:02.0: AER: enabled with IRQ 24 | |
[ 4.815530] pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.828382] pcieport 0000:00:02.1: PME: Signaling with IRQ 25 | |
[ 4.833004] pcieport 0000:00:02.1: AER: enabled with IRQ 25 | |
[ 4.833567] pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.840281] pcieport 0000:00:02.2: PME: Signaling with IRQ 26 | |
[ 4.842580] pcieport 0000:00:02.2: AER: enabled with IRQ 26 | |
[ 4.842747] pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.851160] pcieport 0000:00:02.3: PME: Signaling with IRQ 27 | |
[ 4.852792] pcieport 0000:00:02.3: AER: enabled with IRQ 27 | |
[ 4.853347] pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.861147] pcieport 0000:00:02.4: PME: Signaling with IRQ 28 | |
[ 4.862527] pcieport 0000:00:02.4: AER: enabled with IRQ 28 | |
[ 4.862737] pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.870268] pcieport 0000:00:02.5: PME: Signaling with IRQ 29 | |
[ 4.872779] pcieport 0000:00:02.5: AER: enabled with IRQ 29 | |
[ 4.873369] pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.882007] pcieport 0000:00:02.6: PME: Signaling with IRQ 30 | |
[ 4.883826] pcieport 0000:00:02.6: AER: enabled with IRQ 30 | |
[ 4.884780] pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- LLActRep+ | |
[ 4.888042] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 | |
[ 4.892086] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 | |
[ 4.898164] ACPI: Power Button [PWRF] | |
[ 4.935197] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled | |
[ 4.938058] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A | |
[ 4.963893] Non-volatile memory driver v1.3 | |
[ 4.964804] Linux agpgart interface v0.103 | |
[ 4.976341] PCI Interrupt Link [GSIA] enabled at IRQ 16 | |
[ 4.983075] ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode | |
[ 4.983620] ahci 0000:00:1f.2: flags: 64bit ncq only | |
[ 4.996057] scsi host0: ahci | |
[ 5.002047] scsi host1: ahci | |
[ 5.004075] scsi host2: ahci | |
[ 5.005918] scsi host3: ahci | |
[ 5.007537] scsi host4: ahci | |
[ 5.009383] scsi host5: ahci | |
[ 5.010769] ata1: SATA max UDMA/133 abar m4096@0xfea18000 port 0xfea18100 irq 31 | |
[ 5.011347] ata2: SATA max UDMA/133 abar m4096@0xfea18000 port 0xfea18180 irq 31 | |
[ 5.011738] ata3: SATA max UDMA/133 abar m4096@0xfea18000 port 0xfea18200 irq 31 | |
[ 5.012176] ata4: SATA max UDMA/133 abar m4096@0xfea18000 port 0xfea18280 irq 31 | |
[ 5.012555] ata5: SATA max UDMA/133 abar m4096@0xfea18000 port 0xfea18300 irq 31 | |
[ 5.019811] ata6: SATA max UDMA/133 abar m4096@0xfea18000 port 0xfea18380 irq 31 | |
[ 5.034078] libphy: Fixed MDIO Bus: probed | |
[ 5.037211] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver | |
[ 5.037633] ehci-pci: EHCI PCI platform driver | |
[ 5.039214] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver | |
[ 5.039723] ohci-pci: OHCI PCI platform driver | |
[ 5.039939] uhci_hcd: USB Universal Host Controller Interface driver | |
[ 5.041973] usbcore: registered new interface driver usbserial_generic | |
[ 5.042729] usbserial: USB Serial support registered for generic | |
[ 5.044602] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 | |
[ 5.054848] serio: i8042 KBD port at 0x60,0x64 irq 1 | |
[ 5.055883] serio: i8042 AUX port at 0x60,0x64 irq 12 | |
[ 5.060250] mousedev: PS/2 mouse device common for all mice | |
[ 5.065959] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 | |
[ 5.068818] rtc_cmos 00:03: RTC can wake from S4 | |
[ 5.081750] rtc_cmos 00:03: registered as rtc0 | |
[ 5.082339] rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram | |
[ 5.083993] device-mapper: uevent: version 1.0.3 | |
[ 5.086593] device-mapper: ioctl: 4.42.0-ioctl (2020-02-27) initialised: [email protected] | |
[ 5.091527] hid: raw HID events driver (C) Jiri Kosina | |
[ 5.093060] usbcore: registered new interface driver usbhid | |
[ 5.093480] usbhid: USB HID core driver | |
[ 5.094910] drop_monitor: Initializing network drop monitor service | |
[ 5.097096] Initializing XFRM netlink socket | |
[ 5.099219] NET: Registered protocol family 10 | |
[ 5.347547] ata2: SATA link down (SStatus 0 SControl 300) | |
[ 5.351228] ata1: SATA link down (SStatus 0 SControl 300) | |
[ 5.351564] ata6: SATA link down (SStatus 0 SControl 300) | |
[ 5.352244] ata5: SATA link down (SStatus 0 SControl 300) | |
[ 5.352945] ata4: SATA link down (SStatus 0 SControl 300) | |
[ 5.353352] ata3: SATA link down (SStatus 0 SControl 300) | |
[ 5.425106] Segment Routing with IPv6 | |
[ 5.426313] mip6: Mobile IPv6 | |
[ 5.426785] NET: Registered protocol family 17 | |
[ 5.435711] RAS: Correctable Errors collector initialized. | |
[ 5.436999] IPI shorthand broadcast: enabled | |
[ 5.437778] SSE version of gcm_enc/dec engaged. | |
[ 5.940596] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 | |
[ 7.826578] registered taskstats version 1 | |
[ 7.827736] Loading compiled-in X.509 certificates | |
[ 8.039529] Loaded X.509 cert 'Fedora kernel signing key: 0d97d514d1651893ca89cc557d3830fffcdfaba2' | |
[ 8.039630] zswap: loaded using pool lzo/zbud | |
[ 8.044752] Key type ._fscrypt registered | |
[ 8.044752] Key type .fscrypt registered | |
[ 8.044752] Key type fscrypt-provisioning registered | |
[ 8.514128] Key type big_key registered | |
[ 8.738185] Key type encrypted registered | |
[ 8.739272] ima: No TPM chip found, activating TPM-bypass! | |
[ 8.739925] ima: Allocated hash algorithm: sha256 | |
[ 8.742586] ima: No architecture policies found | |
[ 8.748284] PM: Magic number: 10:769:304 | |
[ 8.749048] tty tty33: hash matches | |
[ 8.750310] rtc_cmos 00:03: setting system clock to 2022-06-20T00:19:08 UTC (1655684348) | |
[ 8.752066] Unstable clock detected, switching default tracing clock to "global" | |
[ 8.752066] If you want to keep using the local clock, then add: | |
[ 8.752066] "trace_clock=local" | |
[ 8.752066] on the kernel command line | |
[ 8.797816] Freeing unused decrypted memory: 2040K | |
[ 8.865830] Freeing unused kernel image (initmem) memory: 2452K | |
[ 8.865830] Write protecting the kernel read-only data: 22528k | |
[ 8.873768] Freeing unused kernel image (text/rodata gap) memory: 2044K | |
[ 8.876135] Freeing unused kernel image (rodata/data gap) memory: 1276K | |
[ 8.998234] x86/mm: Checked W+X mappings: passed, no W+X pages found. | |
[ 8.999174] rodata_test: all tests were successful | |
[ 8.999545] Run /init as init process | |
[ 9.286619] systemd[1]: systemd v245.4-1.fc32 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified) | |
[ 9.290468] systemd[1]: Detected virtualization qemu. | |
[ 9.291337] systemd[1]: Detected architecture x86-64. | |
[ 9.291927] systemd[1]: Running in initial RAM disk. | |
Welcome to [0;34mFedora 32 (Cloud Edition) dracut-050-26.git20200316.fc32 (Initramfs)[0m! | |
[ 9.303064] systemd[1]: No hostname configured. | |
[ 9.304165] systemd[1]: Set hostname to <localhost>. | |
[ 9.306222] systemd[1]: Initializing machine ID from random generator. | |
[ 12.443824] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. | |
[[0;32m OK [0m] Started [0;1;39mDispatch Password …ts to Console Directory Watch[0m. | |
[ 12.458862] systemd[1]: Reached target Local Encrypted Volumes. | |
[[0;32m OK [0m] Reached target [0;1;39mLocal Encrypted Volumes[0m. | |
[ 12.466188] systemd[1]: Reached target Local File Systems. | |
[[0;32m OK [0m] Reached target [0;1;39mLocal File Systems[0m. | |
[ 12.469030] systemd[1]: Reached target Paths. | |
[[0;32m OK [0m] Reached target [0;1;39mPaths[0m. | |
[ 12.477579] systemd[1]: Reached target Slices. | |
[[0;32m OK [0m] Reached target [0;1;39mSlices[0m. | |
[ 12.479424] systemd[1]: Reached target Swap. | |
[[0;32m OK [0m] Reached target [0;1;39mSwap[0m. | |
[ 12.482183] systemd[1]: Reached target Timers. | |
[[0;32m OK [0m] Reached target [0;1;39mTimers[0m. | |
[ 12.497348] systemd[1]: Listening on Journal Audit Socket. | |
[[0;32m OK [0m] Listening on [0;1;39mJournal Audit Socket[0m. | |
[ 12.502584] systemd[1]: Listening on Journal Socket (/dev/log). | |
[[0;32m OK [0m] Listening on [0;1;39mJournal Socket (/dev/log)[0m. | |
[ 12.507332] systemd[1]: Listening on Journal Socket. | |
[[0;32m OK [0m] Listening on [0;1;39mJournal Socket[0m. | |
[ 12.512609] systemd[1]: Listening on udev Control Socket. | |
[[0;32m OK [0m] Listening on [0;1;39mudev Control Socket[0m. | |
[ 12.516363] systemd[1]: Listening on udev Kernel Socket. | |
[[0;32m OK [0m] Listening on [0;1;39mudev Kernel Socket[0m. | |
[ 12.519117] systemd[1]: Reached target Sockets. | |
[[0;32m OK [0m] Reached target [0;1;39mSockets[0m. | |
[ 12.586364] systemd[1]: Starting Create list of static device nodes for the current kernel... | |
Starting [0;1;39mCreate list of st…odes for the current kernel[0m... | |
[ 12.698054] systemd[1]: Starting Journal Service... | |
Starting [0;1;39mJournal Service[0m... | |
[ 12.698054] systemd[1]: Condition check resulted in Load Kernel Modules being skipped. | |
[ 12.846193] systemd[1]: Starting Apply Kernel Variables... | |
Starting [0;1;39mApply Kernel Variables[0m... | |
[ 13.004523] systemd[1]: Starting Setup Virtual Console... | |
Starting [0;1;39mSetup Virtual Console[0m... | |
[ 13.149762] systemd[1]: Finished Create list of static device nodes for the current kernel. | |
[[0;32m OK [0m] Finished [0;1;39mCreate list of st… nodes for the current kernel[0m. | |
[ 13.276304] systemd[1]: Starting Create Static Device Nodes in /dev... | |
Starting [0;1;39mCreate Static Device Nodes in /dev[0m... | |
[ 13.892550] systemd[1]: Finished Setup Virtual Console. | |
[[0;32m OK [0m] Finished [0;1;39mSetup Virtual Console[0m. | |
[ 13.935556] systemd[1]: Finished Apply Kernel Variables. | |
[[0;32m OK [0m] Finished [0;1;39mApply Kernel Variables[0m. | |
[ 13.963808] systemd[1]: Condition check resulted in dracut ask for additional cmdline parameters being skipped. | |
[ 14.040974] systemd[1]: Starting dracut cmdline hook... | |
Starting [0;1;39mdracut cmdline hook[0m... | |
[ 14.282539] systemd[1]: Finished Create Static Device Nodes in /dev. | |
[[0;32m OK [0m] Finished [0;1;39mCreate Static Device Nodes in /dev[0m. | |
[ 16.612703] systemd[1]: Finished dracut cmdline hook. | |
[[0;32m OK [0m] Finished [0;1;39mdracut cmdline hook[0m. | |
[ 16.677752] systemd[1]: Starting dracut pre-udev hook... | |
Starting [0;1;39mdracut pre-udev hook[0m... | |
[ 18.238582] systemd[1]: Finished dracut pre-udev hook. | |
[[0;32m OK [0m] Finished [0;1;39mdracut pre-udev hook[0m. | |
[ 18.302243] systemd[1]: Starting udev Kernel Device Manager... | |
Starting [0;1;39mudev Kernel Device Manager[0m... | |
[[0;1;31m*[0m[0;31m* [0m] (1 of 3) A start job is running for…-a6ba-f377f0c90217 (7s / no limit) | |
M | |
[K[[0;31m*[0;1;31m*[0m[0;31m* [0m] (1 of 3) A start job is running for…-a6ba-f377f0c90217 (8s / no limit) | |
M | |
[K[ [0;31m*[0;1;31m*[0m[0;31m* [0m] (2 of 3) A start job is running for Journal Service (8s / 1min 30s) | |
M | |
[K[ [0;31m*[0;1;31m*[0m[0;31m* [0m] (2 of 3) A start job is running for Journal Service (9s / 1min 30s) | |
M | |
[K[ [0;31m*[0;1;31m*[0m[0;31m*[0m] (2 of 3) A start job is running for Journal Service (9s / 1min 30s) | |
M | |
[K[ [0;31m*[0;1;31m*[0m] (3 of 3) A start job is running for…el Device Manager (10s / 1min 35s) | |
M | |
[K[ [0;31m*[0m] (3 of 3) A start job is running for…el Device Manager (10s / 1min 35s) | |
M | |
[K[ [0;31m*[0;1;31m*[0m] (3 of 3) A start job is running for…el Device Manager (11s / 1min 35s) | |
M | |
[K[ [0;31m*[0;1;31m*[0m[0;31m*[0m] (1 of 3) A start job is running for…a6ba-f377f0c90217 (11s / no limit) | |
M | |
[K[ [0;31m*[0;1;31m*[0m[0;31m* [0m] (1 of 3) A start job is running for…a6ba-f377f0c90217 (12s / no limit) | |
M | |
[K[ [0;31m*[0;1;31m*[0m[0;31m* [0m] (1 of 3) A start job is running for…a6ba-f377f0c90217 (12s / no limit) | |
[ 25.713599] systemd[1]: Started Journal Service. | |
M | |
[K[[0;32m OK [0m] Started [0;1;39mJournal Service[0m. | |
[K[ 25.731118] audit: type=1130 audit(1655684365.480:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Starting [0;1;39mCreate Volatile Files and Directories[0m... | |
[[0;32m OK [0m] Finished [0;1;39mCreate Volatile Files and Directories[0m. | |
[ 26.590432] audit: type=1130 audit(1655684366.339:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[[0;32m OK [0m] Started [0;1;39mudev Kernel Device Manager[0m. | |
[ 27.724179] audit: type=1130 audit(1655684367.473:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Starting [0;1;39mudev Coldplug all Devices[0m... | |
[[0;31m*[0;1;31m*[0m[0;31m* [0m] (1 of 2) A start job is running for…a6ba-f377f0c90217 (17s / no limit) | |
M | |
[K[[0;1;31m*[0m[0;31m* [0m] (1 of 2) A start job is running for…a6ba-f377f0c90217 (17s / no limit) | |
M | |
[K[[0m[0;31m* [0m] (1 of 2) A start job is running for…a6ba-f377f0c90217 (18s / no limit) | |
M | |
[K[[0;32m OK [0m] Finished [0;1;39mudev Coldplug all Devices[0m. | |
[K[ 30.902222] audit: type=1130 audit(1655684370.651:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[[0;32m OK [0m] Reached target [0;1;39mSystem Initialization[0m. | |
[[0;32m OK [0m] Reached target [0;1;39mBasic System[0m. | |
Starting [0;1;39mdracut initqueue hook[0m... | |
Mounting [0;1;39mKernel Configuration File System[0m... | |
[[0;32m OK [0m] Mounted [0;1;39mKernel Configuration File System[0m. | |
[ 33.488601] scsi host6: Virtio SCSI HBA | |
[ 34.035854] virtio_blk virtio3: [vda] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) | |
[ 34.216126] vda: vda1 | |
[ 34.402440] virtio_blk virtio4: [vdb] 2048 512-byte logical blocks (1.05 MB/1.00 MiB) | |
[[0;32m OK [0m] Found device [0;1;39m/dev/disk/by-…4-3bbb-40b2-a6ba-f377f0c90217[0m. | |
[[0;32m OK [0m] Reached target [0;1;39mInitrd Root Device[0m. | |
[[0;32m OK [0m] Finished [0;1;39mdracut initqueue hook[0m. | |
[ 39.252589] audit: type=1130 audit(1655684379.001:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[[0;32m OK [0m] Reached target [0;1;39mRemote File Systems (Pre)[0m. | |
[[0;32m OK [0m] Reached target [0;1;39mRemote File Systems[0m. | |
Starting [0;1;39mFile System Check…3bbb-40b2-a6ba-f377f0c90217[0m... | |
[[0;32m OK [0m] Finished [0;1;39mFile System Check…4-3bbb-40b2-a6ba-f377f0c90217[0m. | |
[ 39.897307] audit: type=1130 audit(1655684379.644:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Mounting [0;1;39m/sysroot[0m... | |
[ 40.195422] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null) | |
[[0;32m OK [0m] Mounted [0;1;39m/sysroot[0m. | |
[[0;32m OK [0m] Reached target [0;1;39mInitrd Root File System[0m. | |
Starting [0;1;39mReload Configuration from the Real Root[0m... | |
[ 40.617082] audit: type=1334 audit(1655684380.366:8): prog-id=5 op=UNLOAD | |
[ 40.623104] audit: type=1334 audit(1655684380.371:9): prog-id=4 op=UNLOAD | |
[ 40.624866] audit: type=1334 audit(1655684380.373:10): prog-id=3 op=UNLOAD | |
[ 40.664188] audit: type=1334 audit(1655684380.412:11): prog-id=7 op=UNLOAD | |
[ 40.675901] audit: type=1334 audit(1655684380.424:12): prog-id=6 op=UNLOAD | |
[ 44.358489] audit: type=1334 audit(1655684384.107:13): prog-id=8 op=LOAD | |
[ 44.371157] audit: type=1334 audit(1655684384.119:14): prog-id=9 op=LOAD | |
[ 44.390667] audit: type=1334 audit(1655684384.139:15): prog-id=10 op=LOAD | |
[ 44.399038] audit: type=1334 audit(1655684384.148:16): prog-id=11 op=LOAD | |
[ 44.403304] audit: type=1334 audit(1655684384.152:17): prog-id=12 op=LOAD | |
[[0;32m OK [0m] Finished [0;1;39mReload Configuration from the Real Root[0m. | |
[ 45.039677] audit: type=1130 audit(1655684384.788:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 45.045343] audit: type=1131 audit(1655684384.794:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[[0;32m OK [0m] Reached target [0;1;39mInitrd File Systems[0m. | |
[[0;32m OK [0m] Reached target [0;1;39mInitrd Default Target[0m. | |
Starting [0;1;39mdracut mount hook[0m... | |
[[0;32m OK [0m] Finished [0;1;39mdracut mount hook[0m. | |
[ 45.614919] audit: type=1130 audit(1655684385.363:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Starting [0;1;39mCleaning Up and Shutting Down Daemons[0m... | |
[[0;32m OK [0m] Stopped target [0;1;39mInitrd Default Target[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mBasic System[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mInitrd Root Device[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mPaths[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mRemote File Systems[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mRemote File Systems (Pre)[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mSlices[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mSockets[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mSystem Initialization[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mLocal Encrypted Volumes[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mDispatch Password …ts to Console Directory Watch[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mSwap[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mTimers[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mdracut mount hook[0m. | |
[ 46.113349] audit: type=1131 audit(1655684385.862:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[[0;32m OK [0m] Stopped [0;1;39mdracut initqueue hook[0m. | |
[ 46.143524] audit: type=1131 audit(1655684385.892:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[[0;32m OK [0m] Stopped [0;1;39mApply Kernel Variables[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mCreate Volatile Files and Directories[0m. | |
[[0;32m OK [0m] Stopped target [0;1;39mLocal File Systems[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mudev Coldplug all Devices[0m. | |
Stopping [0;1;39mudev Kernel Device Manager[0m... | |
[[0;32m OK [0m] Stopped [0;1;39mSetup Virtual Console[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mudev Kernel Device Manager[0m. | |
[[0;32m OK [0m] Closed [0;1;39mudev Control Socket[0m. | |
[[0;32m OK [0m] Closed [0;1;39mudev Kernel Socket[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mdracut pre-udev hook[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mdracut cmdline hook[0m. | |
Starting [0;1;39mCleanup udevd DB[0m... | |
[[0;32m OK [0m] Stopped [0;1;39mCreate Static Device Nodes in /dev[0m. | |
[[0;32m OK [0m] Stopped [0;1;39mCreate list of sta… nodes for the current kernel[0m. | |
[[0;32m OK [0m] Finished [0;1;39mCleaning Up and Shutting Down Daemons[0m. | |
[[0;32m OK [0m] Finished [0;1;39mCleanup udevd DB[0m. | |
[[0;32m OK [0m] Reached target [0;1;39mSwitch Root[0m. | |
Starting [0;1;39mSwitch Root[0m... | |
[ 47.111377] systemd-journald[293]: Received SIGTERM from PID 1 (systemd). | |
[ 51.412571] SELinux: Permission watch in class filesystem not defined in policy. | |
[ 51.413259] SELinux: Permission watch in class file not defined in policy. | |
[ 51.413592] SELinux: Permission watch_mount in class file not defined in policy. | |
[ 51.413911] SELinux: Permission watch_sb in class file not defined in policy. | |
[ 51.414344] SELinux: Permission watch_with_perm in class file not defined in policy. | |
[ 51.427091] SELinux: Permission watch_reads in class file not defined in policy. | |
[ 51.427438] SELinux: Permission watch in class dir not defined in policy. | |
[ 51.427692] SELinux: Permission watch_mount in class dir not defined in policy. | |
[ 51.429703] SELinux: Permission watch_sb in class dir not defined in policy. | |
[ 51.430091] SELinux: Permission watch_with_perm in class dir not defined in policy. | |
[ 51.430543] SELinux: Permission watch_reads in class dir not defined in policy. | |
[ 51.430653] SELinux: Permission watch in class lnk_file not defined in policy. | |
[ 51.431603] SELinux: Permission watch_mount in class lnk_file not defined in policy. | |
[ 51.432072] SELinux: Permission watch_sb in class lnk_file not defined in policy. | |
[ 51.432493] SELinux: Permission watch_with_perm in class lnk_file not defined in policy. | |
[ 51.432872] SELinux: Permission watch_reads in class lnk_file not defined in policy. | |
[ 51.433405] SELinux: Permission watch in class chr_file not defined in policy. | |
[ 51.433856] SELinux: Permission watch_mount in class chr_file not defined in policy. | |
[ 51.434932] SELinux: Permission watch_sb in class chr_file not defined in policy. | |
[ 51.435261] SELinux: Permission watch_with_perm in class chr_file not defined in policy. | |
[ 51.435616] SELinux: Permission watch_reads in class chr_file not defined in policy. | |
[ 51.436677] SELinux: Permission watch in class blk_file not defined in policy. | |
[ 51.437639] SELinux: Permission watch_mount in class blk_file not defined in policy. | |
[ 51.437639] SELinux: Permission watch_sb in class blk_file not defined in policy. | |
[ 51.439279] SELinux: Permission watch_with_perm in class blk_file not defined in policy. | |
[ 51.439901] SELinux: Permission watch_reads in class blk_file not defined in policy. | |
[ 51.440285] SELinux: Permission watch in class sock_file not defined in policy. | |
[ 51.442313] SELinux: Permission watch_mount in class sock_file not defined in policy. | |
[ 51.442788] SELinux: Permission watch_sb in class sock_file not defined in policy. | |
[ 51.443114] SELinux: Permission watch_with_perm in class sock_file not defined in policy. | |
[ 51.443447] SELinux: Permission watch_reads in class sock_file not defined in policy. | |
[ 51.449286] SELinux: Permission watch in class fifo_file not defined in policy. | |
[ 51.449860] SELinux: Permission watch_mount in class fifo_file not defined in policy. | |
[ 51.450273] SELinux: Permission watch_sb in class fifo_file not defined in policy. | |
[ 51.450823] SELinux: Permission watch_with_perm in class fifo_file not defined in policy. | |
[ 51.451192] SELinux: Permission watch_reads in class fifo_file not defined in policy. | |
[ 51.453112] SELinux: Class perf_event not defined in policy. | |
[ 51.453493] SELinux: Class lockdown not defined in policy. | |
[ 51.454050] SELinux: the above unknown classes and permissions will be allowed | |
[ 51.456135] SELinux: policy capability network_peer_controls=1 | |
[ 51.456441] SELinux: policy capability open_perms=1 | |
[ 51.456701] SELinux: policy capability extended_socket_class=1 | |
[ 51.457185] SELinux: policy capability always_check_network=0 | |
[ 51.457425] SELinux: policy capability cgroup_seclabel=1 | |
[ 51.457856] SELinux: policy capability nnp_nosuid_transition=1 | |
[ 51.579868] kauditd_printk_skb: 19 callbacks suppressed | |
[ 51.579868] audit: type=1403 audit(1655684391.328:42): auid=4294967295 ses=4294967295 lsm=selinux res=1 | |
[ 51.626430] systemd[1]: Successfully loaded SELinux policy in 3.613504s. | |
[ 52.332011] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 430.677ms. | |
[ 52.383481] systemd[1]: systemd v245.4-1.fc32 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified) | |
[ 52.387052] systemd[1]: Detected virtualization qemu. | |
[ 52.388075] systemd[1]: Detected architecture x86-64. | |
Welcome to [0;34mFedora 32 (Cloud Edition)[0m! | |
[ 52.405181] systemd[1]: Set hostname to <localhost.localdomain>. | |
[ 52.417556] systemd[1]: Initializing machine ID from random generator. | |
[ 52.420931] systemd[1]: Installed transient /etc/machine-id file. | |
[ 52.513109] audit: type=1334 audit(1655684392.262:43): prog-id=13 op=LOAD | |
[ 52.514849] audit: type=1334 audit(1655684392.263:44): prog-id=13 op=UNLOAD | |
[ 52.517277] audit: type=1334 audit(1655684392.266:45): prog-id=14 op=LOAD | |
[ 52.518959] audit: type=1334 audit(1655684392.267:46): prog-id=14 op=UNLOAD | |
[ 58.685476] systemd[1]: /usr/lib/systemd/system/sssd.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/sssd.pid → /run/sssd.pid; please update the unit file accordingly. | |
[ 59.223216] systemd[1]: /usr/lib/systemd/system/sssd-kcm.socket:7: ListenStream= references a path below legacy directory /var/run/, updating /var/run/.heim_org.h5l.kcm-socket → /run/.heim_org.h5l.kcm-socket; please update the unit file accordingly. | |
[ 59.995146] audit: type=1334 audit(1655684399.744:47): prog-id=15 op=LOAD | |
[ 60.009201] audit: type=1334 audit(1655684399.758:48): prog-id=16 op=LOAD | |
[ 60.011298] audit: type=1334 audit(1655684399.760:49): prog-id=17 op=LOAD | |
[ 60.062073] audit: type=1334 audit(1655684399.801:50): prog-id=15 op=UNLOAD | |
[ 60.066034] audit: type=1131 audit(1655684399.814:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 60.086563] systemd[1]: initrd-switch-root.service: Succeeded. | |
[ 60.090604] systemd[1]: Stopped Switch Root. | |
[[0;32m OK [0m] Stopped [0;1;39mSwitch Root[0m. | |
[ 60.099805] audit: type=1130 audit(1655684399.848:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 60.100693] audit: type=1131 audit(1655684399.849:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 60.109386] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. | |
[ 60.117509] systemd[1]: Created slice system-getty.slice. | |
[[0;32m OK [0m] Created slice [0;1;39msystem-getty.slice[0m. | |
[ 60.123920] systemd[1]: Created slice system-modprobe.slice. | |
[[0;32m OK [0m] Created slice [0;1;39msystem-modprobe.slice[0m. | |
[ 60.129385] systemd[1]: Created slice system-serial\x2dgetty.slice. | |
[[0;32m OK [0m] Created slice [0;1;39msystem-serial\x2dgetty.slice[0m. | |
[ 60.134938] systemd[1]: Created slice system-sshd\x2dkeygen.slice. | |
[[0;32m OK [0m] Created slice [0;1;39msystem-sshd\x2dkeygen.slice[0m. | |
[ 60.143397] systemd[1]: Created slice User and Session Slice. | |
[[0;32m OK [0m] Created slice [0;1;39mUser and Session Slice[0m. | |
[ 60.149282] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. | |
[[0;32m OK [0m] Started [0;1;39mDispatch Password …ts to Console Directory Watch[0m. | |
[ 60.156143] systemd[1]: Started Forward Password Requests to Wall Directory Watch. | |
[[0;32m OK [0m] Started [0;1;39mForward Password R…uests to Wall Directory Watch[0m. | |
[ 60.169664] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. | |
[[0;32m OK [0m] Set up automount [0;1;39mArbitrary…s File System Automount Point[0m. | |
[ 60.172255] systemd[1]: Reached target Local Encrypted Volumes. | |
[[0;32m OK [0m] Reached target [0;1;39mLocal Encrypted Volumes[0m. | |
[ 60.174585] systemd[1]: Stopped target Switch Root. | |
[[0;32m OK [0m] Stopped target [0;1;39mSwitch Root[0m. | |
[ 60.177190] systemd[1]: Stopped target Initrd File Systems. | |
[[0;32m OK [0m] Stopped target [0;1;39mInitrd File Systems[0m. | |
[ 60.179072] systemd[1]: Stopped target Initrd Root File System. | |
[  OK  ] Stopped target Initrd Root File System. | |
[ 60.181303] systemd[1]: Reached target Paths. | |
[  OK  ] Reached target Paths. | |
[ 60.183237] systemd[1]: Reached target Remote File Systems. | |
[  OK  ] Reached target Remote File Systems. | |
[ 60.185549] systemd[1]: Reached target Slices. | |
[  OK  ] Reached target Slices. | |
[ 60.188017] systemd[1]: Reached target Swap. | |
[  OK  ] Reached target Swap. | |
[ 60.229576] systemd[1]: Listening on Process Core Dump Socket. | |
[  OK  ] Listening on Process Core Dump Socket. | |
[ 60.235388] systemd[1]: Listening on initctl Compatibility Named Pipe. | |
[  OK  ] Listening on initctl Compatibility Named Pipe. | |
[ 60.249958] systemd[1]: Listening on udev Control Socket. | |
[  OK  ] Listening on udev Control Socket. | |
[ 60.256085] systemd[1]: Listening on udev Kernel Socket. | |
[  OK  ] Listening on udev Kernel Socket. | |
[ 60.264138] systemd[1]: Listening on User Database Manager Socket. | |
[  OK  ] Listening on User Database Manager Socket. | |
[ 60.288194] systemd[1]: Mounting Huge Pages File System... | |
Mounting Huge Pages File System... | |
[ 60.344667] systemd[1]: Mounting POSIX Message Queue File System... | |
Mounting POSIX Message Queue File System... | |
[ 60.441022] systemd[1]: Mounting Kernel Debug File System... | |
Mounting Kernel Debug File System... | |
[ 60.592650] systemd[1]: Mounting Kernel Trace File System... | |
Mounting Kernel Trace File System... | |
[ 60.726841] systemd[1]: Starting Create list of static device nodes for the current kernel... | |
Starting Create list of st…odes for the current kernel... | |
[ 60.996336] systemd[1]: Starting Load Kernel Module drm... | |
Starting Load Kernel Module drm... | |
[ 61.105979] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped. | |
[ 61.116371] systemd[1]: Stopped Journal Service. | |
[  OK  ] Stopped Journal Service. | |
[ 61.135271] audit: type=1130 audit(1655684400.884:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 61.142346] audit: type=1131 audit(1655684400.891:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 61.149591] systemd[1]: systemd-journald.service: Consumed 6.411s CPU time. | |
[ 61.239703] audit: type=1334 audit(1655684400.988:56): prog-id=18 op=LOAD | |
[ 61.407372] systemd[1]: Starting Journal Service... | |
Starting Journal Service... | |
[ 61.514430] systemd[1]: Condition check resulted in Load Kernel Modules being skipped. | |
[ 61.537665] systemd[1]: Condition check resulted in FUSE Control File System being skipped. | |
[ 61.654629] systemd[1]: Starting Remount Root and Kernel File Systems... | |
Starting Remount Root and Kernel File Systems... | |
[ 61.770073] systemd[1]: Starting Repartition Root Disk... | |
Starting Repartition Root Disk... | |
[ 61.933522] systemd[1]: Starting Apply Kernel Variables... | |
Starting Apply Kernel Variables... | |
[ 62.158853] systemd[1]: Starting udev Coldplug all Devices... | |
Starting udev Coldplug all Devices... | |
[ 62.274640] systemd[1]: sysroot.mount: Succeeded. | |
[ 62.463312] systemd[1]: Mounted Huge Pages File System. | |
[  OK  ] Mounted Huge Pages File System. | |
[ 62.523217] systemd[1]: Mounted POSIX Message Queue File System. | |
[  OK  ] Mounted POSIX Message Queue File System. | |
[ 62.570208] systemd[1]: Mounted Kernel Debug File System. | |
[  OK  ] Mounted Kernel Debug File System. | |
[ 62.623430] systemd[1]: Mounted Kernel Trace File System. | |
[  OK  ] Mounted Kernel Trace File System. | |
[ 62.673160] systemd[1]: Finished Create list of static device nodes for the current kernel. | |
[  OK  ] Finished Create list of st… nodes for the current kernel. | |
[ 62.750298] systemd[1]: [email protected]: Succeeded. | |
[ 62.825547] systemd[1]: Finished Load Kernel Module drm. | |
[  OK  ] Finished Load Kernel Module drm. | |
[ 63.196136] systemd[1]: Finished Repartition Root Disk. | |
[  OK  ] Finished Repartition Root Disk. | |
[ 63.623615] systemd[1]: Finished Apply Kernel Variables. | |
[  OK  ] Finished Apply Kernel Variables. | |
[ 63.675053] EXT4-fs (vda1): re-mounted. Opts: (null) | |
[ 63.780488] systemd[1]: Finished Remount Root and Kernel File Systems. | |
[  OK  ] Finished Remount Root and Kernel File Systems. | |
[ 63.800083] systemd[1]: Condition check resulted in First Boot Wizard being skipped. | |
[ 63.865295] systemd[1]: Starting Rebuild Hardware Database... | |
Starting Rebuild Hardware Database... | |
[ 63.965791] systemd[1]: Starting Load/Save Random Seed... | |
Starting Load/Save Random Seed... | |
[ 64.078676] systemd[1]: Starting Create System Users... | |
Starting Create System Users... | |
[ 65.374536] systemd[1]: Finished Load/Save Random Seed. | |
[  OK  ] Finished Load/Save Random Seed. | |
[ 65.389273] kauditd_printk_skb: 10 callbacks suppressed | |
[ 65.389293] audit: type=1130 audit(1655684405.138:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 65.681487] systemd[1]: Finished Create System Users. | |
[  OK  ] Finished Create System Users. | |
[ 65.695950] audit: type=1130 audit(1655684405.444:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 65.811570] systemd[1]: Starting Create Static Device Nodes in /dev... | |
Starting Create Static Device Nodes in /dev... | |
[*     ] (1 of 4) A start job is running for… Hardware Database (7s / 1min 33s) | |
[ 68.034234] systemd[1]: Finished Create Static Device Nodes in /dev. | |
[  OK  ] Finished Create Static Device Nodes in /dev. | |
[ 68.042766] audit: type=1130 audit(1655684407.791:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 68.059114] systemd[1]: Reached target Local File Systems (Pre). | |
[  OK  ] Reached target Local File Systems (Pre). | |
[ 68.072518] systemd[1]: Reached target Local File Systems. | |
[  OK  ] Reached target Local File Systems. | |
[ 68.198951] systemd[1]: Starting Restore /run/initramfs on shutdown... | |
Starting Restore /run/initramfs on shutdown... | |
[ 68.223491] systemd[1]: Condition check resulted in Import network configuration from initramfs being skipped. | |
[ 68.314356] systemd[1]: Starting Rebuild Dynamic Linker Cache... | |
Starting Rebuild Dynamic Linker Cache... | |
[ 68.357516] systemd[1]: Condition check resulted in Mark the need to relabel after reboot being skipped. | |
[ 68.379277] systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped. | |
[ 68.545526] systemd[1]: Starting Commit a transient machine-id on disk... | |
Starting Commit a transient machine-id on disk... | |
[ 68.642512] systemd[1]: Finished Restore /run/initramfs on shutdown. | |
[  OK  ] Finished Restore /run/initramfs on shutdown. | |
[ 68.652065] audit: type=1130 audit(1655684408.400:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dracut-shutdown comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 69.566924] systemd[1]: etc-machine\x2did.mount: Succeeded. | |
[ 69.621642] systemd[1]: Finished Commit a transient machine-id on disk. | |
[  OK  ] Finished Commit a transient machine-id on disk. | |
[ 69.636569] audit: type=1130 audit(1655684409.385:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 69.926137] systemd[1]: Finished Rebuild Dynamic Linker Cache. | |
[  OK  ] Finished Rebuild Dynamic Linker Cache. | |
[ 69.939655] audit: type=1130 audit(1655684409.688:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ 71.162282] systemd[1]: Finished udev Coldplug all Devices. | |
[  OK  ] Finished udev Coldplug all Devices. | |
[ 71.174820] audit: type=1130 audit(1655684410.922:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
[ ***  ] (1 of 2) A start job is running for…Hardware Database (16s / 1min 33s) | |
[ ***  ] (2 of 2) A start job is running for Journal Service (17s / 1min 31s) | |
[ 77.364384] audit: type=1305 audit(1655684417.112:74): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:syslogd_t:s0 res=1 | |
[ 77.791443] systemd[1]: Started Journal Service. | |
[  OK  ] Started Journal Service. | |
[ 77.809454] audit: type=1130 audit(1655684417.558:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Starting Flush Journal to Persistent Storage... | |
[ 78.684495] systemd-journald[507]: Received client request to flush runtime journal. | |
[  OK  ] Finished Flush Journal to Persistent Storage. | |
[ 79.346328] audit: type=1130 audit(1655684419.094:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Starting Create Volatile Files and Directories... | |
[ ***  ] (2 of 2) A start job is running for…s and Directories (21s / no limit) | |
[  OK  ] Finished Create Volatile Files and Directories. | |
[ 81.644526] audit: type=1130 audit(1655684421.393:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' | |
Starting Security Auditing Service... | |
Starting Rebuild Journal Catalog... | |
[ 82.823657] audit: type=1305 audit(1655684422.572:78): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 | |
[  OK  ] Finished Rebuild Journal Catalog. | |
[ ***  ] (2 of 2) A start job is running for… Auditing Service (25s / 1min 52s) | |
[***   ] (1 of 2) A start job is running for…Hardware Database (26s / 1min 33s) | |
[  OK  ] Started Security Auditing Service. | |
Starting Update UTMP about System Boot/Shutdown... | |
[  OK  ] Finished Rebuild Hardware Database. | |
[  OK  ] Finished Update UTMP about System Boot/Shutdown. | |
Starting udev Kernel Device Manager... | |
Starting Update is Completed... | |
[  OK  ] Finished Update is Completed. | |
[ ***  ] A start job is running for udev Kernel Device Manager (31s / 1min 57s) | |
[  OK  ] Started udev Kernel Device Manager. | |
[  OK  ] Reached target System Initialization. | |
[  OK  ] Started dnf makecache --timer. | |
[  OK  ] Started Discard unused blocks once a week. | |
[  OK  ] Started Daily Cleanup of Temporary Directories. | |
[  OK  ] Started daily update of the root trust anchor for DNSSEC. | |
[  OK  ] Reached target Timers. | |
[  OK  ] Listening on D-Bus System Message Bus Socket. | |
[  OK  ] Listening on SSSD Kerberos…ache Manager responder socket. | |
[  OK  ] Reached target Sockets. | |
[  OK  ] Reached target Basic System. | |
Starting NTP client/server... | |
Starting Initial cloud-init job (pre-networking)... | |
Starting OpenSSH ecdsa Server Key Generation... | |
Starting OpenSSH ed25519 Server Key Generation... | |
Starting OpenSSH rsa Server Key Generation... | |
Starting System Security Services Daemon... | |
Starting Home Area Manager... | |
[  OK  ] Started NTP client/server. | |
[  OK  ] Finished OpenSSH ecdsa Server Key Generation. | |
[  OK  ] Finished OpenSSH ed25519 Server Key Generation. | |
[  OK  ] Started System Security Services Daemon. | |
[  OK  ] Reached target User and Group Name Lookups. | |
Starting Login Service... | |
[  OK  ] Finished OpenSSH rsa Server Key Generation. | |
[  OK  ] Reached target sshd-keygen.target. | |
Starting D-Bus System Message Bus... | |
[  OK  ] Started D-Bus System Message Bus. | |
[ 155.350644] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt | |
[  OK  ] Started Home Area Manager. | |
[  OK  ] Started Login Service. | |
[ 175.683511] kvm: Nested Virtualization enabled | |
[ 175.683703] kvm: Nested Paging enabled | |
[ 176.049319] Decoding supported only on Scalable MCA processors. | |
Starting Hostname Service... | |
[ 183.084313] bochs-drm 0000:00:01.0: vgaarb: deactivate vga console | |
[ 183.134087] Console: switching to colour dummy device 80x25 | |
[ 183.206934] [drm] Found bochs VGA, ID 0xb0c0. | |
[ 183.207236] [drm] Framebuffer size 16384 kB @ 0xfb000000, mmio @ 0xfea10000. | |
[ 183.239507] [TTM] Zone kernel: Available graphics memory: 1013038 KiB | |
[ 183.239775] [TTM] Initializing pool allocator | |
[ 183.240872] [TTM] Initializing DMA pool allocator | |
[ 183.262295] [drm] Found EDID data blob. | |
[ 183.317152] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:01.0 on minor 0 | |
[ 183.459526] fbcon: bochs-drmdrmfb (fb0) is primary device | |
[ 183.698445] Console: switching to colour frame buffer device 128x48 | |
[ 183.837045] bochs-drm 0000:00:01.0: fb0: bochs-drmdrmfb frame buffer device | |
[  OK  ] Started Hostname Service. | |
[ 192.644591] cloud-init[605]: Cloud-init v. 19.4 running 'init-local' at Mon, 20 Jun 2022 00:21:45 +0000. Up 166.25 seconds. | |
[  OK  ] Finished Initial cloud-init job (pre-networking). | |
[  OK  ] Reached target Network (Pre). | |
Starting Network Manager... | |
[  OK  ] Started Network Manager. | |
[  OK  ] Reached target Network. | |
Starting Network Manager Wait Online... | |
Starting Initial cloud-ini… (metadata service crawler)... | |
Starting Network Manager Script Dispatcher Service... | |
[  OK  ] Started Network Manager Script Dispatcher Service. | |
[  OK  ] Finished Network Manager Wait Online. | |
[ 211.249669] cloud-init[696]: Cloud-init v. 19.4 running 'init' at Mon, 20 Jun 2022 00:22:28 +0000. Up 208.62 seconds. | |
[ 211.252987] cloud-init[696]: ci-info: ++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++ | |
[ 211.257780] cloud-init[696]: ci-info: +--------+------+--------------------------+-------------+--------+-------------------+ | |
[ 211.264377] cloud-init[696]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | | |
[ 211.273743] cloud-init[696]: ci-info: +--------+------+--------------------------+-------------+--------+-------------------+ | |
[ 211.276902] cloud-init[696]: ci-info: | eth0 | True | 172.17.0.16 | 255.255.0.0 | global | 02:42:ac:11:00:10 | | |
[ 211.283679] cloud-init[696]: ci-info: | eth0 | True | fe80::42:acff:fe11:10/64 | . | link | 02:42:ac:11:00:10 | | |
[ 211.286122] cloud-init[696]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | | |
[ 211.294989] cloud-init[696]: ci-info: | lo | True | ::1/128 | . | host | . | | |
[ 211.301436] cloud-init[696]: ci-info: +--------+------+--------------------------+-------------+--------+-------------------+ | |
[ 211.305397] cloud-init[696]: ci-info: +++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++ | |
[ 211.314464] cloud-init[696]: ci-info: +-------+-------------+------------+-------------+-----------+-------+ | |
[ 211.322486] cloud-init[696]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | | |
[ 211.330524] cloud-init[696]: ci-info: +-------+-------------+------------+-------------+-----------+-------+ | |
[ 211.334003] cloud-init[696]: ci-info: | 0 | 0.0.0.0 | 172.17.0.1 | 0.0.0.0 | eth0 | UG | | |
[ 211.340915] cloud-init[696]: ci-info: | 1 | 172.17.0.0 | 0.0.0.0 | 255.255.0.0 | eth0 | U | | |
[ 211.347664] cloud-init[696]: ci-info: +-------+-------------+------------+-------------+-----------+-------+ | |
[ 211.353410] cloud-init[696]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ | |
[ 211.357932] cloud-init[696]: ci-info: +-------+-------------+---------+-----------+-------+ | |
[ 211.366702] cloud-init[696]: ci-info: | Route | Destination | Gateway | Interface | Flags | | |
[ 211.373357] cloud-init[696]: ci-info: +-------+-------------+---------+-----------+-------+ | |
[ 211.376593] cloud-init[696]: ci-info: | 1 | fe80::/64 | :: | eth0 | U | | |
[ 211.382974] cloud-init[696]: ci-info: | 3 | local | :: | eth0 | U | | |
[ 211.386883] cloud-init[696]: ci-info: | 4 | ff00::/8 | :: | eth0 | U | | |
[ 211.393891] cloud-init[696]: ci-info: +-------+-------------+---------+-----------+-------+ | |
[ 228.053205] cloud-init[696]: Generating public/private rsa key pair. | |
[ 228.056391] cloud-init[696]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key | |
[ 228.058211] cloud-init[696]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub | |
[ 228.059580] cloud-init[696]: The key fingerprint is: | |
[ 228.060906] cloud-init[696]: SHA256:6XzsvI9ulumJLWDV361hGT8xZW7UyA5U4K+zEGolFIs root@instance | |
[ 228.066589] cloud-init[696]: The key's randomart image is: | |
[ 228.070742] cloud-init[696]: +---[RSA 3072]----+ | |
[ 228.072383] cloud-init[696]: | . .o+...| | |
[ 228.078655] cloud-init[696]: | . o .. o =| | |
[ 228.086587] cloud-init[696]: | E o. .o +.| | |
[ 228.089644] cloud-init[696]: | .... .ooo| | |
[ 228.096450] cloud-init[696]: | .S o. ..*o| | |
[ 228.098011] cloud-init[696]: | oo = ...=.o| | |
[ 228.102917] cloud-init[696]: | . .= +oo. o.| | |
[ 228.113240] cloud-init[696]: | ..*=+ o. | | |
[ 228.115570] cloud-init[696]: | .*Xoo | | |
[ 228.125228] cloud-init[696]: +----[SHA256]-----+ | |
[ 228.126985] cloud-init[696]: Generating public/private dsa key pair. | |
[ 228.128208] cloud-init[696]: Your identification has been saved in /etc/ssh/ssh_host_dsa_key | |
[ 228.134466] cloud-init[696]: Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub | |
[ 228.142757] cloud-init[696]: The key fingerprint is: | |
[ 228.150589] cloud-init[696]: SHA256:TphuMvMaRDyebSBD8MmbnfqPBcnGnPI0jpLGlF9Hseo root@instance | |
[ 228.152614] cloud-init[696]: The key's randomart image is: | |
[ 228.160591] cloud-init[696]: +---[DSA 1024]----+ | |
[ 228.163855] cloud-init[696]: |... . | | |
[ 228.170796] cloud-init[696]: | + o o | | |
[ 228.172422] cloud-init[696]: | * = o | | |
[ 228.175751] cloud-init[696]: | .% Ooo | | |
[ 228.182588] cloud-init[696]: | o+ ^o=.S | | |
[ 228.184749] cloud-init[696]: |o..Xo=.o | | |
[ 228.188936] cloud-init[696]: |ooo.BE+ . | | |
[ 228.195169] cloud-init[696]: |.. . X | | |
[ 228.196702] cloud-init[696]: | +oo | | |
[ 228.205330] cloud-init[696]: +----[SHA256]-----+ | |
[ 228.208756] cloud-init[696]: Generating public/private ecdsa key pair. | |
[ 228.215630] cloud-init[696]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key | |
[ 228.220951] cloud-init[696]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub | |
[ 228.231922] cloud-init[696]: The key fingerprint is: | |
[ 228.233762] cloud-init[696]: SHA256:ddm+K9kzJLXQf07Ev01eZPBb9SXPefu+LH7YhxESuAQ root@instance | |
[ 228.241696] cloud-init[696]: The key's randomart image is: | |
[ 228.243487] cloud-init[696]: +---[ECDSA 256]---+ | |
[ 228.247896] cloud-init[696]: | E. . | | |
[ 228.255854] cloud-init[696]: | o .oo o| | |
[ 228.257762] cloud-init[696]: | ...oo.B=| | |
[ 228.277929] cloud-init[696]: | ...o.+.%| | |
[ 228.292897] cloud-init[696]: | S +.B*| | |
[ 228.299672] cloud-init[696]: | . ++B| | |
[ 228.307614] cloud-init[696]: | =+B*| | |
[ 228.315523] cloud-init[696]: | oo*=*| | |
[ 228.317793] cloud-init[696]: | .o+*+| | |
[ 228.325756] cloud-init[696]: +----[SHA256]-----+ | |
[ 228.327469] cloud-init[696]: Generating public/private ed25519 key pair. | |
[ 228.332977] cloud-init[696]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key | |
[ 228.342000] cloud-init[696]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub | |
[ 228.349997] cloud-init[696]: The key fingerprint is: | |
[ 228.359360] cloud-init[696]: SHA256:KZ0gHgUJF+YjMFqFGvqEuY83YMUdpK+dlfUVRmkzfpQ root@instance | |
[ 228.362776] cloud-init[696]: The key's randomart image is: | |
[ 228.367659] cloud-init[696]: +--[ED25519 256]--+ | |
[ 228.374724] cloud-init[696]: |o o+B=. .+. .| | |
[ 228.378868] cloud-init[696]: |o+.+oo .=.E | | |
[ 228.385407] cloud-init[696]: |o=o.* o . o.+ | | |
[ 228.389592] cloud-init[696]: |= .=.= ooo. .. . | | |
[ 228.396464] cloud-init[696]: | +. ...oS . . | | |
[ 228.400734] cloud-init[696]: |o.. o o. | | |
[ 228.408527] cloud-init[696]: |.+ . o | | |
[ 228.417773] cloud-init[696]: |. + | | |
[ 228.419556] cloud-init[696]: | . . | | |
[ 228.420977] cloud-init[696]: +----[SHA256]-----+ | |
[  OK  ] Finished Initial cloud-ini…ob (metadata service crawler). | |
[  OK  ] Reached target Cloud-config availability. | |
[  OK  ] Reached target Network is Online. | |
Starting Apply the settings specified in cloud-config... | |
Starting OpenSSH server daemon... | |
Starting Permit User Sessions... | |
[  OK  ] Finished Permit User Sessions. | |
[  OK  ] Started Getty on tty1. | |
[  OK  ] Started Serial Getty on ttyS0. | |
[  OK  ] Reached target Login Prompts. | |
[  OK  ] Started OpenSSH server daemon. | |
[  OK  ] Reached target Multi-User System. | |
Starting Update UTMP about System Runlevel Changes... | |
[  OK  ] Finished Update UTMP about System Runlevel Changes. | |
Fedora 32 (Cloud Edition) | |
Kernel 5.6.6-300.fc32.x86_64 on an x86_64 (ttyS0) | |
SSH host key: SHA256:kjxnLTfBHJQLAcnMjXRITIA08S0J79ZvSXrW0v4OA6w (RSA) | |
SSH host key: SHA256:Twuyhfy53eQToF+rt/loDZlCR9aXpkvgbw1INRq2aHc (ECDSA) | |
SSH host key: SHA256:IsYIQTIPPrlv+yomMB8FKR1fq7Z68gbp42+29e4OJS4 (ED25519) | |
eth0: 172.17.0.16 fe80::42:acff:fe11:10 | |
instance login: [ 250.922800] cloud-init[763]: Cloud-init v. 19.4 running 'modules:config' at Mon, 20 Jun 2022 00:23:02 +0000. Up 242.91 seconds. | |
ci-info: +++Authorized keys from /home/molecule/.ssh/authorized_keys for user molecule++++ | |
ci-info: +---------+-------------------------------------------------+---------+---------+ | |
ci-info: | Keytype | Fingerprint (md5) | Options | Comment | | |
ci-info: +---------+-------------------------------------------------+---------+---------+ | |
ci-info: | ssh-rsa | 77:db:f2:17:d6:4f:8e:a8:19:3f:07:1e:f1:eb:40:0d | - | - | | |
ci-info: +---------+-------------------------------------------------+---------+---------+ | |
<14>Jun 20 00:23:34 ec2: | |
<14>Jun 20 00:23:35 ec2: ############################################################# | |
<14>Jun 20 00:23:35 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS----- | |
<14>Jun 20 00:23:35 ec2: 1024 SHA256:TphuMvMaRDyebSBD8MmbnfqPBcnGnPI0jpLGlF9Hseo root@instance (DSA) | |
<14>Jun 20 00:23:35 ec2: 256 SHA256:ddm+K9kzJLXQf07Ev01eZPBb9SXPefu+LH7YhxESuAQ root@instance (ECDSA) | |
<14>Jun 20 00:23:36 ec2: 256 SHA256:KZ0gHgUJF+YjMFqFGvqEuY83YMUdpK+dlfUVRmkzfpQ root@instance (ED25519) | |
<14>Jun 20 00:23:36 ec2: 3072 SHA256:6XzsvI9ulumJLWDV361hGT8xZW7UyA5U4K+zEGolFIs root@instance (RSA) | |
<14>Jun 20 00:23:36 ec2: -----END SSH HOST KEY FINGERPRINTS----- | |
<14>Jun 20 00:23:36 ec2: ############################################################# | |
-----BEGIN SSH HOST KEY KEYS----- | |
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAifR1N6BXi1FXOnfex6xZoMyJYlZ/+DduZkcX/DKT0FVSDla4eOJsW7eHFJAgVol9DfVnYnni0Rrqq+xn/wW/0= root@instance | |
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB4b1KZNAYqGeGZUTl779GZSMKdCphTac2B4lznUbvxm root@instance | |
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZCfdLBkI388MYx9r3/0l0T/Xvlh4AFWbeCZoTLhcaWZpPxMazf0r1VuxR4MEHKNiuIRJp06+jjA6dvaBEq+v5WhOhwTeGmhoGUfk5CRlUMTWDwK6O/OngHOKi4CpJwX0WrO3ul6Lkb449KtQjHyC15wo1Ju7FHtEJaPirbQfy8IAHY7+KV35pOT2Njb11YLBOsMWOiQipTPExiGbrB7ds445TKSuaKesa1iIcTNNnCCxUcalfeeYeQPuWQjRA9Gry+jlW/qR/TChAfhM/OMw9odpjISC/R8zXJkiSjeSks+wn//WwsYTxd+SW0m2s5ErS3rP/gm156oYzX5KMJ5B1tmWniMXaX7BKYJKqb+L6pT94HWGlDjAmJ5mn8qWuWw21fBaNIvUymZoc93//Qsh9DfspPHWpPQ8IBkoV77HVY2zXz9Z8xJl79LUe9LLHgDIL/EzVjNeF0tl3L3XUPEpUELjpmDcXuexHLZom2EUPwAwz6BDEurmCo2IoK0y5iTE= root@instance | |
-----END SSH HOST KEY KEYS----- | |
[ 278.747618] cloud-init[836]: Cloud-init v. 19.4 running 'modules:final' at Mon, 20 Jun 2022 00:23:29 +0000. Up 269.34 seconds. | |
[ 278.846704] cloud-init[836]: Cloud-init v. 19.4 finished at Mon, 20 Jun 2022 00:23:37 +0000. Datasource DataSourceNoCloud [seed=/dev/vdb][dsmode=net]. Up 277.52 seconds | |
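Note: the DataSourceNoCloud line above shows that cloud-init read its user-data from the NoCloud seed disk exposed to the guest as /dev/vdb, i.e. the cloud-init volume attached to the VMI (the same volume the virt-handler log below calls the "cloudinit volume"). If you want to inspect exactly what user-data the Molecule scenario passed in, a sketch assuming the VMI still exists and is named "instance" in the default namespace, as in this log:

    kubectl get vmi instance -n default -o yaml

and look for the cloudInitNoCloud entry under .spec.volumes.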
You were disconnected from the console. This has one of the following reasons: | |
- another user connected to the console of the target vm | |
- network issues | |
websocket: close 1006 (abnormal closure): unexpected EOF | |
Script done, file is typescript |
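Note: the console capture ends here because the VMI is torn down at the end of the Molecule run; the "websocket: close 1006" message is the expected result of that disconnect, not a test failure. A capture like the one above can be reproduced by wrapping virtctl in script(1), a sketch assuming a VMI named "instance" and virtctl on the PATH:

    script -c "virtctl console instance" typescript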
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","hostname":"minikube","level":"info","pos":"virt-handler.go:194","timestamp":"2022-06-20T00:14:46.839928Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: W0620 00:14:46.842013 7737 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:46.857054Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:46.857222Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:46.857326Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-06-20T00:14:46.897739Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-06-20T00:14:46.897777Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"error","msg":"something happened during opening kvm file: open /dev/kvm: no such file or directory","pos":"kvm-caps-info-plugin_amd64.go:226","timestamp":"2022-06-20T00:14:46.898461Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-06-20T00:14:46.898580Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:46.898719Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:46.898748Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Starting domain stats collector: node name=minikube","pos":"prometheus.go:446","timestamp":"2022-06-20T00:14:46.899992Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"metrics: max concurrent requests=3","pos":"virt-handler.go:477","timestamp":"2022-06-20T00:14:46.900244Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"certificate with common name 'kubevirt.io:system:client:virt-handler' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:14:46.900936Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"STARTING informer vmiInformer-sources","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:46.900340Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"STARTING informer vmiInformer-targets","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:46.901164Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"STARTING informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:46.901200Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:46.901260Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:46.901294Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"certificate with common name 'kubevirt.io:system:node:virt-handler' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:14:46.908244Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:46.989055Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:46.989316Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:46.989551Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:47.098928Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:47.099140Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:47.099300Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:47.099368Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Starting virt-handler controller.","pos":"vm.go:1298","timestamp":"2022-06-20T00:14:47.190673Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Starting a device plugin for device: kvm","pos":"device_controller.go:56","timestamp":"2022-06-20T00:14:47.190797Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Starting a device plugin for device: tun","pos":"device_controller.go:56","timestamp":"2022-06-20T00:14:47.190827Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Starting a device plugin for device: vhost-net","pos":"device_controller.go:56","timestamp":"2022-06-20T00:14:47.190842Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Starting a device plugin for device: sev","pos":"device_controller.go:56","timestamp":"2022-06-20T00:14:47.190856Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"refreshed device plugins for permitted/forbidden host devices","pos":"device_controller.go:292","timestamp":"2022-06-20T00:14:47.190881Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"enabled device-plugins for: []","pos":"device_controller.go:293","timestamp":"2022-06-20T00:14:47.190895Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"disabled device-plugins for: []","pos":"device_controller.go:294","timestamp":"2022-06-20T00:14:47.190905Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"refreshed device plugins for permitted/forbidden host devices","pos":"device_controller.go:292","timestamp":"2022-06-20T00:14:47.190923Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"enabled device-plugins for: []","pos":"device_controller.go:293","timestamp":"2022-06-20T00:14:47.190935Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"disabled device-plugins for: []","pos":"device_controller.go:294","timestamp":"2022-06-20T00:14:47.190944Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:47.200265Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:47.200506Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:14:47.214510Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:14:47.214573Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"kvm device plugin started","pos":"generic_device.go:158","timestamp":"2022-06-20T00:14:47.216899Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"warning","msg":"device '/dev/kvm' is not present, the device plugin can't expose it.","pos":"generic_device.go:304","timestamp":"2022-06-20T00:14:47.217138Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"sev device plugin started","pos":"generic_device.go:158","timestamp":"2022-06-20T00:14:47.228590Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"warning","msg":"device '/dev/sev' is not present, the device plugin can't expose it.","pos":"generic_device.go:304","timestamp":"2022-06-20T00:14:47.228847Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"device '/dev/sev' is present.","pos":"generic_device.go:307","timestamp":"2022-06-20T00:14:47.228885Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"tun device plugin started","pos":"generic_device.go:158","timestamp":"2022-06-20T00:14:47.231777Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"device '/dev/net/tun' is present.","pos":"generic_device.go:307","timestamp":"2022-06-20T00:14:47.231949Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"device '/dev/kvm' is present.","pos":"generic_device.go:307","timestamp":"2022-06-20T00:14:47.233278Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"vhost-net device plugin started","pos":"generic_device.go:158","timestamp":"2022-06-20T00:14:47.304835Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"device '/dev/vhost-net' is present.","pos":"generic_device.go:307","timestamp":"2022-06-20T00:14:47.305142Z"} | |
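Note: the lines above show virt-handler starting its device plugins (kvm, tun, vhost-net, sev) and probing the corresponding host devices; '/dev/kvm' is first reported missing and shortly afterwards reported present. To confirm which of these the node actually advertises as allocatable extended resources, a sketch assuming the node is named minikube as in this log (KubeVirt publishes them under the devices.kubevirt.io/ prefix):

    kubectl get node minikube -o jsonpath='{.status.allocatable}'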
kubevirt/virt-handler-49ttn[virt-handler]: W0620 00:15:00.289516 7737 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.407939Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:15:20.409130Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:15:20.409212Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"refreshed device plugins for permitted/forbidden host devices","pos":"device_controller.go:292","timestamp":"2022-06-20T00:15:20.408631Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"enabled device-plugins for: []","pos":"device_controller.go:293","timestamp":"2022-06-20T00:15:20.409494Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"disabled device-plugins for: []","pos":"device_controller.go:294","timestamp":"2022-06-20T00:15:20.409548Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.444165Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"refreshed device plugins for permitted/forbidden host devices","pos":"device_controller.go:292","timestamp":"2022-06-20T00:15:25.450904Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"enabled device-plugins for: []","pos":"device_controller.go:293","timestamp":"2022-06-20T00:15:25.452892Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"disabled device-plugins for: []","pos":"device_controller.go:294","timestamp":"2022-06-20T00:15:25.452968Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-06-20T00:15:25.451671Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-06-20T00:15:25.456420Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Generic Allocate: resourceName: tun","pos":"generic_device.go:244","timestamp":"2022-06-20T00:18:31.856150Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Generic Allocate: request: [\u0026ContainerAllocateRequest{DevicesIDs:[tun11],}]","pos":"generic_device.go:245","timestamp":"2022-06-20T00:18:31.856249Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"instance","namespace":"default","pos":"vm.go:1553","timestamp":"2022-06-20T00:18:46.349222Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:46.349713Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:46.350156Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"instance","namespace":"default","pos":"vm.go:1553","timestamp":"2022-06-20T00:18:46.376942Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:46.377292Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:46.377324Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain does not exist","name":"instance","namespace":"default","pos":"vm.go:1553","timestamp":"2022-06-20T00:18:47.349712Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Bind mounting container disk at /var/lib/docker/overlay2/9a21a86f77d9475b4c53b9880213d326d4219fcc1c77190a3ef23791ed1da8c7/merged/disk/downloaded to /var/lib/kubelet/pods/c82e2a23-8bb4-45f8-a2b3-c33b5d2618a2/volumes/kubernetes.io~empty-dir/container-disks/disk_0.img","name":"instance","namespace":"default","pos":"mount.go:274","timestamp":"2022-06-20T00:18:47.361175Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"mounting kernel artifacts","name":"instance","namespace":"default","pos":"mount.go:407","timestamp":"2022-06-20T00:18:47.468223Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"kernel boot not defined - nothing to mount","name":"instance","namespace":"default","pos":"mount.go:410","timestamp":"2022-06-20T00:18:47.468286Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"updated MAC for eth0-nic interface: old: 02:42:ac:11:00:10 -\u003e new: 02:42:ac:9b:ab:f8","pos":"common.go:347","timestamp":"2022-06-20T00:18:47.534415Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"[ContextExecutor]: Executing... Switching from original () to desired () context","pos":"context_executor.go:75","timestamp":"2022-06-20T00:18:47.563404Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"[ContextExecutor]: Execution ended successfully","pos":"context_executor.go:90","timestamp":"2022-06-20T00:18:47.627247Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Created tap device: tap0 in PID: 12563","pos":"common.go:418","timestamp":"2022-06-20T00:18:47.627305Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"Successfully configured tap device: tap0","pos":"common.go:457","timestamp":"2022-06-20T00:18:47.629036Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Accepted new notify pipe connection for vmi","name":"instance","namespace":"default","pos":"vm.go:348","timestamp":"2022-06-20T00:18:47.938605Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Domain is in state Shutoff reason Unknown","name":"instance","namespace":"default","pos":"vm.go:2728","timestamp":"2022-06-20T00:18:47.947302Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Domain is in state Paused reason StartingUp","name":"instance","namespace":"default","pos":"vm.go:2758","timestamp":"2022-06-20T00:18:48.226073Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.695717Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.695784Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Scheduled | Domain status: Paused, reason: StartingUp","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:18:48.695903Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.696048Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.696089Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Domain is in state Running reason Unknown","name":"instance","namespace":"default","pos":"vm.go:2758","timestamp":"2022-06-20T00:18:48.708812Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Running, reason: Unknown","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:18:48.734405Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.740358Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.740418Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"error","msg":"Updating the VirtualMachineInstance status failed.","name":"instance","namespace":"default","pos":"vm.go:1698","reason":"Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"instance\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:18:48.841012Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"re-enqueuing VirtualMachineInstance default/instance","pos":"vm.go:1344","reason":"Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"instance\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:18:48.841087Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Running, reason: Unknown","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:18:48.841334Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.853944Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.853998Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Running, reason: Unknown","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:18:48.876748Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.892174Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:18:48.892226Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"CA update in configmap kubevirt/kubevirt-ca detected. Updating from resource version -1 to 1178","pos":"ca-manager.go:96","timestamp":"2022-06-20T00:18:49.263574Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Websocket connection upgraded","name":"instance","namespace":"default","pos":"console.go:220","timestamp":"2022-06-20T00:18:49.282146Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Connecting to proc/12563/root/var/run/kubevirt-private/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/virt-serial0","name":"instance","namespace":"default","pos":"console.go:221","timestamp":"2022-06-20T00:18:49.282200Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Connected to proc/12563/root/var/run/kubevirt-private/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/virt-serial0","name":"instance","namespace":"default","pos":"console.go:231","timestamp":"2022-06-20T00:18:49.284981Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"resyncing virt-launcher domains","pos":"cache.go:384","timestamp":"2022-06-20T00:19:46.986808Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Running, reason: Unknown","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:19:46.994102Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:19:47.011600Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:19:47.011650Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Running, reason: Unknown","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:24:17.803075Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"ACPI feature not available, killing deleted VirtualMachineInstance instance","name":"instance","namespace":"default","pos":"vm.go:2006","timestamp":"2022-06-20T00:24:17.808827Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"error","msg":"error encountered reading from unix socket","name":"instance","namespace":"default","pos":"console.go:236","timestamp":"2022-06-20T00:24:17.897751Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"error","msg":"error encountered reading from client (virt-api) websocket","name":"instance","namespace":"default","pos":"console.go:242","reason":"read tcp 172.17.0.13:8186-\u003e172.17.0.9:51042: use of closed network connection","timestamp":"2022-06-20T00:24:17.898002Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:24:18.038953Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:24:18.039028Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Running, reason: Unknown","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:24:18.039191Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"ACPI feature not available, killing deleted VirtualMachineInstance instance","name":"instance","namespace":"default","pos":"vm.go:2006","timestamp":"2022-06-20T00:24:18.040386Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:24:18.057000Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:24:18.057074Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Domain is in state Shutoff reason Destroyed","name":"instance","namespace":"default","pos":"vm.go:2758","timestamp":"2022-06-20T00:24:18.061789Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Running | Domain status: Shutoff, reason: Destroyed","name":"instance","namespace":"default","pos":"vm.go:1551","timestamp":"2022-06-20T00:24:18.062112Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Signaled deletion for instance","name":"instance","namespace":"default","pos":"vm.go:2074","timestamp":"2022-06-20T00:24:18.074703Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of boot volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:24:18.095673Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"migration is block migration because of cloudinit volume","name":"instance","namespace":"default","pos":"vm.go:2215","timestamp":"2022-06-20T00:24:18.095755Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Domain is in state NoState reason NonExistent","name":"instance","namespace":"default","pos":"vm.go:2758","timestamp":"2022-06-20T00:24:18.099194Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Domain is marked for deletion","name":"instance","namespace":"default","pos":"vm.go:2762","timestamp":"2022-06-20T00:24:18.099254Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Succeeded | Domain does not exist","name":"instance","namespace":"default","pos":"vm.go:1553","timestamp":"2022-06-20T00:24:18.151647Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Performing final local cleanup for vmi with uid ea7e77f5-c65e-4a05-8537-bb5d108bb5b0","name":"instance","namespace":"default","pos":"vm.go:1809","timestamp":"2022-06-20T00:24:18.151727Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Found container disk mount entries","name":"instance","namespace":"default","pos":"mount.go:361","timestamp":"2022-06-20T00:24:18.151782Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Looking to see if containerdisk is mounted at path /var/lib/kubelet/pods/c82e2a23-8bb4-45f8-a2b3-c33b5d2618a2/volumes/kubernetes.io~empty-dir/container-disks/disk_0.img","name":"instance","namespace":"default","pos":"mount.go:364","timestamp":"2022-06-20T00:24:18.151850Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"unmounting container disk at path /var/lib/kubelet/pods/c82e2a23-8bb4-45f8-a2b3-c33b5d2618a2/volumes/kubernetes.io~empty-dir/container-disks/disk_0.img","name":"instance","namespace":"default","pos":"mount.go:368","timestamp":"2022-06-20T00:24:18.152025Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Cleaning up remaining hotplug volumes","name":"instance","namespace":"default","pos":"mount.go:708","timestamp":"2022-06-20T00:24:18.225433Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"instance","namespace":"default","pos":"vm.go:1843","timestamp":"2022-06-20T00:24:18.226252Z","uid":""} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"VMI is in phase: Succeeded | Domain does not exist","name":"instance","namespace":"default","pos":"vm.go:1553","timestamp":"2022-06-20T00:24:18.226348Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Performing final local cleanup for vmi with uid ea7e77f5-c65e-4a05-8537-bb5d108bb5b0","name":"instance","namespace":"default","pos":"vm.go:1809","timestamp":"2022-06-20T00:24:18.226381Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"No container disk mount entries found to unmount","name":"instance","namespace":"default","pos":"mount.go:357","timestamp":"2022-06-20T00:24:18.226431Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"Cleaning up remaining hotplug volumes","name":"instance","namespace":"default","pos":"mount.go:708","timestamp":"2022-06-20T00:24:18.226450Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"instance","namespace":"default","pos":"vm.go:1843","timestamp":"2022-06-20T00:24:18.226498Z","uid":""} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"closing notify pipe listener for vmi","name":"instance","namespace":"default","pos":"vm.go:304","timestamp":"2022-06-20T00:24:18.226540Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"","level":"info","msg":"gracefully closed notify pipe connection for vmi","name":"instance","namespace":"default","pos":"vm.go:365","timestamp":"2022-06-20T00:24:19.105285Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"VMI does not exist | Domain does not exist","pos":"vm.go:1558","timestamp":"2022-06-20T00:24:20.138091Z"} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"VirtualMachineInstance","level":"info","msg":"Performing final local cleanup for vmi with uid ","name":"instance","namespace":"default","pos":"vm.go:1809","timestamp":"2022-06-20T00:24:20.138151Z","uid":""} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","kind":"Domain","level":"info","msg":"Removing domain from cache during final cleanup","name":"instance","namespace":"default","pos":"vm.go:1843","timestamp":"2022-06-20T00:24:20.138185Z","uid":""} | |
kubevirt/virt-handler-49ttn[virt-handler]: {"component":"virt-handler","level":"info","msg":"resyncing virt-launcher domains","pos":"cache.go:384","timestamp":"2022-06-20T00:24:46.984165Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: W0620 00:12:53.960037 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: W0620 00:12:53.960992 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"we are on kubernetes","pos":"application.go:236","timestamp":"2022-06-20T00:12:54.027930Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"servicemonitor is not defined","pos":"application.go:252","timestamp":"2022-06-20T00:12:54.050515Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: I0620 00:12:55.179831 1 request.go:665] Waited for 1.126192874s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta2?timeout=32s | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"prometheusrule is not defined","pos":"application.go:267","timestamp":"2022-06-20T00:12:56.183102Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Operator image: quay.io/kubevirt/virt-operator:v0.52.0","pos":"application.go:281","timestamp":"2022-06-20T00:12:56.671403Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorMutatingWebhookInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.672136Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer FakeOperatorPrometheusRuleInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.672880Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.672895Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.672906Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorClusterRoleInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.672918Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorPodsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.711391Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorPodDisruptionBudgetInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712795Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer FakeOperatorSCC","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712813Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorRoleBindingInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712825Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorServiceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712839Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorRoleInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712849Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorCRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712867Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorValidatingWebhookInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712888Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer namespaceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712897Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer extensionsConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712912Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorServiceAccountInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712930Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorClusterRoleBindingInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712939Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorAPIServiceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712954Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer installStrategyConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712970Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer installStrategyJobsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712980Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer secretsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.712995Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.713011Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer FakeOperatorServiceMonitor","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.713028Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorDeploymentInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.713039Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorDaemonSetInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.713052Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:406","timestamp":"2022-06-20T00:12:56.813591Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: I0620 00:12:56.813648 1 leaderelection.go:248] attempting to acquire leader lease kubevirt/virt-operator... | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: I0620 00:12:56.825213 1 leaderelection.go:258] successfully acquired lease kubevirt/virt-operator | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Started leading","pos":"application.go:386","timestamp":"2022-06-20T00:12:56.835049Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Starting KubeVirt controller.","pos":"kubevirt.go:545","timestamp":"2022-06-20T00:12:56.835107Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:13:06.796381Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:13:06.796559Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Install strategy config map not loaded. reason: no install strategy configmap found for version v0.52.0 with registry quay.io/kubevirt","pos":"kubevirt.go:879","timestamp":"2022-06-20T00:13:06.797135Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Created job to generate install strategy configmap for version v0.52.0","pos":"kubevirt.go:946","timestamp":"2022-06-20T00:13:06.803694Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '807'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:06.812402Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:13:11.808316Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:13:11.808623Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Install strategy config map not loaded. reason: no install strategy configmap found for version v0.52.0 with registry quay.io/kubevirt","pos":"kubevirt.go:879","timestamp":"2022-06-20T00:13:11.808903Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"error","msg":"Waiting on install strategy to be posted from job kubevirt-72d62fe25180ebc296d7a30b4ba2508933d9c2fe-jobk2jkc","name":"kubevirt-72d62fe25180ebc296d7a30b4ba2508933d9c2fe-jobk2jkc","namespace":"kubevirt","pos":"kubevirt.go:931","timestamp":"2022-06-20T00:13:11.808929Z","uid":"57b82828-0632-46b0-9a54-c54aaa0ef81f"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:13:18.474801Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:13:18.474892Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ServiceAccount loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.488718Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ServiceAccount loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.488857Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ServiceAccount loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.488951Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.489253Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.489866Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.490494Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.490930Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.491623Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.492703Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.493106Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.493397Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.493555Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.493704Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.493850Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.493995Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Role loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.494191Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Role loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.494331Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"RoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.494513Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"RoleBinding loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.494668Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.542140Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.558247Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.622247Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.683908Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.685502Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.686905Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.719476Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.720725Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.721942Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.723118Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.750084Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"CustomResourceDefinition loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.751383Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Service loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.751758Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Service loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.751933Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Service loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752091Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Secret loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752208Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Secret loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752285Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Secret loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752364Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Secret loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752436Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Secret loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752512Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Secret loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.752585Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ValidatingWebhookConfiguration loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.753097Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ValidatingWebhookConfiguration loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.755260Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"MutatingWebhookConfiguration loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.755904Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"APIService loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.756159Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"APIService loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.756309Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Deployment loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.757483Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Deployment loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.758181Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"DaemonSet loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.759684Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"SecurityContextConstraints loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.760165Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"SecurityContextConstraints loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.760470Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ConfigMap loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.760613Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ConfigMap loaded","pos":"strategy.go:712","timestamp":"2022-06-20T00:13:18.760682Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Loaded install strategy for kubevirt version v0.52.0 into cache","pos":"kubevirt.go:875","timestamp":"2022-06-20T00:13:18.760696Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Garbage collected completed install strategy job","name":"kubevirt-72d62fe25180ebc296d7a30b4ba2508933d9c2fe-jobk2jkc","namespace":"kubevirt","pos":"kubevirt.go:795","timestamp":"2022-06-20T00:13:18.772192Z","uid":"57b82828-0632-46b0-9a54-c54aaa0ef81f"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-api to roll over to latest version","pos":"reconcile.go:293","timestamp":"2022-06-20T00:13:18.777118Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:13:18.777158Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:13:18.777172Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Validation webhook created for image v0.52.0 and registry quay.io/kubevirt","pos":"reconcile.go:409","timestamp":"2022-06-20T00:13:18.982696Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachineinstances.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:19.317743Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachineinstancepresets.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:19.536710Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachineinstancereplicasets.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:19.880872Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachines.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:20.282100Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachineinstancemigrations.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:20.552499Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachinesnapshots.snapshot.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:20.793458Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachinesnapshotcontents.snapshot.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:21.043232Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachinerestores.snapshot.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:21.224301Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachineflavors.flavor.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:21.268041Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachineclusterflavors.flavor.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:21.319616Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd virtualmachinepools.pool.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:21.535089Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"crd migrationpolicies.migrations.kubevirt.io created","pos":"crds.go:94","timestamp":"2022-06-20T00:13:21.641089Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"serviceaccount kubevirt-apiserver created","pos":"core.go:447","timestamp":"2022-06-20T00:13:21.673936Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"serviceaccount kubevirt-controller created","pos":"core.go:447","timestamp":"2022-06-20T00:13:21.762045Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"serviceaccount kubevirt-handler created","pos":"core.go:447","timestamp":"2022-06-20T00:13:21.778023Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt.io:default created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:21.867900Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt.io:admin created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:21.912819Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt.io:edit created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:22.099820Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt.io:view created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:22.282788Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt-apiserver created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:22.484042Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt-controller created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:22.984203Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRole kubevirt-handler created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:23.262433Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding kubevirt.io:default created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:23.544977Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding kubevirt-apiserver created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.002859Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding kubevirt-apiserver-auth-delegator created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.175330Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding kubevirt-controller created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.244556Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"ClusterRoleBinding kubevirt-handler created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.292473Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Role kubevirt-apiserver created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.349686Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Role kubevirt-handler created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.397545Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"RoleBinding kubevirt-apiserver created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.427911Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"RoleBinding kubevirt-handler created","pos":"rbac.go:59","timestamp":"2022-06-20T00:13:24.472032Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"mutatingwebhoookconfiguration virt-api-mutator created","pos":"admissionregistration.go:311","timestamp":"2022-06-20T00:13:25.005892Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"poddisruptionbudget virt-api-pdb created","pos":"apps.go:351","timestamp":"2022-06-20T00:13:29.982546Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:13:29.982618Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1233'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:30.027088Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"error","msg":"Could not patch the KubeVirt finalizers.","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:683","reason":"Internal error occurred: failed calling webhook \"kubevirt-update-validator.kubevirt.io\": failed to call webhook: Post \"https://kubevirt-operator-webhook.kubevirt.svc:443/kubevirt-validate-update?timeout=10s\": dial tcp 10.110.244.254:443: i/o timeout","timestamp":"2022-06-20T00:13:40.023724Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"error","msg":"reenqueuing KubeVirt kubevirt/kubevirt","pos":"kubevirt.go:593","reason":"Internal error occurred: failed calling webhook \"kubevirt-update-validator.kubevirt.io\": failed to call webhook: Post \"https://kubevirt-operator-webhook.kubevirt.svc:443/kubevirt-validate-update?timeout=10s\": dial tcp 10.110.244.254:443: i/o timeout","timestamp":"2022-06-20T00:13:40.023786Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:13:40.023836Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:13:40.023901Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-api to roll over to latest version","pos":"reconcile.go:293","timestamp":"2022-06-20T00:13:40.026925Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:13:40.026978Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:13:40.026992Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:40.047926Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:40.050630Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:40.053409Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:13:40.065709Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1293'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:40.085897Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:13:45.024705Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:13:45.024819Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-api to roll over to latest version","pos":"reconcile.go:293","timestamp":"2022-06-20T00:13:45.028036Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:13:45.028075Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:13:45.028103Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:45.046171Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:45.049913Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:45.054340Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:13:45.065679Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:13:54.422616Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:13:54.436866Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-api to roll over to latest version","pos":"reconcile.go:293","timestamp":"2022-06-20T00:13:54.443625Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:13:54.443660Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:13:54.443671Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:54.463862Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:54.468022Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:13:54.472174Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:13:54.554843Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:14:15.071583Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:14:15.071705Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:14:15.074901Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:14:15.074935Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Temporary blocking validation webhook virt-operator-tmp-webhooksrchc deleted","pos":"delete.go:85","timestamp":"2022-06-20T00:14:15.085903Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:15.107584Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:15.111005Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:15.114144Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"poddisruptionbudget virt-controller-pdb created","pos":"apps.go:351","timestamp":"2022-06-20T00:14:15.146635Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Patching namespace kubevirt with {\"openshift.io/cluster-monitoring\":\"true\"}","pos":"core.go:67","timestamp":"2022-06-20T00:14:15.164759Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"kubevirt namespace labels patched","pos":"core.go:78","timestamp":"2022-06-20T00:14:15.175683Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:14:15.181878Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:15.370484Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:14:20.086477Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:14:20.087645Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:14:20.103330Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:14:20.103385Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:20.173223Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:20.336763Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:20.341523Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:14:20.353512Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:14:20.353545Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:14:26.610510Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:14:26.610634Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on deployment virt-controller to roll over to latest version","pos":"reconcile.go:305","timestamp":"2022-06-20T00:14:26.614027Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:14:26.614062Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:26.623731Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:26.628303Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:26.631271Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:14:26.656615Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:14:26.656654Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:14:38.874846Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:14:38.874983Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:14:38.878146Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:38.887361Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:38.891475Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:38.894089Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:14:38.905822Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:14:38.905862Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:14:49.080642Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:14:49.081163Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:14:49.084572Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:49.098282Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:49.106829Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:14:49.110500Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:14:49.130736Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:14:49.130776Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:15:00.310897Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:15:00.311082Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"reconcile.go:317","timestamp":"2022-06-20T00:15:00.314717Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:00.325264Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:00.330324Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:00.333260Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:15:00.344460Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Processed deployment for this round","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1072","timestamp":"2022-06-20T00:15:00.344494Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:15:20.326846Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:15:20.326965Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:20.342655Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:20.348014Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:20.355080Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:15:20.371880Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"All KubeVirt resources created","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1060","timestamp":"2022-06-20T00:15:20.379264Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"All KubeVirt components ready","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1064","timestamp":"2022-06-20T00:15:20.379320Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.413226Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:15:25.420068Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:15:25.420206Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.450459Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:25.458068Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:25.465327Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:25.470587Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:15:25.488446Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"All KubeVirt resources created","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1060","timestamp":"2022-06-20T00:15:25.534440Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"All KubeVirt components ready","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1064","timestamp":"2022-06-20T00:15:25.534514Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:641","timestamp":"2022-06-20T00:15:30.451860Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"Handling deployment","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:995","timestamp":"2022-06-20T00:15:30.452241Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-prometheus-metrics patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:30.488250Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service virt-api patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:30.495500Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"service kubevirt-operator-webhook patched","pos":"core.go:141","timestamp":"2022-06-20T00:15:30.500165Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","level":"info","msg":"Kubevirt namespace (kubevirt) labels are in sync","pos":"core.go:57","timestamp":"2022-06-20T00:15:30.517646Z"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"All KubeVirt resources created","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1060","timestamp":"2022-06-20T00:15:30.525701Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-operator-564f568975-9kbf5[virt-operator]: {"component":"virt-operator","kind":"","level":"info","msg":"All KubeVirt components ready","name":"kubevirt","namespace":"kubevirt","pos":"kubevirt.go:1064","timestamp":"2022-06-20T00:15:30.526338Z","uid":"5139ce79-3891-4bba-a9bf-6fa86b259028"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: W0620 00:13:56.242396 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: W0620 00:13:56.243379 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer limitrangeInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257173Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmRestoreInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257228Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257241Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer extensionsConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257251Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257261Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257272Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmiPresetInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.257283Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer CRDInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257294Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmiPresetInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257304Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer limitrangeInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257313Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmRestoreInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257322Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer kubeVirtInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257331Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257339Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.257350Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"CDI detected, DataSource integration enabled","pos":"api.go:932","timestamp":"2022-06-20T00:13:56.658067Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer extensionsConfigMapInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658134Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658150Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer dataSourceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.658159Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer kubeVirtInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658170Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer CRDInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658178Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer vmiPresetInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658186Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer limitrangeInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658195Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer vmRestoreInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:56.658203Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmFlavorInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.658212Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmClusterFlavorInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:56.658222Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer dataSourceInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658233Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658242Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658254Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmiPresetInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658263Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer limitrangeInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658272Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmRestoreInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658280Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmFlavorInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658288Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmClusterFlavorInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658297Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer kubeVirtInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658306Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer CRDInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:56.658315Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1293'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:56.658449Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:56.658501Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:56.658519Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:56.658550Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:56.658569Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:56.658604Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:56.658619Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:56.658629Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:56.658673Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:56.658683Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:56.658709Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:56.658769Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:56.658781Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:56.658791Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"certificate with common name 'virt-api.kubevirt.pod.cluster.local' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:13:56.758919Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"certificate with common name 'kubevirt.io:system:client:virt-handler' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:13:56.759127Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"CA update in configmap kube-system/extension-apiserver-authentication detected. Updating from resource version -1 to 40","pos":"ca-manager.go:96","timestamp":"2022-06-20T00:14:10.031033Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: W0620 00:14:10.034371 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.036676Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.373408Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.434277Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.436670Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.487577Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.473545Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.474700Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.441383Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.481178Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.478718Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.491863Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.492165Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.493071Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.495809Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.749319Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:11.145492Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:11.616424Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:13.041176Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:13.918223Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:14.950827Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:15.010372Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:15.290153Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:14:15.290202Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:14:15.290219Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:14:15.336241Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:15.345431Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:16.216627Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:17.642884Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:18.815988Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:19.941153Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:20.044929Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:20.816646Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:22.245406Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:23.124889Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:23.652562Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:24.543659Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:26.840837Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.527329Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.603828Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.604926Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.609039Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.621722Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:29.140367Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:30.039463Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:31.440860Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:33.740478Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:36.044483Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:38.341212Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:40.034355Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:40.540443Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:44.972115Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:45.024412Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:50.035148Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.498921Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.502654Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.517083Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.542727Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:00.034381Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:10.042472Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:11.225206Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:15.008036Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:15.036203Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:20.035364Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.418734Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:15:20.418795Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:15:20.418811Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:15:20.418840Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.442006Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:15:25.442057Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:15:25.442073Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:15:25.442107Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.485126Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.494052Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.497284Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.497622Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.504188Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.844115Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:30.033880Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:30.143956Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:32.448776Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:34.693513Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:37.044087Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:39.343300Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:40.035306Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:45.066312Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:45.071115Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:50.035130Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.483865Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.495848Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.496104Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.503820Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.504511Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.511561Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:00.036294Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:10.035373Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:11.250439Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:15.111339Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:15.124702Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:20.035769Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.534585Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.537091Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.540571Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.540923Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.546138Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:30.036204Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:40.034699Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:40.310686Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:42.661552Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:44.911623Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:45.167691Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:45.167939Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:47.215106Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:49.510787Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:50.035191Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:51.861341Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.616449Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.704063Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.714595Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.718528Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:00.035144Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:10.034996Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:11.284176Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:15.211662Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:15.231427Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:20.049266Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.515864Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.518926Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.533729Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.548442Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:30.035609Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:40.035034Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:45.296240Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:45.311748Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:45.312772Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:50.034715Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:52.751254Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:52.800775Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:55.051879Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:55.101182Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.351714Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.411762Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.504017Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.524809Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.539274Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.557954Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:59.650915Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:59.701448Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:00.034610Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:01.951149Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:02.000972Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:04.251593Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:04.301746Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:10.035090Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:11.314950Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:11.430424Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:15.351675Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:15.352044Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:15.367626Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:24671: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:18:15.371302Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:20.039677Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.505266Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.520608Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.523031Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.530523Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.535384Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.547787Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:30.038798Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:40.034511Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:45.407362Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:45.429003Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","kind":"VirtualMachineInstance","level":"error","msg":"Unable to establish connection to virt-handler","name":"instance","namespace":"default","pos":"dialers.go:63","reason":"Unable to connect to VirtualMachineInstance because phase is Scheduled instead of Running","timestamp":"2022-06-20T00:18:48.222351Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":225,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":400,"timestamp":"2022-06-20T00:18:48.223186Z","url":"/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/instance/console","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"CA update in configmap kubevirt/kubevirt-ca detected. Updating from resource version -1 to 1178","pos":"ca-manager.go:96","timestamp":"2022-06-20T00:18:49.276073Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:50.035274Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.522357Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.573384Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.587644Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.618641Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:00.036840Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:05.493412Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:07.733255Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:10.051290Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:10.082220Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:11.342065Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:12.394994Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:14.684080Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:15.453738Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:15.487013Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:16.991323Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:20.045052Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.510999Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:30.035268Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:40.035246Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:45.495803Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:39877: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:19:45.573872Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:45.596519Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:40548: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:19:45.619685Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:50.035153Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.544461Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.594596Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.628038Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.629320Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.700302Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:00.036559Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:10.035958Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:11.394933Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:15.565129Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:15.623399Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:18.290702Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:20.036200Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:20.540760Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:22.843792Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:24.991211Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.440992Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.564170Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.575748Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.596582Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.597021Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.614128Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:29.790486Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:30.034450Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:40.061239Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:45.636940Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:45.674340Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:50.035683Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.606621Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.616496Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.641140Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.646778Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:00.038274Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:10.035874Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:11.432727Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:15.661931Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:15.692842Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:20.095554Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.606194Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.643103Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.649217Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.675821Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.691572Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.707937Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:30.035976Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:30.728866Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:32.869638Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:35.328354Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:37.370077Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:39.919748Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:40.035209Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:42.221421Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:45.695475Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:45.726465Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:50.038088Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:57.475882Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:57.504731Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:57.587941Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.084548Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.100747Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.165703Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.186639Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:00.038382Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:10.034734Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:11.477365Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:11.710245Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:15.751754Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:15.755607Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:15.770097Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:15.775666Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:20.047756Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.509057Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.518401Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.537128Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.523083Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.545032Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:30.035045Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:40.035038Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:43.286504Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:43.335851Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:45.536365Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:45.588264Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:45.855795Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:45.877970Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:45.884558Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:47.837100Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:47.890685Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:50.036722Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:50.188318Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:50.243245Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:52.438086Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:52.487277Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:54.735663Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:54.788676Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.568011Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.583347Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.608420Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.647794Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:00.035361Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:10.034635Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:11.507281Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:11.737805Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:15.926133Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:15.954122Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:16.000007Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:20.051873Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.502684Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.536006Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:30.035833Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:40.038951Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:46.023870Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:46.027487Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:46.064051Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:46.069968Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:50.034681Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:55.736419Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:55.788687Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.611951Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.671526Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.687850Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.735607Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.744009Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:58.035879Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:58.087036Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:59.989676Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:00.036688Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:00.040987Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:02.635249Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:02.686213Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:04.635315Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:04.685557Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:07.237427Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:07.287137Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:10.036029Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:11.564868Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:11.763921Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:16.074387Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:16.079885Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:16.100222Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:16.104356Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:17.713297Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:17.747672Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:17.751453Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":0,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:17.900446Z","url":"/apis/subresources.kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/instance/console","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":223,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:17.923543Z","url":"/apis/subresources.kubevirt.io/v1/version","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:19.657176Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:20.040834Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:21.173458Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:22.348273Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:22.349375Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.507004Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.516416Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.516884Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.519691Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.543807Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:30.035698Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:32.064850Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:33.453507Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:34.790618Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:40.035028Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:40.512957Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:41.870537Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:43.380437Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:46.105766Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:46.107256Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:46.122330Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:46.122649Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:50.035794Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:54.001538Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:54.063221Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:54.063681Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:55.481804Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.621876Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.632454Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.632658Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.633033Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.635838Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.650557Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.662079Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.789771Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:59.579089Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:00.036653Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:00.965432Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:02.350857Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:03.727426Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:03.775082Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:03.775125Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:05.094794Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:06.367544Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:08.193383Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:08.244277Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:08.773923Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:08.786169Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: W0620 00:12:53.993679 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: W0620 00:12:53.993903 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"we are on kubernetes","pos":"application.go:236","timestamp":"2022-06-20T00:12:54.024103Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"servicemonitor is not defined","pos":"application.go:252","timestamp":"2022-06-20T00:12:54.067776Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: I0620 00:12:55.195279 1 request.go:665] Waited for 1.123905892s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"prometheusrule is not defined","pos":"application.go:267","timestamp":"2022-06-20T00:12:56.196589Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Operator image: quay.io/kubevirt/virt-operator:v0.52.0","pos":"application.go:281","timestamp":"2022-06-20T00:12:56.942069Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorCRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942287Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorValidatingWebhookInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942318Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942330Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorServiceAccountInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942351Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorClusterRoleBindingInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942359Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorServiceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942372Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer installStrategyConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942384Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer namespaceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942395Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer extensionsConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942404Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorClusterRoleInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942415Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorRoleBindingInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942424Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorDeploymentInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942432Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: W0620 00:14:37.603922 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:37.604915Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:37.605022Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"CDI detected, DataVolume integration enabled","pos":"application.go:347","timestamp":"2022-06-20T00:14:37.906174Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: W0620 00:14:37.906243 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:37.906358Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:37.906408Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:37.906449Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorMutatingWebhookInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942441Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:37.906486Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:37.906499Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:37.906531Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:37.906543Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:37.906630Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:37.906643Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:37.906658Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: W0620 00:14:37.908207 1 shared_informer.go:504] resyncPeriod 5m0s is smaller than resyncCheckPeriod 19h15m21.324440719s and the informer has already started. Changing it to 19h15m21.324440719s | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: W0620 00:14:37.908481 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"certificate with common name 'virt-controller.kubevirt.pod.cluster.local' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:14:37.908823Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: I0620 00:14:37.909196 1 leaderelection.go:248] attempting to acquire leader lease kubevirt/virt-controller... | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"action":"listening","component":"virt-controller","interface":"0.0.0.0","level":"info","port":8443,"pos":"application.go:417","service":"http","timestamp":"2022-06-20T00:14:37.909389Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.410498Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:15:20.412673Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:15:20.413832Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.443059Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:15:25.443123Z"} | |
kubevirt/virt-controller-749d8d99d4-wz5m4[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:15:25.443160Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: W0620 00:14:25.367142 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.368489Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.371876Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"CDI detected, DataVolume integration enabled","pos":"application.go:347","timestamp":"2022-06-20T00:14:25.573598Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: W0620 00:14:25.573672 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:25.573762Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:25.573812Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:25.573848Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:25.573862Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:25.573901Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:25.573964Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:25.573976Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:25.574043Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:14:25.574060Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:14:25.574072Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: W0620 00:14:25.575879 1 shared_informer.go:504] resyncPeriod 5m0s is smaller than resyncCheckPeriod 19h15m21.324440719s and the informer has already started. Changing it to 19h15m21.324440719s | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: W0620 00:14:25.576108 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: I0620 00:14:25.576754 1 leaderelection.go:248] attempting to acquire leader lease kubevirt/virt-controller... | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"certificate with common name 'virt-controller.kubevirt.pod.cluster.local' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:14:25.577378Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"action":"listening","component":"virt-controller","interface":"0.0.0.0","level":"info","port":8443,"pos":"application.go:417","service":"http","timestamp":"2022-06-20T00:14:25.577426Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: I0620 00:14:25.582782 1 leaderelection.go:258] successfully acquired lease kubevirt/virt-controller | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584687Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmimInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584724Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmFlavorInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584734Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer kubeVirtNodeInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584745Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmpool","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584754Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer persistentVolumeClaimInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584763Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer controllerRevisionInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584771Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer storageClassInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584781Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer podInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584789Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer dataVolumeInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584798Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer migrationPolicyInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584806Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer kubeVirtPodInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584817Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"SKIPPING informer kubeVirtInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:14:25.584827Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmirsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584837Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmSnapshotInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584846Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmSnapshotContentInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584854Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmRestoreInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584862Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer cdiInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584872Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmClusterFlavorInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584880Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"SKIPPING informer CRDInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:14:25.584888Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer cdiConfigInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584897Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING informer vmiInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:14:25.584905Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"STARTING controllers with following threads : node 3, vmi 10, replicaset 3, vm 3, migration 3, evacuation 3, disruptionBudget 3\n","pos":"application.go:444","timestamp":"2022-06-20T00:14:25.584937Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting vmi collector","pos":"collector.go:83","timestamp":"2022-06-20T00:14:25.584948Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting performance and scale metrics","pos":"register.go:30","timestamp":"2022-06-20T00:14:25.584967Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting evacuation controller.","pos":"evacuation.go:270","timestamp":"2022-06-20T00:14:25.612347Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting disruption budget controller.","pos":"disruptionbudget.go:314","timestamp":"2022-06-20T00:14:25.612402Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting node controller.","pos":"node.go:110","timestamp":"2022-06-20T00:14:25.612432Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting vmi controller.","pos":"vmi.go:238","timestamp":"2022-06-20T00:14:25.612458Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting VirtualMachineInstanceReplicaSet controller.","pos":"replicaset.go:112","timestamp":"2022-06-20T00:14:25.612488Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting pool controller.","pos":"pool.go:424","timestamp":"2022-06-20T00:14:25.612513Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting VirtualMachine controller.","pos":"vm.go:163","timestamp":"2022-06-20T00:14:25.612533Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting migration controller.","pos":"migration.go:170","timestamp":"2022-06-20T00:14:25.618818Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting snapshot controller.","pos":"snapshot_base.go:196","timestamp":"2022-06-20T00:14:25.618863Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorAPIServiceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942449Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorPodDisruptionBudgetInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942457Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer secretsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942466Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer FakeOperatorSCC","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942476Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer FakeOperatorPrometheusRuleInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942488Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942497Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorRoleInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942506Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorDaemonSetInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942514Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer installStrategyJobsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942523Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer operatorPodsInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942532Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer OperatorConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942541Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"STARTING informer FakeOperatorServiceMonitor","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:12:56.942549Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:406","timestamp":"2022-06-20T00:12:57.042893Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: I0620 00:12:57.043000 1 leaderelection.go:248] attempting to acquire leader lease kubevirt/virt-operator... | |
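[editor note] The two lines above show the virt-operator replica waiting to win the kubevirt/virt-operator lease; only the leader then runs the install/update logic. As a rough, hypothetical sketch of that client-go leader-election pattern (not KubeVirt's actual code; the identity and timings below are assumptions):

// Hypothetical sketch of the client-go leader election seen in the
// virt-operator log above. Only the winning replica does leader-only work;
// the others block until the "kubevirt/virt-operator" lease is released.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // in-cluster config, as the virt-* pods use
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kubevirt", Name: "virt-operator"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// the leader starts its informers and controllers here
			},
			OnStoppedLeading: func() {
				// lost the lease: stop doing leader-only work
			},
		},
	})
}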
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting restore controller.","pos":"restore_base.go:93","timestamp":"2022-06-20T00:14:25.618878Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Starting workload update controller.","pos":"workload-updater.go:220","timestamp":"2022-06-20T00:14:25.618896Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:14:25.623859Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:14:25.638731Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:15:06.334928Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:15:06.335028Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.410547Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:15:20.411705Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:15:20.413459Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.448786Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"setting rate limiter to 20 QPS and 30 Burst","pos":"application.go:398","timestamp":"2022-06-20T00:15:25.453274Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"set log verbosity to 2","pos":"application.go:405","timestamp":"2022-06-20T00:15:25.453412Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:16:03.427824Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:16:03.427967Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:16:40.865996Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:16:40.866069Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:17:42.019126Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:17:42.019196Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"error","msg":"Updating api version annotations failed","name":"instance","namespace":"default","pos":"vm.go:233","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:18:31.711701Z","uid":"de0082f5-1e03-49bb-abdc-b46ecf5af7d1"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"re-enqueuing VirtualMachine default/instance","pos":"vm.go:199","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:18:31.711766Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"info","msg":"Starting VM due to runStrategy: Always","name":"instance","namespace":"default","pos":"vm.go:633","timestamp":"2022-06-20T00:18:31.725304Z","uid":"de0082f5-1e03-49bb-abdc-b46ecf5af7d1"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"info","msg":"Started VM by creating the new virtual machine instance instance","name":"instance","namespace":"default","pos":"vm.go:850","timestamp":"2022-06-20T00:18:31.759650Z","uid":"de0082f5-1e03-49bb-abdc-b46ecf5af7d1"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"info","msg":"arguments for container-disk \"volumeboot\": --copy-path /var/run/kubevirt-ephemeral-disks/container-disk-data/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/disk_0","name":"instance","namespace":"default","pos":"container-disk.go:310","timestamp":"2022-06-20T00:18:31.799623Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: E0620 00:18:31.828564 1 util.go:130] Operation cannot be fulfilled on virtualmachines.kubevirt.io "instance": the object has been modified; please apply your changes to the latest version and try again | |
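[editor note] The repeated "Operation cannot be fulfilled ... the object has been modified" entries are ordinary optimistic-concurrency (409 Conflict) errors: the controller re-enqueues the VirtualMachine and retries against the latest resource version, so the messages are noisy but harmless here. Outside a controller, the usual client-go way to express the same retry is retry.RetryOnConflict; a minimal, purely illustrative sketch (the ConfigMap name and namespace are made up, this is not KubeVirt code):

// Illustrative only: the standard client-go pattern for handling the
// 409 Conflict errors seen above. Re-read the latest version and re-apply
// the change on every conflict, which is exactly what "please apply your
// changes to the latest version and try again" asks for.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// hypothetical object; a controller would do this with its own resource
		cm, err := client.CoreV1().ConfigMaps("default").Get(context.TODO(), "example", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["touched"] = "true"
		_, err = client.CoreV1().ConfigMaps("default").Update(context.TODO(), cm, metav1.UpdateOptions{})
		return err // a Conflict error triggers another attempt with fresh data
	})
	fmt.Println("update result:", err)
}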
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:18:37.101611Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:18:37.101695Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:19:08.121293Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:19:08.121621Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:19:43.824493Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:19:43.825128Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:20:35.688166Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:20:35.690942Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:21:11.932132Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:21:11.934976Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:21:51.064673Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:21:51.065242Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:22:31.762303Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:22:31.763809Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:23:36.934325Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:23:36.937621Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"error","msg":"Skipping TSC frequency updates on all nodes","pos":"nodetopologyupdater.go:54","reason":"failed to calculate lowest TSC frequency for nodes: no schedulable node exposes a tsc-frequency","timestamp":"2022-06-20T00:24:18.156999Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"TSC Freqency node update status: 0 updated, 0 skipped, 0 errors","pos":"nodetopologyupdater.go:47","timestamp":"2022-06-20T00:24:18.161363Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: E0620 00:24:36.230249 1 util.go:130] Operation cannot be fulfilled on virtualmachines.kubevirt.io "instance-full": the object has been modified; please apply your changes to the latest version and try again | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"info","msg":"Looking for DataVolume Ref","name":"disk-dv-instance-full","namespace":"kube-public","pos":"vm.go:1428","timestamp":"2022-06-20T00:24:36.277628Z","uid":"55dce5ae-78d6-4f44-9185-15a8fa714053"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"info","msg":"DataVolume created because disk-dv-instance-full was added.","name":"disk-dv-instance-full","namespace":"kube-public","pos":"vm.go:1435","timestamp":"2022-06-20T00:24:36.287044Z","uid":"55dce5ae-78d6-4f44-9185-15a8fa714053"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"error","msg":"Updating the VirtualMachine status failed.","name":"instance-full","namespace":"kube-public","pos":"vm.go:311","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-full\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:36.286962Z","uid":"4d287919-4c7e-4c9a-9097-dbaf6ad8d5c8"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"re-enqueuing VirtualMachine kube-public/instance-full","pos":"vm.go:199","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-full\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:36.287380Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"error","msg":"Updating the VirtualMachine status failed.","name":"instance-full","namespace":"kube-public","pos":"vm.go:311","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-full\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:36.308454Z","uid":"4d287919-4c7e-4c9a-9097-dbaf6ad8d5c8"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"re-enqueuing VirtualMachine kube-public/instance-full","pos":"vm.go:199","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-full\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:36.308599Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"warning","msg":"No VolumeSnapshotClass for standard","pos":"snapshot.go:560","timestamp":"2022-06-20T00:24:36.364047Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"warning","msg":"No VolumeSnapshotClass for standard","pos":"snapshot.go:560","timestamp":"2022-06-20T00:24:36.399190Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"error","msg":"Updating api version annotations failed","name":"instance-almost-default","namespace":"default","pos":"vm.go:233","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-almost-default\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:37.702026Z","uid":"fa282040-dc37-4fc8-8c79-3950cf6f8d99"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"re-enqueuing VirtualMachine default/instance-almost-default","pos":"vm.go:199","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-almost-default\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:37.702093Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: E0620 00:24:37.737463 1 util.go:130] Operation cannot be fulfilled on virtualmachines.kubevirt.io "instance-almost-default": the object has been modified; please apply your changes to the latest version and try again | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: E0620 00:24:39.061859 1 util.go:130] Operation cannot be fulfilled on virtualmachines.kubevirt.io "instance-running-false": the object has been modified; please apply your changes to the latest version and try again | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","kind":"","level":"error","msg":"Updating the VirtualMachine status failed.","name":"instance-running-false","namespace":"default","pos":"vm.go:311","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-running-false\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:39.085289Z","uid":"2bc62361-abb3-4b92-83ae-57bda699541c"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"info","msg":"re-enqueuing VirtualMachine default/instance-running-false","pos":"vm.go:199","reason":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"instance-running-false\": the object has been modified; please apply your changes to the latest version and try again","timestamp":"2022-06-20T00:24:39.085346Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"warning","msg":"No VolumeSnapshotClass for standard","pos":"snapshot.go:560","timestamp":"2022-06-20T00:24:43.493470Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: E0620 00:24:43.519475 1 util.go:130] Operation cannot be fulfilled on virtualmachines.kubevirt.io "instance-full": the object has been modified; please apply your changes to the latest version and try again | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"warning","msg":"No VolumeSnapshotClass for standard","pos":"snapshot.go:560","timestamp":"2022-06-20T00:24:43.519663Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"warning","msg":"No VolumeSnapshotClass for standard","pos":"snapshot.go:560","timestamp":"2022-06-20T00:24:43.533560Z"} | |
kubevirt/virt-controller-749d8d99d4-csgtw[virt-controller]: {"component":"virt-controller","level":"warning","msg":"No VolumeSnapshotClass for standard","pos":"snapshot.go:560","timestamp":"2022-06-20T00:24:47.891113Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '807'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:06.812341Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1233'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:30.071486Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"CA update in configmap kube-system/extension-apiserver-authentication detected. Updating from resource version -1 to 40","pos":"ca-manager.go:96","timestamp":"2022-06-20T00:13:40.071759Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1293'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:40.084064Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:15.287303Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.399842Z"} | |
kubevirt/virt-operator-564f568975-g2bh6[virt-operator]: {"component":"virt-operator","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.450806Z"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:10.075126Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: W0620 00:13:55.593519 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: W0620 00:13:55.594502 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.620892Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer CRDInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.620959Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmiPresetInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.620973Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer limitrangeInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.620986Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmRestoreInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.620995Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer kubeVirtInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.621004Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer extensionsConfigMapInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.621013Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer CRDInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621023Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmiPresetInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621033Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer limitrangeInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621042Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmRestoreInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621051Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer kubeVirtInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621059Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621090Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.621101Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"CDI detected, DataSource integration enabled","pos":"api.go:932","timestamp":"2022-06-20T00:13:55.921928Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer vmRestoreInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.921993Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer dataSourceInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.922011Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmFlavorInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.922025Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer CRDInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.922040Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer vmiPresetInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.922063Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.922076Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer limitrangeInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.922090Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"STARTING informer vmClusterFlavorInformer","pos":"virtinformers.go:305","timestamp":"2022-06-20T00:13:55.922099Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer kubeVirtInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.922117Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"SKIPPING informer extensionsConfigMapInformer","pos":"virtinformers.go:302","timestamp":"2022-06-20T00:13:55.922131Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmiPresetInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922145Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmRestoreInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922155Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer dataSourceInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922172Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmFlavorInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922182Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer CRDInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922195Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922204Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer extensionsKubeVirtCAConfigMapInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922219Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer limitrangeInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922228Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer vmClusterFlavorInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922236Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Waiting for cache sync of informer kubeVirtInformer","pos":"virtinformers.go:317","timestamp":"2022-06-20T00:13:55.922249Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1293'","pos":"configuration.go:320","timestamp":"2022-06-20T00:13:55.922637Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:55.922679Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:55.922699Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:55.922737Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:55.922760Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:55.922773Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:55.922787Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:55.924566Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:55.924677Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:55.924698Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:13:55.924723Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:13:55.924734Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:55.924759Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:13:55.924789Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"certificate with common name 'virt-api.kubevirt.pod.cluster.local' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:13:56.022832Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"certificate with common name 'kubevirt.io:system:client:virt-handler' retrieved.","pos":"cert-manager.go:198","timestamp":"2022-06-20T00:13:56.023052Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"CA update in configmap kube-system/extension-apiserver-authentication detected. Updating from resource version -1 to 40","pos":"ca-manager.go:96","timestamp":"2022-06-20T00:14:10.071493Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: W0620 00:14:10.131558 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.146592Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.402904Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.418742Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.435147Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.457818Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.458833Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.458879Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.459774Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:10.697827Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:11.230787Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:11.566298Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:12.991873Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:13.867018Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:14.949101Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:15.004539Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:15.303822Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1485'","pos":"configuration.go:320","timestamp":"2022-06-20T00:14:15.329876Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:14:15.329916Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:14:15.329928Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:14:15.329959Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:16.167431Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:17.591111Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:17.967495Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:19.890896Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:20.046017Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:20.765937Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:22.191245Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:23.077306Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:23.654562Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:24.493492Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:26.790640Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.594336Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.615801Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.655653Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.659248Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:27.665587Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:29.091141Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:30.044264Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:31.390609Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:33.690863Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:35.997922Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:38.290849Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:40.044077Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:40.590845Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:44.979088Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:45.022766Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:50.043960Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.506914Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.527343Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.535208Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.536314Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.536668Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:14:57.537629Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:00.043409Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:10.049432Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:11.309408Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:15.007782Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:15.035898Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:20.043298Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1805'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:20.402848Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:15:20.405244Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:15:20.405298Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:15:20.405353Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"Updating cluster config from KubeVirt to resource version '1823'","pos":"configuration.go:320","timestamp":"2022-06-20T00:15:25.442294Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for the API to 5 QPS and 10 Burst","pos":"api.go:1002","timestamp":"2022-06-20T00:15:25.442354Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"setting rate limiter for webhooks to 200 QPS and 400 Burst","pos":"api.go:1006","timestamp":"2022-06-20T00:15:25.442369Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"set log verbosity to 2","pos":"api.go:993","timestamp":"2022-06-20T00:15:25.442419Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.537157Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.551931Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.561059Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.565821Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.574258Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:27.796350Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:30.043768Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:30.094387Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:32.402480Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:34.643405Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:36.996115Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:39.293348Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:40.042901Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:45.042747Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:45.059668Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:50.043158Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.552122Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.559637Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.560707Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:15:57.570681Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:00.043678Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:10.043239Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:11.335269Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:15.100046Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:15.123619Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:20.043897Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.508298Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.516806Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.519141Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.527406Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:27.531298Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:30.043912Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:40.043112Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:40.261398Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:42.611076Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:44.860975Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:45.167728Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:45.182545Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:47.163679Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:49.483744Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:50.043606Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:51.811566Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.677594Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.684150Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.698112Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.713434Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.713783Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:16:57.740965Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:00.043029Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:10.043619Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:11.395362Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:15.220049Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:15.242632Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:57583: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:17:15.248495Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:20.064287Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.537289Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.545289Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.559133Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.565402Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.579658Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:27.583316Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:30.047682Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:40.043266Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:45.314637Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:37270: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:17:45.327202Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:50.043452Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.504703Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.528178Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.552660Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.565936Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.570656Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:17:57.579920Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:00.042826Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:10.057997Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:15.370685Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:20.058632Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:22.205600Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:23.616452Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:24.877101Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.526884Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.538823Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.530020Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:27.543792Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:30.047700Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:30.338316Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:34.134740Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:36.593159Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:39.079669Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:40.042487Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:41.791315Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:44.584831Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:45.407398Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:45.436768Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:38676: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:18:45.441915Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:47.095570Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:52593: read tcp 172.17.0.10:8443-\u003e172.17.0.1:52593: read: connection reset by peer\n","pos":"server.go:3160","timestamp":"2022-06-20T00:18:48.787540Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:50.043550Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:50.658089Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.617496Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.639788Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.672278Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.683466Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.685915Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:18:57.694328Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:00.044936Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:05.432757Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:07.632451Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:10.035148Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:10.062892Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:11.476485Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:12.337028Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:14.632804Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:15.463774Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:15.494735Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:16.932157Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:20.072649Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.617072Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.620479Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.620901Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.623054Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.630309Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.630720Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.631099Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.648050Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:27.655835Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:30.043742Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:40.043693Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:45.522312Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:45.539641Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:50.044085Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.592610Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.593444Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.639597Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.653107Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:19:57.679908Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:00.046232Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:10.044347Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:11.507227Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:15.590057Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:15.636718Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:18.241014Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:20.046475Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:20.490616Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:22.791742Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:24.940866Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.394741Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.564766Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.632910Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.644947Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.645401Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:27.645800Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:29.741095Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:30.050581Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:40.064552Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:45.615702Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:45.670751Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:50.044968Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.551642Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.571653Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.576899Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.591465Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.599303Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:20:57.641173Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:00.046164Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:10.044423Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:11.669139Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:15.664885Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:15.700284Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:20.053936Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.503244Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.506311Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.512698Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:27.529293Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:30.047578Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:30.669957Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:32.348577Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:35.271691Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:37.320242Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:39.869918Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:40.044827Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:42.170376Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:45.699051Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:45.731027Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:50.045905Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:57.664705Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:57.794763Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:57.798904Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.045569Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.054265Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.106420Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.114167Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.132177Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.135158Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.138617Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.166683Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.177756Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:21:58.209181Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:00.047042Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:10.044192Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:20.069873Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.599602Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.615386Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.619210Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.631079Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:27.636054Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:30.044539Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:40.044209Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:45.870049Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","level":"info","msg":"http: TLS handshake error from 172.17.0.1:10691: EOF\n","pos":"server.go:3160","timestamp":"2022-06-20T00:22:45.893237Z"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:50.044558Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.514374Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.524855Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.528204Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.530700Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.542737Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:22:57.596736Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:00.043987Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:10.043506Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:15.950920Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:20.075088Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.522136Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.525371Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.548085Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.619702Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.643991Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.648073Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.658991Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:27.815956Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:30.047658Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:40.045877Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:50.043618Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.629836Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.697066Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.700279Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.718852Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:23:57.747770Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:00.049140Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:10.042810Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:20.066361Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.567129Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.569507Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.573423Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.578284Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:27.578959Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:30.046796Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:40.044553Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:50.043173Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.520462Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.526206Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:24:57.619643Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:00.042965Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-x77j5[virt-api]: {"component":"virt-api","contentLength":370,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:10.111809Z","url":"/apis/subresources.kubevirt.io/v1/healthz","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:10.513451Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:10.550516Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:11.634850Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":67779,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:11.787220Z","url":"/openapi/v2","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:12.794439Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:12.843802Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:15.093883Z","url":"/apis/subresources.kubevirt.io/v1?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:15.144515Z","url":"/apis/subresources.kubevirt.io/v1alpha3?timeout=32s","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:16.125313Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:16.125531Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:16.141252Z","url":"/apis/subresources.kubevirt.io/v1alpha3","username":"-"} | |
kubevirt/virt-api-77df5c4f87-fj857[virt-api]: {"component":"virt-api","contentLength":2374,"level":"info","method":"GET","pos":"filter.go:46","proto":"HTTP/1.1","remoteAddress":"172.17.0.1","statusCode":200,"timestamp":"2022-06-20T00:25:16.141274Z","url":"/apis/subresources.kubevirt.io/v1","username":"-"} |
default/molecule-knmpm[molecule]: py38-ansible_4 create: /opt/molecule_kubevirt/.tox/py38-ansible_4 | |
default/molecule-knmpm[molecule]: py38-ansible_4 installdeps: ansible>=4.0,<5.0, selinux | |
default/molecule-knmpm[molecule]: py38-ansible_4 develop-inst: /opt/molecule_kubevirt | |
default/molecule-knmpm[molecule]: py38-ansible_4 installed: ansi2html==1.7.0,ansible==4.10.0,ansible-compat==2.1.0,ansible-core==2.11.12,arrow==1.2.2,attrs==21.4.0,binaryornot==0.4.4,cachetools==5.2.0,Cerberus==1.3.2,certifi==2022.6.15,cffi==1.15.0,cfgv==3.3.1,chardet==4.0.0,charset-normalizer==2.0.12,click==8.1.3,click-help-colors==0.9.1,commonmark==0.9.1,cookiecutter==2.1.1,coverage==6.4.1,cryptography==37.0.2,distlib==0.3.4,distro==1.7.0,enrich==1.2.7,execnet==1.9.0,filelock==3.7.1,google-auth==2.8.0,identify==2.5.1,idna==3.3,importlib-resources==5.8.0,iniconfig==1.1.1,Jinja2==3.1.2,jinja2-time==0.2.0,jsonschema==4.6.0,kubernetes==11.0.0,MarkupSafe==2.1.1,molecule==4.0.0,-e git+https://github.com/jseguillon/molecule-kubevirt@08ae5a4f4bb563582d5b6f95586acacc7e7dc7ee#egg=molecule_kubevirt,more-itertools==8.13.0,nodeenv==1.6.0,oauthlib==3.2.0,openshift==0.11.2,packaging==21.3,pexpect==4.8.0,platformdirs==2.5.2,pluggy==1.0.0,pre-commit==2.19.0,ptyprocess==0.7.0,py==1.11.0,pyasn1==0.4.8,pyasn1-modules==0.2.8,pycparser==2.21,Pygments==2.12.0,pyparsing==3.0.9,pyrsistent==0.18.1,pytest==7.1.2,pytest-cov==3.0.0,pytest-forked==1.4.0,pytest-helpers-namespace==2021.12.29,pytest-html==3.1.1,pytest-metadata==2.0.1,pytest-mock==3.7.0,pytest-plus==0.2,pytest-testinfra==6.8.0,pytest-xdist==2.5.0,python-dateutil==2.8.2,python-slugify==6.1.2,python-string-utils==1.0.0,PyYAML==6.0,requests==2.28.0,requests-oauthlib==1.3.1,resolvelib==0.5.4,rich==12.4.4,rsa==4.8,ruamel.yaml==0.17.21,ruamel.yaml.clib==0.2.6,selinux==0.2.1,six==1.16.0,subprocess-tee==0.3.5,text-unidecode==1.3,toml==0.10.2,tomli==2.0.1,typing_extensions==4.2.0,urllib3==1.26.9,virtualenv==20.14.1,websocket-client==1.3.2,zipp==3.8.0 | |
default/molecule-knmpm[molecule]: py38-ansible_4 run-test-pre: PYTHONHASHSEED='3743274778' | |
default/molecule-knmpm[molecule]: py38-ansible_4 run-test: commands[0] | pip check | |
default/molecule-knmpm[molecule]: No broken requirements found. | |
default/molecule-knmpm[molecule]: py38-ansible_4 run-test: commands[1] | python -m pytest -p no:cov --collect-only | |
default/molecule-knmpm[molecule]: ============================= test session starts ============================== | |
default/molecule-knmpm[molecule]: platform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0 | |
default/molecule-knmpm[molecule]: cachedir: .tox/py38-ansible_4/.pytest_cache | |
default/molecule-knmpm[molecule]: rootdir: /opt/molecule_kubevirt, configfile: setup.cfg | |
default/molecule-knmpm[molecule]: plugins: helpers-namespace-2021.12.29, plus-0.2, xdist-2.5.0, mock-3.7.0, forked-1.4.0, metadata-2.0.1, html-3.1.1, testinfra-6.8.0 | |
default/molecule-knmpm[molecule]: [DEPRECATION WARNING]: ANSIBLE_CALLABLE_WHITELIST option, normalizing names to | |
default/molecule-knmpm[molecule]: new standard, use ANSIBLE_CALLABLE_ENABLED instead. This feature will be | |
default/molecule-knmpm[molecule]: removed from ansible-core in version 2.15. Deprecation warnings can be disabled | |
default/molecule-knmpm[molecule]: by setting deprecation_warnings=False in ansible.cfg. | |
default/molecule-knmpm[molecule]: collected 5 items | |
default/molecule-knmpm[molecule]: | |
default/molecule-knmpm[molecule]: <Package test> | |
default/molecule-knmpm[molecule]: <Module test_driver.py> | |
default/molecule-knmpm[molecule]: <Function test_driver_is_detected> | |
default/molecule-knmpm[molecule]: <Module test_init.py> | |
default/molecule-knmpm[molecule]: <Function test_command_init_and_test_scenario> | |
default/molecule-knmpm[molecule]: <Module test_scenario_tests.py> | |
default/molecule-knmpm[molecule]: <Class TestClass> | |
default/molecule-knmpm[molecule]: <Function test_instance_spec[kube-public-instance-full-instance-full-notmolecule]> | |
default/molecule-knmpm[molecule]: <Function test_instance_spec[default-instance-almost-default-instance-almost-default-molecule]> | |
default/molecule-knmpm[molecule]: <Function test_instance_spec[default-instance-running-false--molecule]> | |
default/molecule-knmpm[molecule]: | |
default/molecule-knmpm[molecule]: ========================== 5 tests collected in 1.42s ========================== | |
default/molecule-knmpm[molecule]: py38-ansible_4 run-test: commands[2] | python -m pytest -l | |
default/molecule-knmpm[molecule]: ============================= test session starts ============================== | |
default/molecule-knmpm[molecule]: platform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0 | |
default/molecule-knmpm[molecule]: cachedir: .tox/py38-ansible_4/.pytest_cache | |
default/molecule-knmpm[molecule]: rootdir: /opt/molecule_kubevirt, configfile: setup.cfg | |
default/molecule-knmpm[molecule]: plugins: helpers-namespace-2021.12.29, plus-0.2, xdist-2.5.0, mock-3.7.0, forked-1.4.0, metadata-2.0.1, html-3.1.1, testinfra-6.8.0, cov-3.0.0 | |
default/molecule-knmpm[molecule]: [DEPRECATION WARNING]: ANSIBLE_CALLABLE_WHITELIST option, normalizing names to | |
default/molecule-knmpm[molecule]: new standard, use ANSIBLE_CALLABLE_ENABLED instead. This feature will be | |
default/molecule-knmpm[molecule]: removed from ansible-core in version 2.15. Deprecation warnings can be disabled | |
default/molecule-knmpm[molecule]: by setting deprecation_warnings=False in ansible.cfg. | |
default/molecule-knmpm[molecule]: collected 5 items | |
default/molecule-knmpm[molecule]: | |
default/molecule-knmpm[molecule]: molecule_kubevirt/test/test_driver.py .                                  [ 20%] | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:76","timestamp":"2022-06-20T00:18:45.911855Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:79","timestamp":"2022-06-20T00:18:45.915507Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu:///system","pos":"libvirt.go:495","timestamp":"2022-06-20T00:18:45.918872Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon failed: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')","pos":"libvirt.go:503","timestamp":"2022-06-20T00:18:45.924230Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"libvirt version: 7.6.0, package: 6.el8s (CBS \[email protected]\u003e, 2021-10-29-15:04:36, )","subcomponent":"libvirt","thread":"39","timestamp":"2022-06-20T00:18:45.977000Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"hostname: instance","subcomponent":"libvirt","thread":"39","timestamp":"2022-06-20T00:18:45.977000Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"error","msg":"internal error: Child process (dmidecode -q -t 0,1,2,3,4,11,17) unexpected exit status 1: /dev/mem: No such file or directory","pos":"virCommandWait:2749","subcomponent":"libvirt","thread":"39","timestamp":"2022-06-20T00:18:45.977000Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Connected to libvirt daemon","pos":"libvirt.go:511","timestamp":"2022-06-20T00:18:46.427934Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Registered libvirt event notify callback","pos":"client.go:509","timestamp":"2022-06-20T00:18:46.438211Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Marked as ready","pos":"virt-launcher.go:80","timestamp":"2022-06-20T00:18:46.440886Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Hardware emulation device '/dev/kvm' not present. Using software emulation.","pos":"converter.go:1194","timestamp":"2022-06-20T00:18:47.637971Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"In-kernel virtio-net device emulation '/dev/vhost-net' not present. Falling back to QEMU userland emulation.","pos":"converter.go:1208","timestamp":"2022-06-20T00:18:47.638021Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Executing PreStartHook on VMI pod environment","name":"instance","namespace":"default","pos":"manager.go:513","timestamp":"2022-06-20T00:18:47.641028Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"error","msg":"could not read secret data from source: /var/run/kubevirt-private/secret/cloudinit/networkdata","pos":"cloud-init.go:290","reason":"open /var/run/kubevirt-private/secret/cloudinit/networkdata: no such file or directory","timestamp":"2022-06-20T00:18:47.641472Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"error","msg":"could not read secret data from source: /var/run/kubevirt-private/secret/cloudinit/networkData","pos":"cloud-init.go:290","reason":"open /var/run/kubevirt-private/secret/cloudinit/networkData: no such file or directory","timestamp":"2022-06-20T00:18:47.641525Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Starting PreCloudInitIso hook","name":"instance","namespace":"default","pos":"manager.go:534","timestamp":"2022-06-20T00:18:47.641543Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Found nameservers in /etc/resolv.conf: \n`\u0000\n","pos":"network.go:286","timestamp":"2022-06-20T00:18:47.644877Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Found search domains in /etc/resolv.conf: default.svc.cluster.local svc.cluster.local cluster.local paxcb4zidlwe3mfrpko3itqnmd.bx.internal.cloudapp.net","pos":"network.go:287","timestamp":"2022-06-20T00:18:47.645223Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Starting SingleClientDHCPServer","pos":"server.go:64","timestamp":"2022-06-20T00:18:47.645427Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Driver cache mode for /var/run/kubevirt-ephemeral-disks/disk-data/boot/disk.qcow2 set to none","pos":"converter.go:456","timestamp":"2022-06-20T00:18:47.658740Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Driver cache mode for /var/run/kubevirt-ephemeral-disks/cloud-init-data/default/instance/noCloud.iso set to none","pos":"converter.go:456","timestamp":"2022-06-20T00:18:47.665000Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Domain XML generated. Base64 dump PGRvbWFpbiB0eXBlPSJxZW11IiB4bWxuczpxZW11PSJodHRwOi8vbGlidmlydC5vcmcvc2NoZW1hcy9kb21haW4vcWVtdS8xLjAiPgoJPG5hbWU+ZGVmYXVsdF9pbnN0YW5jZTwvbmFtZT4KCTxtZW1vcnkgdW5pdD0iYiI+MjE0NzQ4MzY0ODwvbWVtb3J5PgoJPG9zPgoJCTx0eXBlIGFyY2g9Ing4Nl82NCIgbWFjaGluZT0icTM1Ij5odm08L3R5cGU+CgkJPHNtYmlvcyBtb2RlPSJzeXNpbmZvIj48L3NtYmlvcz4KCTwvb3M+Cgk8c3lzaW5mbyB0eXBlPSJzbWJpb3MiPgoJCTxzeXN0ZW0+CgkJCTxlbnRyeSBuYW1lPSJ1dWlkIj4wOTI4MmM0MC1kZDE2LTU5NTYtOTIxMi1kZWZjNTFjODM3NGY8L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0ibWFudWZhY3R1cmVyIj5LdWJlVmlydDwvZW50cnk+CgkJCTxlbnRyeSBuYW1lPSJmYW1pbHkiPkt1YmVWaXJ0PC9lbnRyeT4KCQkJPGVudHJ5IG5hbWU9InByb2R1Y3QiPk5vbmU8L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0ic2t1Ij48L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0idmVyc2lvbiI+PC9lbnRyeT4KCQk8L3N5c3RlbT4KCQk8Ymlvcz48L2Jpb3M+CgkJPGJhc2VCb2FyZD48L2Jhc2VCb2FyZD4KCQk8Y2hhc3Npcz48L2NoYXNzaXM+Cgk8L3N5c2luZm8+Cgk8ZGV2aWNlcz4KCQk8aW50ZXJmYWNlIHR5cGU9ImV0aGVybmV0Ij4KCQkJPHNvdXJjZT48L3NvdXJjZT4KCQkJPHRhcmdldCBkZXY9InRhcDAiIG1hbmFnZWQ9Im5vIj48L3RhcmdldD4KCQkJPG1vZGVsIHR5cGU9InZpcnRpby1ub24tdHJhbnNpdGlvbmFsIj48L21vZGVsPgoJCQk8bWFjIGFkZHJlc3M9IjAyOjQyOmFjOjExOjAwOjEwIj48L21hYz4KCQkJPG10dSBzaXplPSIxNTAwIj48L210dT4KCQkJPGFsaWFzIG5hbWU9InVhLWRlZmF1bHQiPjwvYWxpYXM+CgkJCTxyb20gZW5hYmxlZD0ibm8iPjwvcm9tPgoJCTwvaW50ZXJmYWNlPgoJCTxjaGFubmVsIHR5cGU9InVuaXgiPgoJCQk8dGFyZ2V0IG5hbWU9Im9yZy5xZW11Lmd1ZXN0X2FnZW50LjAiIHR5cGU9InZpcnRpbyI+PC90YXJnZXQ+CgkJPC9jaGFubmVsPgoJCTxjb250cm9sbGVyIHR5cGU9InVzYiIgaW5kZXg9IjAiIG1vZGVsPSJub25lIj48L2NvbnRyb2xsZXI+CgkJPGNvbnRyb2xsZXIgdHlwZT0ic2NzaSIgaW5kZXg9IjAiIG1vZGVsPSJ2aXJ0aW8tbm9uLXRyYW5zaXRpb25hbCI+PC9jb250cm9sbGVyPgoJCTxjb250cm9sbGVyIHR5cGU9InZpcnRpby1zZXJpYWwiIGluZGV4PSIwIiBtb2RlbD0idmlydGlvLW5vbi10cmFuc2l0aW9uYWwiPjwvY29udHJvbGxlcj4KCQk8dmlkZW8+CgkJCTxtb2RlbCB0eXBlPSJ2Z2EiIGhlYWRzPSIxIiB2cmFtPSIxNjM4NCI+PC9tb2RlbD4KCQk8L3ZpZGVvPgoJCTxncmFwaGljcyB0eXBlPSJ2bmMiPgoJCQk8bGlzdGVuIHR5cGU9InNvY2tldCIgc29ja2V0PSIvdmFyL3J1bi9rdWJldmlydC1wcml2YXRlL2VhN2U3N2Y1LWM2NWUtNGEwNS04NTM3LWJiNWQxMDhiYjViMC92aXJ0LXZuYyI+PC9saXN0ZW4+CgkJPC9ncmFwaGljcz4KCQk8bWVtYmFsbG9vbiBtb2RlbD0idmlydGlvLW5vbi10cmFuc2l0aW9uYWwiPgoJCQk8c3RhdHMgcGVyaW9kPSIxMCI+PC9zdGF0cz4KCQk8L21lbWJhbGxvb24+CgkJPGRpc2sgZGV2aWNlPSJkaXNrIiB0eXBlPSJmaWxlIiBtb2RlbD0idmlydGlvLW5vbi10cmFuc2l0aW9uYWwiPgoJCQk8c291cmNlIGZpbGU9Ii92YXIvcnVuL2t1YmV2aXJ0LWVwaGVtZXJhbC1kaXNrcy9kaXNrLWRhdGEvYm9vdC9kaXNrLnFjb3cyIj48L3NvdXJjZT4KCQkJPHRhcmdldCBidXM9InZpcnRpbyIgZGV2PSJ2ZGEiPjwvdGFyZ2V0PgoJCQk8ZHJpdmVyIGNhY2hlPSJub25lIiBlcnJvcl9wb2xpY3k9InN0b3AiIG5hbWU9InFlbXUiIHR5cGU9InFjb3cyIiBkaXNjYXJkPSJ1bm1hcCI+PC9kcml2ZXI+CgkJCTxhbGlhcyBuYW1lPSJ1YS1ib290Ij48L2FsaWFzPgoJCQk8YmFja2luZ1N0b3JlIHR5cGU9ImZpbGUiPgoJCQkJPGZvcm1hdCB0eXBlPSJxY293MiI+PC9mb3JtYXQ+CgkJCQk8c291cmNlIGZpbGU9Ii92YXIvcnVuL2t1YmV2aXJ0L2NvbnRhaW5lci1kaXNrcy9kaXNrXzAuaW1nIj48L3NvdXJjZT4KCQkJPC9iYWNraW5nU3RvcmU+CgkJPC9kaXNrPgoJCTxkaXNrIGRldmljZT0iZGlzayIgdHlwZT0iZmlsZSIgbW9kZWw9InZpcnRpby1ub24tdHJhbnNpdGlvbmFsIj4KCQkJPHNvdXJjZSBmaWxlPSIvdmFyL3J1bi9rdWJldmlydC1lcGhlbWVyYWwtZGlza3MvY2xvdWQtaW5pdC1kYXRhL2RlZmF1bHQvaW5zdGFuY2Uvbm9DbG91ZC5pc28iPjwvc291cmNlPgoJCQk8dGFyZ2V0IGJ1cz0idmlydGlvIiBkZXY9InZkYiI+PC90YXJnZXQ+CgkJCTxkcml2ZXIgY2FjaGU9Im5vbmUiIGVycm9yX3BvbGljeT0ic3RvcCIgbmFtZT0icWVtdSIgdHlwZT0icmF3IiBkaXNjYXJkPSJ1bm1hcCI+PC9kcml2ZXI+CgkJCTxhbGlhcyBuYW1lPSJ1YS1jbG91ZGluaXQiPjwvYWxpYXM+CgkJPC9kaXNrPgoJCTxzZXJpYWwgdHlwZT0idW5peCI+CgkJCTx0YXJnZXQgcG9ydD0iMCI+PC90YXJnZXQ+CgkJCTxzb3VyY2UgbW9kZT0iYmluZCIgcGF0aD0iL3Zhci9ydW4va3ViZXZpcnQtcHJpdmF0ZS9lYTdlNzdmN
S1jNjVlLTRhMDUtODUzNy1iYjVkMTA4YmI1YjAvdmlydC1zZXJpYWwwIj48L3NvdXJjZT4KCQk8L3NlcmlhbD4KCQk8Y29uc29sZSB0eXBlPSJwdHkiPgoJCQk8dGFyZ2V0IHR5cGU9InNlcmlhbCIgcG9ydD0iMCI+PC90YXJnZXQ+CgkJPC9jb25zb2xlPgoJPC9kZXZpY2VzPgoJPG1ldGFkYXRhPgoJCTxrdWJldmlydCB4bWxucz0iaHR0cDovL2t1YmV2aXJ0LmlvIj4KCQkJPHVpZD5lYTdlNzdmNS1jNjVlLTRhMDUtODUzNy1iYjVkMTA4YmI1YjA8L3VpZD4KCQkJPGdyYWNlcGVyaW9kPgoJCQkJPGRlbGV0aW9uR3JhY2VQZXJpb2RTZWNvbmRzPjA8L2RlbGV0aW9uR3JhY2VQZXJpb2RTZWNvbmRzPgoJCQk8L2dyYWNlcGVyaW9kPgoJCTwva3ViZXZpcnQ+Cgk8L21ldGFkYXRhPgoJPGZlYXR1cmVzPgoJCTxhY3BpPjwvYWNwaT4KCTwvZmVhdHVyZXM+Cgk8Y3B1IG1vZGU9Imhvc3QtbW9kZWwiPgoJCTx0b3BvbG9neSBzb2NrZXRzPSIxIiBjb3Jlcz0iMSIgdGhyZWFkcz0iMSI+PC90b3BvbG9neT4KCTwvY3B1PgoJPHZjcHUgcGxhY2VtZW50PSJzdGF0aWMiPjE8L3ZjcHU+Cgk8aW90aHJlYWRzPjE8L2lvdGhyZWFkcz4KPC9kb21haW4+","name":"instance","namespace":"default","pos":"libvirt_helper.go:129","timestamp":"2022-06-20T00:18:47.670433Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Reaped pid 56 with status 0","pos":"virt-launcher.go:554","timestamp":"2022-06-20T00:18:47.741775Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Reaped pid 58 with status 9","pos":"virt-launcher.go:554","timestamp":"2022-06-20T00:18:47.897997Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 0 with reason 0 received","pos":"client.go:435","timestamp":"2022-06-20T00:18:47.924552Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Domain defined.","name":"instance","namespace":"default","pos":"manager.go:860","timestamp":"2022-06-20T00:18:47.924394Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Shutoff(5):Unknown(0)","pos":"client.go:288","timestamp":"2022-06-20T00:18:47.933709Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Successfully connected to domain notify socket at /var/run/kubevirt/domain-notify-pipe.sock","pos":"client.go:167","timestamp":"2022-06-20T00:18:47.940508Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Monitoring loop: rate 1s start timeout 5m1s","pos":"monitor.go:178","timestamp":"2022-06-20T00:18:47.963545Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:18:47.969088Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"generated nocloud iso file /var/run/kubevirt-ephemeral-disks/cloud-init-data/default/instance/noCloud.iso","pos":"cloud-init.go:655","timestamp":"2022-06-20T00:18:48.079337Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"error","msg":"At least one cgroup controller is required: No such device or address","pos":"virCgroupDetectControllers:455","subcomponent":"libvirt","thread":"25","timestamp":"2022-06-20T00:18:48.109000Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"GuestAgentLifecycle event state 2 with reason 1 received","pos":"client.go:492","timestamp":"2022-06-20T00:18:48.211358Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Paused(3):StartingUp(11)","pos":"client.go:288","timestamp":"2022-06-20T00:18:48.215524Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:18:48.226275Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 4 with reason 0 received","pos":"client.go:435","timestamp":"2022-06-20T00:18:48.681143Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 2 with reason 0 received","pos":"client.go:435","timestamp":"2022-06-20T00:18:48.685453Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Domain started.","name":"instance","namespace":"default","pos":"manager.go:888","timestamp":"2022-06-20T00:18:48.688573Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Synced vmi","name":"instance","namespace":"default","pos":"server.go:190","timestamp":"2022-06-20T00:18:48.693312Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Running(1):Unknown(1)","pos":"client.go:288","timestamp":"2022-06-20T00:18:48.696253Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:18:48.709012Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Running(1):Unknown(1)","pos":"client.go:288","timestamp":"2022-06-20T00:18:48.712722Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:18:48.715574Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Hardware emulation device '/dev/kvm' not present. Using software emulation.","pos":"converter.go:1194","timestamp":"2022-06-20T00:18:48.737087Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"In-kernel virtio-net device emulation '/dev/vhost-net' not present. Falling back to QEMU userland emulation.","pos":"converter.go:1208","timestamp":"2022-06-20T00:18:48.737236Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Synced vmi","name":"instance","namespace":"default","pos":"server.go:190","timestamp":"2022-06-20T00:18:48.739512Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Hardware emulation device '/dev/kvm' not present. Using software emulation.","pos":"converter.go:1194","timestamp":"2022-06-20T00:18:48.846229Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"In-kernel virtio-net device emulation '/dev/vhost-net' not present. Falling back to QEMU userland emulation.","pos":"converter.go:1208","timestamp":"2022-06-20T00:18:48.846909Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Synced vmi","name":"instance","namespace":"default","pos":"server.go:190","timestamp":"2022-06-20T00:18:48.851173Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Hardware emulation device '/dev/kvm' not present. Using software emulation.","pos":"converter.go:1194","timestamp":"2022-06-20T00:18:48.880962Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"In-kernel virtio-net device emulation '/dev/vhost-net' not present. Falling back to QEMU userland emulation.","pos":"converter.go:1208","timestamp":"2022-06-20T00:18:48.883678Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Synced vmi","name":"instance","namespace":"default","pos":"server.go:190","timestamp":"2022-06-20T00:18:48.888717Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"2022-06-20 00:18:48.103+0000: starting up libvirt version: 7.6.0, package: 6.el8s (CBS \[email protected]\u003e, 2021-10-29-15:04:36, ), qemu version: 6.0.0qemu-kvm-6.0.0-33.el8s, kernel: 5.13.0-1029-azure, hostname: instance","subcomponent":"qemu","timestamp":"2022-06-20T00:18:48.930165Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"LC_ALL=C \\PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \\HOME=/var/lib/libvirt/qemu/domain-1-default_instance \\XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-default_instance/.local/share \\XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-default_instance/.cache \\XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-default_instance/.config \\/usr/libexec/qemu-kvm \\-name guest=default_instance,debug-threads=on \\-S \\-object '{\"qom-type\":\"secret\",\"id\":\"masterKey0\",\"format\":\"raw\",\"file\":\"/var/lib/libvirt/qemu/domain-1-default_instance/master-key.aes\"}' \\-machine pc-q35-rhel8.5.0,accel=tcg,usb=off,dump-guest-core=off,memory-backend=pc.ram \\-cpu EPYC,acpi=on,ss=on,monitor=on,hypervisor=on,erms=on,mpx=on,pcommit=on,clwb=on,pku=on,la57=on,3dnowext=on,3dnow=on,npt=on,vme=off,fma=off,avx=off,f16c=off,avx2=off,rdseed=off,sha-ni=off,xsavec=off,fxsr-opt=off,misalignsse=off,3dnowprefetch=off,osvw=off,topoext=off,nrip-save=off \\-m 2048 \\-object '{\"qom-type\":\"memory-backend-ram\",\"id\":\"pc.ram\",\"size\":2147483648}' \\-overcommit mem-lock=off \\-smp 1,sockets=1,dies=1,cores=1,threads=1 \\-object '{\"qom-type\":\"iothread\",\"id\":\"iothread1\"}' \\-uuid 09282c40-dd16-5956-9212-defc51c8374f \\-smbios type=1,manufacturer=KubeVirt,product=None,uuid=09282c40-dd16-5956-9212-defc51c8374f,family=KubeVirt \\-no-user-config \\-nodefaults \\-chardev socket,id=charmonitor,fd=19,server=on,wait=off \\-mon chardev=charmonitor,id=monitor,mode=control \\-rtc base=utc \\-no-shutdown \\-boot strict=on \\-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \\-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \\-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \\-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \\-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \\-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \\-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \\-device virtio-scsi-pci-non-transitional,id=scsi0,bus=pci.2,addr=0x0 \\-device virtio-serial-pci-non-transitional,id=virtio-serial0,bus=pci.3,addr=0x0 \\-blockdev '{\"driver\":\"file\",\"filename\":\"/var/run/kubevirt/container-disks/disk_0.img\",\"node-name\":\"libvirt-3-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}' \\-blockdev '{\"node-name\":\"libvirt-3-format\",\"read-only\":true,\"discard\":\"unmap\",\"cache\":{\"direct\":true,\"no-flush\":false},\"driver\":\"qcow2\",\"file\":\"libvirt-3-storage\"}' \\-blockdev '{\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-ephemeral-disks/disk-data/boot/disk.qcow2\",\"node-name\":\"libvirt-2-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}' \\-blockdev '{\"node-name\":\"libvirt-2-format\",\"read-only\":false,\"discard\":\"unmap\",\"cache\":{\"direct\":true,\"no-flush\":false},\"driver\":\"qcow2\",\"file\":\"libvirt-2-storage\",\"backing\":\"libvirt-3-format\"}' \\-device virtio-blk-pci-non-transitional,bus=pci.4,addr=0x0,drive=libvirt-2-format,id=ua-boot,bootindex=1,write-cache=on,werror=stop,rerror=stop \\-blockdev 
'{\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-ephemeral-disks/cloud-init-data/default/instance/noCloud.iso\",\"node-name\":\"libvirt-1-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}' \\-blockdev '{\"node-name\":\"libvirt-1-format\",\"read-only\":false,\"discard\":\"unmap\",\"cache\":{\"direct\":true,\"no-flush\":false},\"driver\":\"raw\",\"file\":\"libvirt-1-storage\"}' \\-device virtio-blk-pci-non-transitional,bus=pci.5,addr=0x0,drive=libvirt-1-format,id=ua-cloudinit,write-cache=on,werror=stop,rerror=stop \\-netdev tap,fd=21,id=hostua-default \\-device virtio-net-pci-non-transitional,host_mtu=1500,netdev=hostua-default,id=ua-default,mac=02:42:ac:11:00:10,bus=pci.1,addr=0x0,romfile= \\-chardev socket,id=charserial0,fd=22,server=on,wait=off \\-device isa-serial,chardev=charserial0,id=serial0 \\-chardev socket,id=charchannel0,fd=23,server=on,wait=off \\-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \\-audiodev id=audio1,driver=none \\-vnc vnc=unix:/var/run/kubevirt-private/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/virt-vnc,audiodev=audio1 \\-device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \\-device virtio-balloon-pci-non-transitional,id=balloon0,bus=pci.6,addr=0x0 \\-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \\-msg timestamp=on","subcomponent":"qemu","timestamp":"2022-06-20T00:18:48.933685Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Found PID for 09282c40-dd16-5956-9212-defc51c8374f: 68","pos":"monitor.go:139","timestamp":"2022-06-20T00:18:48.971076Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Hardware emulation device '/dev/kvm' not present. Using software emulation.","pos":"converter.go:1194","timestamp":"2022-06-20T00:19:46.997307Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"In-kernel virtio-net device emulation '/dev/vhost-net' not present. Falling back to QEMU userland emulation.","pos":"converter.go:1208","timestamp":"2022-06-20T00:19:46.997386Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Synced vmi","name":"instance","namespace":"default","pos":"server.go:190","timestamp":"2022-06-20T00:19:47.001274Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Received signal terminated","pos":"virt-launcher.go:495","timestamp":"2022-06-20T00:24:17.885088Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Reaped pid 68 with status 0","pos":"virt-launcher.go:554","timestamp":"2022-06-20T00:24:18.012709Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 6 with reason 2 received","pos":"client.go:435","timestamp":"2022-06-20T00:24:18.029745Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Domain stopped.","name":"instance","namespace":"default","pos":"manager.go:1443","timestamp":"2022-06-20T00:24:18.035843Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Signaled vmi kill","name":"instance","namespace":"default","pos":"server.go:293","timestamp":"2022-06-20T00:24:18.036378Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 5 with reason 1 received","pos":"client.go:435","timestamp":"2022-06-20T00:24:18.046375Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"VirtualMachineInstance","level":"info","msg":"Domain XML generated. Base64 dump <domain type="qemu">
	<name>default_instance</name>
	<uuid>09282c40-dd16-5956-9212-defc51c8374f</uuid>
	<memory unit="KiB">2097152</memory>
	<os>
		<type arch="x86_64" machine="pc-q35-rhel8.5.0">hvm</type>
		<smbios mode="sysinfo"></smbios>
		<boot dev="hd"></boot>
	</os>
	<sysinfo type="smbios">
		<system>
			<entry name="manufacturer">KubeVirt</entry>
			<entry name="product">None</entry>
			<entry name="uuid">09282c40-dd16-5956-9212-defc51c8374f</entry>
			<entry name="family">KubeVirt</entry>
		</system>
		<bios></bios>
		<baseBoard></baseBoard>
		<chassis></chassis>
	</sysinfo>
	<devices>
		<emulator>/usr/libexec/qemu-kvm</emulator>
		<interface type="ethernet">
			<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"></address>
			<source></source>
			<target dev="tap0" managed="no"></target>
			<model type="virtio-non-transitional"></model>
			<mac address="02:42:ac:11:00:10"></mac>
			<mtu size="1500"></mtu>
			<alias name="ua-default"></alias>
			<rom enabled="no"></rom>
		</interface>
		<channel type="unix">
			<source mode="bind" path="/var/lib/libvirt/qemu/channel/target/domain-1-default_instance/org.qemu.guest_agent.0"></source>
			<target name="org.qemu.guest_agent.0" type="virtio" state="disconnected"></target>
		</channel>
		<controller type="usb" index="0" model="none">
			<alias name="usb"></alias>
		</controller>
		<controller type="scsi" index="0" model="virtio-non-transitional">
			<alias name="scsi0"></alias>
			<address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"></address>
		</controller>
		<controller type="virtio-serial" index="0" model="virtio-non-transitional">
			<alias name="virtio-serial0"></alias>
			<address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"></address>
		</controller>
		<controller type="sata" index="0">
			<alias name="ide"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"></address>
		</controller>
		<controller type="pci" index="0" model="pcie-root">
			<alias name="pcie.0"></alias>
		</controller>
		<controller type="pci" index="1" model="pcie-root-port">
			<alias name="pci.1"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0"></address>
		</controller>
		<controller type="pci" index="2" model="pcie-root-port">
			<alias name="pci.2"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"></address>
		</controller>
		<controller type="pci" index="3" model="pcie-root-port">
			<alias name="pci.3"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"></address>
		</controller>
		<controller type="pci" index="4" model="pcie-root-port">
			<alias name="pci.4"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"></address>
		</controller>
		<controller type="pci" index="5" model="pcie-root-port">
			<alias name="pci.5"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"></address>
		</controller>
		<controller type="pci" index="6" model="pcie-root-port">
			<alias name="pci.6"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"></address>
		</controller>
		<controller type="pci" index="7" model="pcie-root-port">
			<alias name="pci.7"></alias>
			<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"></address>
		</controller>
		<video>
			<model type="vga" heads="1" vram="16384"></model>
		</video>
		<graphics type="vnc">
			<listen type="socket" socket="/var/run/kubevirt-private/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/virt-vnc"></listen>
		</graphics>
		<memballoon model="virtio-non-transitional">
			<stats period="10"></stats>
			<address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"></address>
		</memballoon>
		<disk device="disk" type="file" model="virtio-non-transitional">
			<source file="/var/run/kubevirt-ephemeral-disks/disk-data/boot/disk.qcow2"></source>
			<target bus="virtio" dev="vda"></target>
			<driver cache="none" error_policy="stop" name="qemu" type="qcow2" discard="unmap"></driver>
			<alias name="ua-boot"></alias>
			<backingStore type="file">
				<format type="qcow2"></format>
				<source file="/var/run/kubevirt/container-disks/disk_0.img"></source>
			</backingStore>
			<address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"></address>
		</disk>
		<disk device="disk" type="file" model="virtio-non-transitional">
			<source file="/var/run/kubevirt-ephemeral-disks/cloud-init-data/default/instance/noCloud.iso"></source>
			<target bus="virtio" dev="vdb"></target>
			<driver cache="none" error_policy="stop" name="qemu" type="raw" discard="unmap"></driver>
			<alias name="ua-cloudinit"></alias>
			<backingStore></backingStore>
			<address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"></address>
		</disk>
		<input type="mouse" bus="ps2">
			<alias name="input0"></alias>
		</input>
		<input type="keyboard" bus="ps2">
			<alias name="input1"></alias>
		</input>
		<serial type="unix">
			<target port="0"></target>
			<source mode="bind" path="/var/run/kubevirt-private/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/virt-serial0"></source>
			<alias name="serial0"></alias>
		</serial>
		<console type="unix">
			<target type="serial" port="0"></target>
			<source mode="bind" path="/var/run/kubevirt-private/ea7e77f5-c65e-4a05-8537-bb5d108bb5b0/virt-serial0"></source>
			<alias name="serial0"></alias>
		</console>
	</devices>
	<clock offset="utc"></clock>
	<resource>
		<partition>/machine</partition>
	</resource>
	<metadata>
		<kubevirt xmlns="http://kubevirt.io">
			<uid>ea7e77f5-c65e-4a05-8537-bb5d108bb5b0</uid>
			<graceperiod>
				<deletionGracePeriodSeconds>0</deletionGracePeriodSeconds>
				<markedForGracefulShutdown>true</markedForGracefulShutdown>
			</graceperiod>
		</kubevirt>
	</metadata>
	<features>
		<acpi></acpi>
	</features>
	<cpu mode="custom">
		<model>EPYC</model>
		<feature name="acpi" policy="require"></feature>
		<feature name="ss" policy="require"></feature>
		<feature name="monitor" policy="require"></feature>
		<feature name="hypervisor" policy="require"></feature>
		<feature name="erms" policy="require"></feature>
		<feature name="mpx" policy="require"></feature>
		<feature name="pcommit" policy="require"></feature>
		<feature name="clwb" policy="require"></feature>
		<feature name="pku" policy="require"></feature>
		<feature name="la57" policy="require"></feature>
		<feature name="3dnowext" policy="require"></feature>
		<feature name="3dnow" policy="require"></feature>
		<feature name="npt" policy="require"></feature>
		<feature name="vme" policy="disable"></feature>
		<feature name="fma" policy="disable"></feature>
		<feature name="avx" policy="disable"></feature>
		<feature name="f16c" policy="disable"></feature>
		<feature name="avx2" policy="disable"></feature>
		<feature name="rdseed" policy="disable"></feature>
		<feature name="sha-ni" policy="disable"></feature>
		<feature name="xsavec" policy="disable"></feature>
		<feature name="fxsr_opt" policy="disable"></feature>
		<feature name="misalignsse" policy="disable"></feature>
		<feature name="3dnowprefetch" policy="disable"></feature>
		<feature name="osvw" policy="disable"></feature>
		<feature name="topoext" policy="disable"></feature>
		<feature name="nrip-save" policy="disable"></feature>
		<topology sockets="1" cores="1" threads="1"></topology>
	</cpu>
	<vcpu placement="static">1</vcpu>
	<iothreads>1</iothreads>
</domain>","name":"instance","namespace":"default","pos":"libvirt_helper.go:129","timestamp":"2022-06-20T00:24:18.051048Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Domain not running, paused or shut down, nothing to do.","name":"instance","namespace":"default","pos":"manager.go:1447","timestamp":"2022-06-20T00:24:18.052253Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Signaled vmi kill","name":"instance","namespace":"default","pos":"server.go:293","timestamp":"2022-06-20T00:24:18.052592Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Shutoff(5):Destroyed(2)","pos":"client.go:288","timestamp":"2022-06-20T00:24:18.053037Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:24:18.066974Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 0 with reason 1 received","pos":"client.go:435","timestamp":"2022-06-20T00:24:18.071268Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"VirtualMachineInstance","level":"info","msg":"Successfully signaled graceful shutdown","name":"instance","namespace":"default","pos":"virt-launcher.go:464","timestamp":"2022-06-20T00:24:18.071784Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Process 09282c40-dd16-5956-9212-defc51c8374f and pid 68 is gone!","pos":"monitor.go:148","timestamp":"2022-06-20T00:24:18.080208Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Waiting on final notifications to be sent to virt-handler.","pos":"virt-launcher.go:277","timestamp":"2022-06-20T00:24:18.080262Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Shutoff(5):Destroyed(2)","pos":"client.go:288","timestamp":"2022-06-20T00:24:18.074512Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:24:18.085896Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Domain undefined.","name":"instance","namespace":"default","pos":"manager.go:1470","timestamp":"2022-06-20T00:24:18.089516Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","kind":"","level":"info","msg":"Signaled vmi deletion","name":"instance","namespace":"default","pos":"server.go:329","timestamp":"2022-06-20T00:24:18.089661Z","uid":"ea7e77f5-c65e-4a05-8537-bb5d108bb5b0"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 1 with reason 0 received","pos":"client.go:435","timestamp":"2022-06-20T00:24:18.089610Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"error","msg":"failed to get domain metadata","pos":"libvirt_helper.go:167","reason":"virError(Code=42, Domain=10, Message='Domain not found: no domain with matching uuid '09282c40-dd16-5956-9212-defc51c8374f' (default_instance)')","timestamp":"2022-06-20T00:24:18.091006Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Shutoff(5):Destroyed(2)","pos":"client.go:288","timestamp":"2022-06-20T00:24:18.091558Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: default_instance","pos":"client.go:413","timestamp":"2022-06-20T00:24:18.097501Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Domain name event: ","pos":"client.go:413","timestamp":"2022-06-20T00:24:18.099477Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Final Delete notification sent","pos":"virt-launcher.go:292","timestamp":"2022-06-20T00:24:18.099507Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"stopping cmd server","pos":"server.go:580","timestamp":"2022-06-20T00:24:18.099976Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"error","msg":"timeout on stopping the cmd server, continuing anyway.","pos":"server.go:591","timestamp":"2022-06-20T00:24:19.100517Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Exiting...","pos":"virt-launcher.go:524","timestamp":"2022-06-20T00:24:19.100614Z"} | |
default/virt-launcher-instance-c8n66[compute]: {"component":"virt-launcher","level":"info","msg":"Reaped pid 14 with status 0","pos":"virt-launcher.go:554","timestamp":"2022-06-20T00:24:19.106035Z"} | |
default/molecule-knmpm[molecule]: molecule_kubevirt/test/test_init.py . [ 40%] | |
default/molecule-knmpm[molecule]: molecule_kubevirt/test/test_scenario_tests.py ... [100%] | |
default/molecule-knmpm[molecule]: | |
default/molecule-knmpm[molecule]: ============================= slowest 10 durations ============================= | |
default/molecule-knmpm[molecule]: 373.98s call molecule_kubevirt/test/test_init.py::test_command_init_and_test_scenario | |
default/molecule-knmpm[molecule]: 23.19s setup molecule_kubevirt/test/test_scenario_tests.py::TestClass::test_instance_spec[kube-public-instance-full-instance-full-notmolecule] | |
default/molecule-knmpm[molecule]: 21.54s teardown molecule_kubevirt/test/test_scenario_tests.py::TestClass::test_instance_spec[default-instance-running-false--molecule] | |
default/molecule-knmpm[molecule]: 0.21s call molecule_kubevirt/test/test_scenario_tests.py::TestClass::test_instance_spec[default-instance-almost-default-instance-almost-default-molecule] | |
default/molecule-knmpm[molecule]: 0.20s call molecule_kubevirt/test/test_scenario_tests.py::TestClass::test_instance_spec[kube-public-instance-full-instance-full-notmolecule] | |
default/molecule-knmpm[molecule]: 0.10s call molecule_kubevirt/test/test_scenario_tests.py::TestClass::test_instance_spec[default-instance-running-false--molecule] | |
default/molecule-knmpm[molecule]: 0.02s call molecule_kubevirt/test/test_driver.py::test_driver_is_detected | |
default/molecule-knmpm[molecule]: | |
default/molecule-knmpm[molecule]: (3 durations < 0.005s hidden. Use -vv to show these durations.) | |
default/molecule-knmpm[molecule]: ======================== 5 passed in 420.67s (0:07:00) ========================= | |
default/molecule-knmpm[molecule]: ___________________________________ summary ____________________________________ | |
default/molecule-knmpm[molecule]: py38-ansible_4: commands succeeded | |
default/molecule-knmpm[molecule]: congratulations :) | |
default/molecule-knmpm[molecule]: configmap/molecule-result created | |
default/molecule-knmpm[molecule]: configmap "molecule-job-running" deleted |
[ 0.000000] Linux version 5.13.0-1029-azure (buildd@lcy02-amd64-051) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #34~20.04.1-Ubuntu SMP Thu Jun 9 12:37:07 UTC 2022 (Ubuntu 5.13.0-1029.34~20.04.1-azure 5.13.19) | |
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
[ 0.000000] KERNEL supported cpus: | |
[ 0.000000] Intel GenuineIntel | |
[ 0.000000] AMD AuthenticAMD | |
[ 0.000000] Hygon HygonGenuine | |
[ 0.000000] Centaur CentaurHauls | |
[ 0.000000] zhaoxin Shanghai | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' | |
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' | |
[ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 | |
[ 0.000000] x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 | |
[ 0.000000] x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 | |
[ 0.000000] x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 | |
[ 0.000000] x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 | |
[ 0.000000] x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 | |
[ 0.000000] x86/fpu: Enabled xstate features 0xff, context size is 2560 bytes, using 'compacted' format. | |
[ 0.000000] BIOS-provided physical RAM map: | |
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable | |
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved | |
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000003ffeffff] usable | |
[ 0.000000] BIOS-e820: [mem 0x000000003fff0000-0x000000003fffefff] ACPI data | |
[ 0.000000] BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] ACPI NVS | |
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable | |
[ 0.000000] printk: bootconsole [earlyser0] enabled | |
[ 0.000000] NX (Execute Disable) protection: active | |
[ 0.000000] SMBIOS 2.3 present. | |
[ 0.000000] DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018 | |
[ 0.000000] Hypervisor detected: Microsoft Hyper-V | |
[ 0.000000] Hyper-V: privilege flags low 0x2e7f, high 0x3880b0, hints 0x60c2c, misc 0xed7b2 | |
[ 0.000000] Hyper-V Host Build:18362-10.0-3-0.3446 | |
[ 0.000000] Hyper-V: LAPIC Timer Frequency: 0xc3500 | |
[ 0.000000] Hyper-V: Using hypercall for remote TLB flush | |
[ 0.000000] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns | |
[ 0.000003] tsc: Marking TSC unstable due to running on Hyper-V | |
[ 0.003947] tsc: Detected 2095.196 MHz processor | |
[ 0.007461] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved | |
[ 0.007465] e820: remove [mem 0x000a0000-0x000fffff] usable | |
[ 0.007471] last_pfn = 0x280000 max_arch_pfn = 0x400000000 | |
[ 0.011169] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT | |
[ 0.015674] e820: update [mem 0x40000000-0xffffffff] usable ==> reserved | |
[ 0.017712] last_pfn = 0x3fff0 max_arch_pfn = 0x400000000 | |
[ 0.037293] found SMP MP-table at [mem 0x000ff780-0x000ff78f] | |
[ 0.041879] Using GB pages for direct mapping | |
[ 0.045225] ACPI: Early table checksum verification disabled | |
[ 0.048903] ACPI: RSDP 0x00000000000F5C00 000014 (v00 ACPIAM) | |
[ 0.053107] ACPI: RSDT 0x000000003FFF0000 000040 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.059059] ACPI: FACP 0x000000003FFF0200 000081 (v02 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.064931] ACPI: DSDT 0x000000003FFF1D24 003CD5 (v01 MSFTVM MSFTVM02 00000002 INTL 02002026) | |
[ 0.070524] ACPI: FACS 0x000000003FFFF000 000040 | |
[ 0.073665] ACPI: WAET 0x000000003FFF1A80 000028 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.079293] ACPI: SLIC 0x000000003FFF1AC0 000176 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.084927] ACPI: OEM0 0x000000003FFF1CC0 000064 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.090581] ACPI: SRAT 0x000000003FFF0800 000140 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) | |
[ 0.096149] ACPI: APIC 0x000000003FFF0300 000062 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.101541] ACPI: OEMB 0x000000003FFFF040 000064 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
[ 0.107261] ACPI: Reserving FACP table memory at [mem 0x3fff0200-0x3fff0280] | |
[ 0.111937] ACPI: Reserving DSDT table memory at [mem 0x3fff1d24-0x3fff59f8] | |
[ 0.116717] ACPI: Reserving FACS table memory at [mem 0x3ffff000-0x3ffff03f] | |
[ 0.122401] ACPI: Reserving WAET table memory at [mem 0x3fff1a80-0x3fff1aa7] | |
[ 0.127642] ACPI: Reserving SLIC table memory at [mem 0x3fff1ac0-0x3fff1c35] | |
[ 0.132692] ACPI: Reserving OEM0 table memory at [mem 0x3fff1cc0-0x3fff1d23] | |
[ 0.137512] ACPI: Reserving SRAT table memory at [mem 0x3fff0800-0x3fff093f] | |
[ 0.142124] ACPI: Reserving APIC table memory at [mem 0x3fff0300-0x3fff0361] | |
[ 0.146929] ACPI: Reserving OEMB table memory at [mem 0x3ffff040-0x3ffff0a3] | |
[ 0.151563] ACPI: Local APIC address 0xfee00000 | |
[ 0.151648] SRAT: PXM 0 -> APIC 0x00 -> Node 0 | |
[ 0.153929] SRAT: PXM 0 -> APIC 0x01 -> Node 0 | |
[ 0.156022] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug | |
[ 0.159039] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x27fffffff] hotplug | |
[ 0.162709] ACPI: SRAT: Node 0 PXM 0 [mem 0x280200000-0xfdfffffff] hotplug | |
[ 0.167239] ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug | |
[ 0.171701] ACPI: SRAT: Node 0 PXM 0 [mem 0x10000200000-0x1ffffffffff] hotplug | |
[ 0.176391] ACPI: SRAT: Node 0 PXM 0 [mem 0x20000200000-0x3ffffffffff] hotplug | |
[ 0.181214] NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x27fffffff] -> [mem 0x00000000-0x27fffffff] | |
[ 0.188131] NODE_DATA(0) allocated [mem 0x27ffd6000-0x27fffffff] | |
[ 0.192416] Zone ranges: | |
[ 0.194120] DMA [mem 0x0000000000001000-0x0000000000ffffff] | |
[ 0.198051] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] | |
[ 0.202033] Normal [mem 0x0000000100000000-0x000000027fffffff] | |
[ 0.206140] Device empty | |
[ 0.208000] Movable zone start for each node | |
[ 0.211357] Early memory node ranges | |
[ 0.213920] node 0: [mem 0x0000000000001000-0x000000000009efff] | |
[ 0.217904] node 0: [mem 0x0000000000100000-0x000000003ffeffff] | |
[ 0.221816] node 0: [mem 0x0000000100000000-0x000000027fffffff] | |
[ 0.225853] Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff] | |
[ 0.230517] On node 0 totalpages: 1834894 | |
[ 0.230519] DMA zone: 64 pages used for memmap | |
[ 0.230520] DMA zone: 158 pages reserved | |
[ 0.230521] DMA zone: 3998 pages, LIFO batch:0 | |
[ 0.230522] DMA32 zone: 4032 pages used for memmap | |
[ 0.230523] DMA32 zone: 258032 pages, LIFO batch:63 | |
[ 0.230524] Normal zone: 24576 pages used for memmap | |
[ 0.230525] Normal zone: 1572864 pages, LIFO batch:63 | |
[ 0.230528] On node 0, zone DMA: 1 pages in unavailable ranges | |
[ 0.230557] On node 0, zone DMA: 97 pages in unavailable ranges | |
[ 0.246881] On node 0, zone Normal: 16 pages in unavailable ranges | |
[ 0.255245] ACPI: PM-Timer IO Port: 0x408 | |
[ 0.262257] ACPI: Local APIC address 0xfee00000 | |
[ 0.262266] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) | |
[ 0.266656] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 | |
[ 0.270967] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) | |
[ 0.274506] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) | |
[ 0.278668] ACPI: IRQ0 used by override. | |
[ 0.278669] ACPI: IRQ9 used by override. | |
[ 0.278672] Using ACPI (MADT) for SMP configuration information | |
[ 0.282268] smpboot: Allowing 2 CPUs, 0 hotplug CPUs | |
[ 0.285528] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] | |
[ 0.290404] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] | |
[ 0.295457] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000dffff] | |
[ 0.300336] PM: hibernation: Registered nosave memory: [mem 0x000e0000-0x000fffff] | |
[ 0.305240] PM: hibernation: Registered nosave memory: [mem 0x3fff0000-0x3fffefff] | |
[ 0.310017] PM: hibernation: Registered nosave memory: [mem 0x3ffff000-0x3fffffff] | |
[ 0.315020] PM: hibernation: Registered nosave memory: [mem 0x40000000-0xffffffff] | |
[ 0.319897] [mem 0x40000000-0xffffffff] available for PCI devices | |
[ 0.323855] Booting paravirtualized kernel on Hyper-V | |
[ 0.327232] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns | |
[ 0.334359] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 | |
[ 0.340195] percpu: Embedded 63 pages/cpu s221184 r8192 d28672 u1048576 | |
[ 0.345437] pcpu-alloc: s221184 r8192 d28672 u1048576 alloc=1*2097152 | |
[ 0.345440] pcpu-alloc: [0] 0 1 | |
[ 0.345457] Hyper-V: PV spinlocks enabled | |
[ 0.348191] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) | |
[ 0.353030] Built 1 zonelists, mobility grouping on. Total pages: 1806064 | |
[ 0.358293] Policy zone: Normal | |
[ 0.360395] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
[ 0.372818] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) | |
[ 0.379554] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) | |
[ 0.384857] mem auto-init: stack:off, heap alloc:on, heap free:off | |
[ 0.419629] Memory: 7105684K/7339576K available (14346K kernel code, 3432K rwdata, 9780K rodata, 2608K init, 6104K bss, 233632K reserved, 0K cma-reserved) | |
[ 0.431854] random: get_random_u64 called from __kmem_cache_create+0x2d/0x440 with crng_init=0 | |
[ 0.431953] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 | |
[ 0.442442] Kernel/User page tables isolation: enabled | |
[ 0.446133] ftrace: allocating 46678 entries in 183 pages | |
[ 0.467444] ftrace: allocated 183 pages with 6 groups | |
[ 0.471091] rcu: Hierarchical RCU implementation. | |
[ 0.474330] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=2. | |
[ 0.479241] Rude variant of Tasks RCU enabled. | |
[ 0.482380] Tracing variant of Tasks RCU enabled. | |
[ 0.485788] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies. | |
[ 0.491001] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 | |
[ 0.500164] NR_IRQS: 524544, nr_irqs: 440, preallocated irqs: 16 | |
[ 0.505278] random: crng done (trusting CPU's manufacturer) | |
[ 0.514815] Console: colour VGA+ 80x25 | |
[ 0.594069] printk: console [tty1] enabled | |
[ 0.597555] printk: console [ttyS0] enabled | |
[ 0.603581] printk: bootconsole [earlyser0] disabled | |
[ 0.610807] ACPI: Core revision 20210331 | |
[ 0.614069] APIC: Switch to symmetric I/O mode setup | |
[ 0.618311] Hyper-V: Using IPI hypercalls | |
[ 0.622119] Hyper-V: Using enlightened APIC (xapic mode) | |
[ 0.634002] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 | |
[ 0.643239] Calibrating delay loop (skipped), value calculated using timer frequency.. 4190.39 BogoMIPS (lpj=8380784) | |
[ 0.647237] pid_max: default: 32768 minimum: 301 | |
[ 0.651258] LSM: Security Framework initializing | |
[ 0.654986] Yama: becoming mindful. | |
[ 0.655236] AppArmor: AppArmor initialized | |
[ 0.655272] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) | |
[ 0.659236] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) | |
[ 0.669065] Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 | |
[ 0.671237] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 | |
[ 0.675236] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization | |
[ 0.675237] Spectre V2 : Mitigation: Retpolines | |
[ 0.678825] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch | |
[ 0.679236] Speculative Store Bypass: Vulnerable | |
[ 0.682742] TAA: Mitigation: Clear CPU buffers | |
[ 0.683236] MDS: Mitigation: Clear CPU buffers | |
[ 0.689835] Freeing SMP alternatives memory: 40K | |
[ 0.691484] smpboot: CPU0: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x4) | |
[ 0.695403] Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. | |
[ 0.699272] rcu: Hierarchical SRCU implementation. | |
[ 0.703385] NMI watchdog: Perf NMI watchdog permanently disabled | |
[ 0.707291] smp: Bringing up secondary CPUs ... | |
[ 0.711140] x86: Booting SMP configuration: | |
[ 0.711240] .... node #0, CPUs: #1 | |
[ 0.711786] smp: Brought up 1 node, 2 CPUs | |
[ 0.718887] smpboot: Max logical packages: 1 | |
[ 0.719240] smpboot: Total of 2 processors activated (8380.78 BogoMIPS) | |
[ 0.723671] devtmpfs: initialized | |
[ 0.727236] x86/mm: Memory block size: 128MB | |
[ 0.731825] PM: Registering ACPI NVS region [mem 0x3ffff000-0x3fffffff] (4096 bytes) | |
[ 0.739286] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns | |
[ 0.751255] futex hash table entries: 512 (order: 3, 32768 bytes, linear) | |
[ 0.755406] pinctrl core: initialized pinctrl subsystem | |
[ 0.760151] PM: RTC time: 00:09:29, date: 2022-06-20 | |
[ 0.767428] NET: Registered protocol family 16 | |
[ 0.771400] DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations | |
[ 0.775472] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations | |
[ 0.783434] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations | |
[ 0.787251] audit: initializing netlink subsys (disabled) | |
[ 0.791295] audit: type=2000 audit(1655683769.152:1): state=initialized audit_enabled=0 res=1 | |
[ 0.791471] thermal_sys: Registered thermal governor 'fair_share' | |
[ 0.795242] thermal_sys: Registered thermal governor 'bang_bang' | |
[ 0.799239] thermal_sys: Registered thermal governor 'step_wise' | |
[ 0.803238] thermal_sys: Registered thermal governor 'user_space' | |
[ 0.807239] thermal_sys: Registered thermal governor 'power_allocator' | |
[ 0.811247] EISA bus registered | |
[ 0.817951] cpuidle: using governor ladder | |
[ 0.819241] cpuidle: using governor menu | |
[ 0.822949] ACPI: bus type PCI registered | |
[ 0.823259] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 | |
[ 0.827774] PCI: Using configuration type 1 for base access | |
[ 0.832657] Kprobes globally optimized | |
[ 0.835299] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages | |
[ 0.839247] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages | |
[ 0.847322] ACPI: Added _OSI(Module Device) | |
[ 0.851241] ACPI: Added _OSI(Processor Device) | |
[ 0.855044] ACPI: Added _OSI(3.0 _SCP Extensions) | |
[ 0.859243] ACPI: Added _OSI(Processor Aggregator Device) | |
[ 0.863240] ACPI: Added _OSI(Linux-Dell-Video) | |
[ 0.866962] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) | |
[ 0.871240] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) | |
[ 0.876515] ACPI: 1 ACPI AML tables successfully acquired and loaded | |
[ 0.881390] ACPI: Interpreter enabled | |
[ 0.887250] ACPI: (supports S0 S5) | |
[ 0.890348] ACPI: Using IOAPIC for interrupt routing | |
[ 0.891261] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug | |
[ 0.899471] ACPI: Enabled 1 GPEs in block 00 to 0F | |
[ 0.922192] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) | |
[ 0.927244] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] | |
[ 0.935248] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. | |
[ 0.943407] PCI host bridge to bus 0000:00 | |
[ 0.946747] pci_bus 0000:00: root bus resource [bus 00-ff] | |
[ 0.951241] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] | |
[ 0.959241] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] | |
[ 0.963248] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] | |
[ 0.971277] pci_bus 0000:00: root bus resource [mem 0x40000000-0xfffbffff window] | |
[ 0.975239] pci_bus 0000:00: root bus resource [mem 0xfe0000000-0xfffffffff window] | |
[ 0.983488] pci 0000:00:00.0: [8086:7192] type 00 class 0x060000 | |
[ 0.991170] pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 | |
[ 0.998980] pci 0000:00:07.1: [8086:7111] type 00 class 0x010180 | |
[ 1.005837] pci 0000:00:07.1: reg 0x20: [io 0xffa0-0xffaf] | |
[ 1.012233] pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] | |
[ 1.015239] pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] | |
[ 1.023239] pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] | |
[ 1.027239] pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] | |
[ 1.031925] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, | |
* this clock source is slow. Consider trying other clock sources | |
[ 1.043241] pci 0000:00:07.3: acpi_pm_check_blacklist+0x0/0x20 took 11718 usecs | |
[ 1.047239] pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 | |
[ 1.058480] pci 0000:00:07.3: quirk: [io 0x0400-0x043f] claimed by PIIX4 ACPI | |
[ 1.064526] pci 0000:00:08.0: [1414:5353] type 00 class 0x030000 | |
[ 1.067930] pci 0000:00:08.0: reg 0x10: [mem 0xf8000000-0xfbffffff] | |
[ 1.077288] pci 0000:00:08.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] | |
[ 1.098813] ACPI: PCI: Interrupt link LNKA configured for IRQ 11 | |
[ 1.103481] ACPI: PCI: Interrupt link LNKB configured for IRQ 0 | |
[ 1.111241] ACPI: PCI: Interrupt link LNKB disabled | |
[ 1.115250] ACPI: PCI: Interrupt link LNKC configured for IRQ 0 | |
[ 1.119239] ACPI: PCI: Interrupt link LNKC disabled | |
[ 1.123319] ACPI: PCI: Interrupt link LNKD configured for IRQ 0 | |
[ 1.127239] ACPI: PCI: Interrupt link LNKD disabled | |
[ 1.131379] iommu: Default domain type: Translated | |
[ 1.135397] SCSI subsystem initialized | |
[ 1.138665] libata version 3.00 loaded. | |
[ 1.138665] pci 0000:00:08.0: vgaarb: setting as boot VGA device | |
[ 1.139236] pci 0000:00:08.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none | |
[ 1.147253] pci 0000:00:08.0: vgaarb: bridge control possible | |
[ 1.151240] vgaarb: loaded | |
[ 1.153784] ACPI: bus type USB registered | |
[ 1.159256] usbcore: registered new interface driver usbfs | |
[ 1.163244] usbcore: registered new interface driver hub | |
[ 1.167248] usbcore: registered new device driver usb | |
[ 1.171254] pps_core: LinuxPPS API ver. 1 registered | |
[ 1.175239] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <[email protected]> | |
[ 1.183241] PTP clock support registered | |
[ 1.187290] EDAC MC: Ver: 3.0.0 | |
[ 1.191934] hv_vmbus: Vmbus version:4.0 | |
[ 1.195290] hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60 | |
[ 1.199738] NetLabel: Initializing | |
[ 1.202361] NetLabel: domain hash size = 128 | |
[ 1.203238] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO | |
[ 1.207253] NetLabel: unlabeled traffic allowed by default | |
[ 1.211274] PCI: Using ACPI for IRQ routing | |
[ 1.214597] PCI: pci_cache_line_size set to 64 bytes | |
[ 1.215103] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] | |
[ 1.215105] e820: reserve RAM buffer [mem 0x3fff0000-0x3fffffff] | |
[ 1.215327] clocksource: Switched to clocksource hyperv_clocksource_tsc_page | |
[ 1.233113] VFS: Disk quotas dquot_6.6.0 | |
[ 1.236625] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) | |
[ 1.242163] AppArmor: AppArmor Filesystem Enabled | |
[ 1.246012] pnp: PnP ACPI init | |
[ 1.248792] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) | |
[ 1.248840] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) | |
[ 1.248881] pnp 00:02: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) | |
[ 1.249610] pnp 00:03: [dma 0 disabled] | |
[ 1.249634] pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active) | |
[ 1.250411] pnp 00:04: [dma 0 disabled] | |
[ 1.250432] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active) | |
[ 1.251085] pnp 00:05: [dma 2] | |
[ 1.251131] pnp 00:05: Plug and Play ACPI device, IDs PNP0700 (active) | |
[ 1.251168] system 00:06: [io 0x01e0-0x01ef] has been reserved | |
[ 1.255876] system 00:06: [io 0x0160-0x016f] has been reserved | |
[ 1.261066] system 00:06: [io 0x0278-0x027f] has been reserved | |
[ 1.265610] system 00:06: [io 0x0378-0x037f] has been reserved | |
[ 1.270172] system 00:06: [io 0x0678-0x067f] has been reserved | |
[ 1.274668] system 00:06: [io 0x0778-0x077f] has been reserved | |
[ 1.279293] system 00:06: [io 0x04d0-0x04d1] has been reserved | |
[ 1.284494] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active) | |
[ 1.284635] system 00:07: [io 0x0400-0x043f] has been reserved | |
[ 1.289569] system 00:07: [io 0x0370-0x0371] has been reserved | |
[ 1.294219] system 00:07: [io 0x0440-0x044f] has been reserved | |
[ 1.299006] system 00:07: [mem 0xfec00000-0xfec00fff] could not be reserved | |
[ 1.305015] system 00:07: [mem 0xfee00000-0xfee00fff] has been reserved | |
[ 1.310557] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active) | |
[ 1.310651] system 00:08: [mem 0x00000000-0x0009ffff] could not be reserved | |
[ 1.315768] system 00:08: [mem 0x000c0000-0x000dffff] could not be reserved | |
[ 1.321686] system 00:08: [mem 0x000e0000-0x000fffff] could not be reserved | |
[ 1.328973] system 00:08: [mem 0x00100000-0x3fffffff] could not be reserved | |
[ 1.334464] system 00:08: [mem 0xfffc0000-0xffffffff] has been reserved | |
[ 1.339778] system 00:08: Plug and Play ACPI device, IDs PNP0c01 (active) | |
[ 1.340045] pnp: PnP ACPI: found 9 devices | |
[ 1.350659] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns | |
[ 1.357522] NET: Registered protocol family 2 | |
[ 1.361501] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) | |
[ 1.368607] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) | |
[ 1.375122] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) | |
[ 1.381290] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) | |
[ 1.386786] TCP: Hash tables configured (established 65536 bind 65536) | |
[ 1.392093] MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear) | |
[ 1.397618] UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) | |
[ 1.402978] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) | |
[ 1.408458] NET: Registered protocol family 1 | |
[ 1.411954] NET: Registered protocol family 44 | |
[ 1.415639] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] | |
[ 1.420688] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] | |
[ 1.425489] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] | |
[ 1.430895] pci_bus 0000:00: resource 7 [mem 0x40000000-0xfffbffff window] | |
[ 1.435974] pci_bus 0000:00: resource 8 [mem 0xfe0000000-0xfffffffff window] | |
[ 1.441564] pci 0000:00:00.0: Limiting direct PCI/PCI transfers | |
[ 1.446435] PCI: CLS 0 bytes, default 64 | |
[ 1.449756] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) | |
[ 1.454686] software IO TLB: mapped [mem 0x000000003bff0000-0x000000003fff0000] (64MB) | |
[ 1.461244] Initialise system trusted keyrings | |
[ 1.467715] Key type blacklist registered | |
[ 1.472088] workingset: timestamp_bits=36 max_order=21 bucket_order=0 | |
[ 1.479500] zbud: loaded | |
[ 1.483084] squashfs: version 4.0 (2009/01/31) Phillip Lougher | |
[ 1.489495] fuse: init (API version 7.34) | |
[ 1.493809] integrity: Platform Keyring initialized | |
[ 1.508299] Key type asymmetric registered | |
[ 1.511850] Asymmetric key parser 'x509' registered | |
[ 1.515651] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 244) | |
[ 1.521342] io scheduler mq-deadline registered | |
[ 1.526057] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 | |
[ 1.532006] hv_vmbus: registering driver hv_pci | |
[ 1.536365] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 | |
[ 1.543606] ACPI: button: Power Button [PWRF] | |
[ 1.548913] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled | |
[ 1.585747] 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A | |
[ 1.621384] 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A | |
[ 1.629535] Linux agpgart interface v0.103 | |
[ 1.759346] loop: module loaded | |
[ 1.763522] hv_vmbus: registering driver hv_storvsc | |
[ 1.768769] ata_piix 0000:00:07.1: version 2.13 | |
[ 1.769339] ata_piix 0000:00:07.1: Hyper-V Virtual Machine detected, ATA device ignore set | |
[ 1.775353] scsi host0: storvsc_host_t | |
[ 1.777881] scsi host3: storvsc_host_t | |
[ 1.781835] scsi host4: ata_piix | |
[ 1.785505] scsi host2: storvsc_host_t | |
[ 1.788420] scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 | |
[ 1.792139] scsi host1: storvsc_host_t | |
[ 1.798871] scsi host5: ata_piix | |
[ 1.802323] scsi: waiting for bus probes to complete ... | |
[ 1.805352] ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14 | |
[ 1.815387] ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15 | |
[ 1.815587] sd 0:0:0:0: Attached scsi generic sg0 type 0 | |
[ 1.820882] tun: Universal TUN/TAP device driver, 1.6 | |
[ 1.830389] PPP generic driver version 2.4.2 | |
[ 1.831493] sd 0:0:0:0: [sda] 180355072 512-byte logical blocks: (92.3 GB/86.0 GiB) | |
[ 1.834492] VFIO - User Level meta-driver version: 0.3 | |
[ 1.840523] sd 0:0:0:0: [sda] 4096-byte physical blocks | |
[ 1.844928] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 | |
[ 1.849547] sd 0:0:0:0: [sda] Write Protect is off | |
[ 1.859768] sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 | |
[ 1.859930] sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA | |
[ 1.861908] serio: i8042 KBD port at 0x60,0x64 irq 1 | |
[ 1.871116] serio: i8042 AUX port at 0x60,0x64 irq 12 | |
[ 1.876766] scsi 1:0:1:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 | |
[ 1.879269] mousedev: PS/2 mouse device common for all mice | |
[ 1.885180] sda: sda1 sda14 sda15 | |
[ 1.889618] rtc_cmos 00:00: RTC can wake from S4 | |
[ 1.897488] sd 1:0:1:0: Attached scsi generic sg1 type 0 | |
[ 1.901049] rtc_cmos 00:00: registered as rtc0 | |
[ 1.906971] rtc_cmos 00:00: setting system clock to 2022-06-20T00:09:30 UTC (1655683770) | |
[ 1.906981] sd 1:0:1:0: [sdb] 29360128 512-byte logical blocks: (15.0 GB/14.0 GiB) | |
[ 1.913258] rtc_cmos 00:00: alarms up to one month, 114 bytes nvram | |
[ 1.924677] device-mapper: uevent: version 1.0.3 | |
[ 1.924702] sd 1:0:1:0: [sdb] Write Protect is off | |
[ 1.930269] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 | |
[ 1.933307] sd 1:0:1:0: [sdb] Mode Sense: 0f 00 10 00 | |
[ 1.940560] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: [email protected] | |
[ 1.943526] sd 0:0:0:0: [sda] Attached SCSI disk | |
[ 1.947552] platform eisa.0: Probing EISA bus 0 | |
[ 1.955489] platform eisa.0: EISA: Cannot allocate resource for mainboard | |
[ 1.955497] sd 1:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA | |
[ 1.960667] platform eisa.0: Cannot allocate resource for EISA slot 1 | |
[ 1.960669] platform eisa.0: Cannot allocate resource for EISA slot 2 | |
[ 1.960669] platform eisa.0: Cannot allocate resource for EISA slot 3 | |
[ 1.960670] platform eisa.0: Cannot allocate resource for EISA slot 4 | |
[ 1.960671] platform eisa.0: Cannot allocate resource for EISA slot 5 | |
[ 1.960672] platform eisa.0: Cannot allocate resource for EISA slot 6 | |
[ 1.960673] platform eisa.0: Cannot allocate resource for EISA slot 7 | |
[ 1.990987] ata1.01: host indicates ignore ATA devices, ignored | |
[ 1.993908] platform eisa.0: Cannot allocate resource for EISA slot 8 | |
[ 1.993910] platform eisa.0: EISA: Detected 0 cards | |
[ 1.993913] intel_pstate: CPU model not supported | |
[ 1.993996] drop_monitor: Initializing network drop monitor service | |
[ 2.001369] ata1.00: host indicates ignore ATA devices, ignored | |
[ 2.005757] NET: Registered protocol family 10 | |
[ 2.030051] Segment Routing with IPv6 | |
[ 2.030169] sdb: sdb1 | |
[ 2.034025] NET: Registered protocol family 17 | |
[ 2.041143] Key type dns_resolver registered | |
[ 2.045072] No MBM correction factor available | |
[ 2.048937] IPI shorthand broadcast: enabled | |
[ 2.052429] sched_clock: Marking stable (1918415800, 133997700)->(2135540900, -83127400) | |
[ 2.059117] registered taskstats version 1 | |
[ 2.062796] Loading compiled-in X.509 certificates | |
[ 2.067580] Loaded X.509 cert 'Build time autogenerated kernel key: 07f5640c3d7bf043074dc27d0b5799302e473486' | |
[ 2.075411] Loaded X.509 cert 'Canonical Ltd. Live Patch Signing: 14df34d1a87cf37625abec039ef2bf521249b969' | |
[ 2.083182] Loaded X.509 cert 'Canonical Ltd. Kernel Module Signing: 88f752e560a1e0737e31163a466ad7b70a850c19' | |
[ 2.090710] blacklist: Loading compiled-in revocation X.509 certificates | |
[ 2.096163] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing: 61482aa2830d0ab2ad5af10b7250da9033ddcef0' | |
[ 2.115323] zswap: loaded using pool lzo/zbud | |
[ 2.119178] Key type ._fscrypt registered | |
[ 2.122476] Key type .fscrypt registered | |
[ 2.127703] Key type fscrypt-provisioning registered | |
[ 2.131587] sd 1:0:1:0: [sdb] Attached SCSI disk | |
[ 2.135937] Key type encrypted registered | |
[ 2.139275] AppArmor: AppArmor sha1 policy hashing enabled | |
[ 2.143402] ima: No TPM chip found, activating TPM-bypass! | |
[ 2.147673] Loading compiled-in module X.509 certificates | |
[ 2.152851] Loaded X.509 cert 'Build time autogenerated kernel key: 07f5640c3d7bf043074dc27d0b5799302e473486' | |
[ 2.159620] ima: Allocated hash algorithm: sha1 | |
[ 2.163299] ima: No architecture policies found | |
[ 2.166847] evm: Initialising EVM extended attributes: | |
[ 2.170616] evm: security.selinux | |
[ 2.173375] evm: security.SMACK64 | |
[ 2.176151] evm: security.SMACK64EXEC | |
[ 2.179079] evm: security.SMACK64TRANSMUTE | |
[ 2.184144] evm: security.SMACK64MMAP | |
[ 2.188524] evm: security.apparmor | |
[ 2.192711] evm: security.ima | |
[ 2.195873] evm: security.capability | |
[ 2.199639] evm: HMAC attrs: 0x1 | |
[ 2.204404] PM: Magic number: 10:116:153 | |
[ 2.210374] RAS: Correctable Errors collector initialized. | |
[ 2.217450] md: Waiting for all devices to be available before autodetect | |
[ 2.224175] md: If you don't use raid, use raid=noautodetect | |
[ 2.230137] md: Autodetecting RAID arrays. | |
[ 2.234718] md: autorun ... | |
[ 2.237950] md: ... autorun DONE. | |
[ 2.245406] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. | |
[ 2.254982] VFS: Mounted root (ext4 filesystem) readonly on device 8:1. | |
[ 2.261710] devtmpfs: mounted | |
[ 2.266005] Freeing unused decrypted memory: 2036K | |
[ 2.271993] Freeing unused kernel image (initmem) memory: 2608K | |
[ 2.283320] Write protecting the kernel read-only data: 26624k | |
[ 2.290314] Freeing unused kernel image (text/rodata gap) memory: 2036K | |
[ 2.297019] Freeing unused kernel image (rodata/data gap) memory: 460K | |
[ 2.367601] x86/mm: Checked W+X mappings: passed, no W+X pages found. | |
[ 2.372927] x86/mm: Checking user space page tables | |
[ 2.434835] x86/mm: Checked W+X mappings: passed, no W+X pages found. | |
[ 2.440223] Run /sbin/init as init process | |
[ 2.443204] with arguments: | |
[ 2.443205] /sbin/init | |
[ 2.443206] with environment: | |
[ 2.443207] HOME=/ | |
[ 2.443208] TERM=linux | |
[ 2.443209] BOOT_IMAGE=/boot/vmlinuz-5.13.0-1029-azure | |
[ 2.549627] systemd[1]: Inserted module 'autofs4' | |
[ 2.568658] systemd[1]: systemd 245.4-4ubuntu3.17 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid) | |
[ 2.580375] systemd[1]: Detected virtualization microsoft. | |
[ 2.583537] systemd[1]: Detected architecture x86-64. | |
[ 2.604092] systemd[1]: Set hostname to <fv-az72-309>. | |
[ 2.777938] systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
[ 2.786328] systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
[ 2.798000] systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
[ 2.855931] systemd[1]: Unnecessary job for /sys/devices/virtual/misc/vmbus!hv_fcopy was removed. | |
[ 2.862804] systemd[1]: Unnecessary job for /sys/devices/virtual/misc/vmbus!hv_vss was removed. | |
[ 2.870773] systemd[1]: Created slice Slice for Azure VM Agent and Extensions. | |
[ 2.887880] systemd[1]: Created slice system-modprobe.slice. | |
[ 2.896196] systemd[1]: Created slice system-serial\x2dgetty.slice. | |
[ 2.905037] systemd[1]: Created slice system-systemd\x2dfsck.slice. | |
[ 2.914151] systemd[1]: Created slice User and Session Slice. | |
[ 2.922529] systemd[1]: Started Forward Password Requests to Wall Directory Watch. | |
[ 2.933020] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. | |
[ 2.944377] systemd[1]: Reached target User and Group Name Lookups. | |
[ 2.954032] systemd[1]: Reached target Slices. | |
[ 2.960848] systemd[1]: Reached target Swap. | |
[ 2.966910] systemd[1]: Reached target System Time Set. | |
[ 2.974011] systemd[1]: Listening on Device-mapper event daemon FIFOs. | |
[ 2.984313] systemd[1]: Listening on LVM2 poll daemon socket. | |
[ 2.992189] systemd[1]: Listening on multipathd control socket. | |
[ 3.000530] systemd[1]: Listening on Syslog Socket. | |
[ 3.007510] systemd[1]: Listening on fsck to fsckd communication Socket. | |
[ 3.016525] systemd[1]: Listening on initctl Compatibility Named Pipe. | |
[ 3.026784] systemd[1]: Listening on Journal Audit Socket. | |
[ 3.034444] systemd[1]: Listening on Journal Socket (/dev/log). | |
[ 3.042507] systemd[1]: Listening on Journal Socket. | |
[ 3.049607] systemd[1]: Listening on Network Service Netlink Socket. | |
[ 3.058458] systemd[1]: Listening on udev Control Socket. | |
[ 3.068440] systemd[1]: Listening on udev Kernel Socket. | |
[ 3.076832] systemd[1]: Mounting Huge Pages File System... | |
[ 3.085236] systemd[1]: Mounting POSIX Message Queue File System... | |
[ 3.096003] systemd[1]: Mounting Kernel Debug File System... | |
[ 3.104508] systemd[1]: Mounting Kernel Trace File System... | |
[ 3.113953] systemd[1]: Starting Journal Service... | |
[ 3.122599] systemd[1]: Starting Set the console keyboard layout... | |
[ 3.132799] systemd[1]: Starting Create list of static device nodes for the current kernel... | |
[ 3.145710] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... | |
[ 3.164138] systemd[1]: Starting Load Kernel Module drm... | |
[ 3.176384] systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped. | |
[ 3.185582] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped. | |
[ 3.193555] systemd[1]: Starting File System Check on Root Device... | |
[ 3.208194] systemd[1]: Starting Load Kernel Modules... | |
[ 3.220951] systemd[1]: Starting udev Coldplug all Devices... | |
[ 3.230545] systemd[1]: Starting Uncomplicated firewall... | |
[ 3.240173] systemd[1]: Starting Setup network rules for WALinuxAgent... | |
[ 3.251005] systemd[1]: Mounted Huge Pages File System. | |
[ 3.258335] systemd[1]: Mounted POSIX Message Queue File System. | |
[ 3.263139] IPMI message handler: version 39.2 | |
[ 3.269778] systemd[1]: Mounted Kernel Debug File System. | |
[ 3.277611] systemd[1]: Started Journal Service. | |
[ 3.285221] ipmi device interface | |
[ 3.390859] EXT4-fs (sda1): re-mounted. Opts: discard. Quota mode: none. | |
[ 3.415983] systemd-journald[189]: Received client request to flush runtime journal. | |
[ 3.774545] hv_vmbus: registering driver hyperv_fb | |
[ 3.775538] hyperv_fb: Synthvid Version major 3, minor 5 | |
[ 3.775586] hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 | |
[ 3.775590] hyperv_fb: Unable to allocate enough contiguous physical memory on Gen 1 VM. Using MMIO instead. | |
[ 3.778864] Console: switching to colour frame buffer device 128x48 | |
[ 3.791655] hid: raw HID events driver (C) Jiri Kosina | |
[ 3.794288] hv_vmbus: registering driver hyperv_keyboard | |
[ 3.794936] input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:07/VMBUS:01/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio2/input/input3 | |
[ 3.795652] hv_vmbus: registering driver hid_hyperv | |
[ 3.796114] input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input4 | |
[ 3.796162] hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on | |
[ 3.800680] hv_vmbus: registering driver hv_balloon | |
[ 3.801323] hv_balloon: Using Dynamic Memory protocol version 2.0 | |
[ 3.802112] hv_utils: Registering HyperV Utility Driver | |
[ 3.802114] hv_vmbus: registering driver hv_utils | |
[ 3.802498] hv_utils: Heartbeat IC version 3.0 | |
[ 3.802894] hv_utils: TimeSync IC version 4.0 | |
[ 3.803014] hv_utils: Shutdown IC version 3.2 | |
[ 3.835525] hv_vmbus: registering driver hv_netvsc | |
[ 3.908860] cryptd: max_cpu_qlen set to 1000 | |
[ 3.918334] AVX2 version of gcm_enc/dec engaged. | |
[ 3.918392] AES CTR mode by8 optimization enabled | |
[ 4.049638] bpfilter: Loaded bpfilter_umh pid 297 | |
[ 4.051439] Started bpfilter | |
[ 4.130138] hv_utils: KVP IC version 4.0 | |
[ 4.477911] alua: device handler registered | |
[ 4.478997] emc: device handler registered | |
[ 4.480520] rdac: device handler registered | |
[ 4.534218] loop0: detected capacity change from 0 to 126824 | |
[ 4.547211] loop1: detected capacity change from 0 to 96160 | |
[ 4.560510] loop2: detected capacity change from 0 to 138880 | |
[ 4.663418] audit: type=1400 audit(1655683776.240:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lsb_release" pid=438 comm="apparmor_parser" | |
[ 4.670890] audit: type=1400 audit(1655683776.248:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=437 comm="apparmor_parser" | |
[ 4.670895] audit: type=1400 audit(1655683776.248:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=437 comm="apparmor_parser" | |
[ 4.670897] audit: type=1400 audit(1655683776.248:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=437 comm="apparmor_parser" | |
[ 4.670899] audit: type=1400 audit(1655683776.248:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/{,usr/}sbin/dhclient" pid=437 comm="apparmor_parser" | |
[ 4.673295] audit: type=1400 audit(1655683776.252:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/haveged" pid=441 comm="apparmor_parser" | |
[ 4.681775] audit: type=1400 audit(1655683776.260:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/mysqld" pid=444 comm="apparmor_parser" | |
[ 4.687573] audit: type=1400 audit(1655683776.264:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=445 comm="apparmor_parser" | |
[ 4.687582] audit: type=1400 audit(1655683776.264:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=445 comm="apparmor_parser" | |
[ 5.528109] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready | |
[ 8.388243] sdb: sdb1 | |
[ 9.290071] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. | |
[ 10.751284] Adding 4194300k swap on /mnt/swapfile. Priority:-2 extents:9 across:4505596k FS | |
[ 11.884750] aufs 5.x-rcN-20210809 | |
[ 13.098160] kauditd_printk_skb: 22 callbacks suppressed | |
[ 13.098165] audit: type=1400 audit(1655683784.674:33): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=1019 comm="apparmor_parser" | |
[ 13.605076] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. | |
[ 13.610458] Bridge firewalling registered | |
[ 13.847391] Initializing XFRM netlink socket | |
[ 15.370396] loop3: detected capacity change from 0 to 8 | |
[ 51.904594] hv_balloon: Max. dynamic memory size: 7168 MB | |
[ 122.843239] docker0: port 1(vethf1b0b28) entered blocking state | |
[ 122.843243] docker0: port 1(vethf1b0b28) entered disabled state | |
[ 122.843949] device vethf1b0b28 entered promiscuous mode | |
[ 122.853633] docker0: port 1(vethf1b0b28) entered blocking state | |
[ 122.853637] docker0: port 1(vethf1b0b28) entered forwarding state | |
[ 122.853683] docker0: port 1(vethf1b0b28) entered disabled state | |
[ 123.050679] eth0: renamed from veth63d8682 | |
[ 123.070371] IPv6: ADDRCONF(NETDEV_CHANGE): vethf1b0b28: link becomes ready | |
[ 123.070415] docker0: port 1(vethf1b0b28) entered blocking state | |
[ 123.070417] docker0: port 1(vethf1b0b28) entered forwarding state | |
[ 123.070449] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready | |
[ 123.145475] docker0: port 1(vethf1b0b28) entered disabled state | |
[ 123.146381] veth63d8682: renamed from eth0 | |
[ 123.185593] docker0: port 1(vethf1b0b28) entered disabled state | |
[ 123.186189] device vethf1b0b28 left promiscuous mode | |
[ 123.186194] docker0: port 1(vethf1b0b28) entered disabled state | |
[ 123.313707] docker0: port 1(vethfb268c1) entered blocking state | |
[ 123.313712] docker0: port 1(vethfb268c1) entered disabled state | |
[ 123.313796] device vethfb268c1 entered promiscuous mode | |
[ 123.313978] docker0: port 1(vethfb268c1) entered blocking state | |
[ 123.314003] docker0: port 1(vethfb268c1) entered forwarding state | |
[ 123.314098] docker0: port 1(vethfb268c1) entered disabled state | |
[ 123.515861] eth0: renamed from vethecca8aa | |
[ 123.534376] IPv6: ADDRCONF(NETDEV_CHANGE): vethfb268c1: link becomes ready | |
[ 123.534420] docker0: port 1(vethfb268c1) entered blocking state | |
[ 123.534423] docker0: port 1(vethfb268c1) entered forwarding state | |
[ 127.321950] docker0: port 1(vethfb268c1) entered disabled state | |
[ 127.323417] vethecca8aa: renamed from eth0 | |
[ 127.381466] docker0: port 1(vethfb268c1) entered disabled state | |
[ 127.382218] device vethfb268c1 left promiscuous mode | |
[ 127.382222] docker0: port 1(vethfb268c1) entered disabled state | |
[ 130.977631] br-d1e9d479f443: port 1(vethc0cce7e) entered blocking state | |
[ 130.977636] br-d1e9d479f443: port 1(vethc0cce7e) entered disabled state | |
[ 130.979736] device vethc0cce7e entered promiscuous mode | |
[ 131.314967] eth0: renamed from veth37c7cf5 | |
[ 131.333674] IPv6: ADDRCONF(NETDEV_CHANGE): vethc0cce7e: link becomes ready | |
[ 131.333714] br-d1e9d479f443: port 1(vethc0cce7e) entered blocking state | |
[ 131.333718] br-d1e9d479f443: port 1(vethc0cce7e) entered forwarding state | |
[ 131.333751] IPv6: ADDRCONF(NETDEV_CHANGE): br-d1e9d479f443: link becomes ready | |
[ 131.942844] systemd-journald[180]: Received client request to flush runtime journal. | |
[ 162.921398] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) | |
[ 162.921414] IPVS: Connection hash table configured (size=4096, memory=64Kbytes) | |
[ 162.921504] IPVS: ipvs loaded. | |
[ 162.926542] IPVS: [rr] scheduler registered. | |
[ 162.930988] IPVS: [wrr] scheduler registered. | |
[ 162.934508] IPVS: [sh] scheduler registered. | |
[ 163.823643] docker0: port 1(vethf3c0380) entered blocking state | |
[ 163.823648] docker0: port 1(vethf3c0380) entered disabled state | |
[ 163.823689] device vethf3c0380 entered promiscuous mode | |
[ 164.071128] eth0: renamed from veth76d993e | |
[ 164.087408] docker0: port 1(vethf3c0380) entered blocking state | |
[ 164.087413] docker0: port 1(vethf3c0380) entered forwarding state | |
[ 173.054451] audit: type=1400 audit(1655683944.647:34): apparmor="STATUS" operation="profile_load" profile="unconfined" name="virt-aa-helper" pid=11337 comm="apparmor_parser" | |
[ 173.129330] audit: type=1400 audit(1655683944.719:35): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirtd" pid=11343 comm="apparmor_parser" | |
[ 173.130841] audit: type=1400 audit(1655683944.723:36): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirtd//qemu_bridge_helper" pid=11343 comm="apparmor_parser" | |
[ 175.918861] virbr0: port 1(virbr0-nic) entered blocking state | |
[ 175.918866] virbr0: port 1(virbr0-nic) entered disabled state | |
[ 175.918954] device virbr0-nic entered promiscuous mode | |
[ 176.287042] virbr0: port 1(virbr0-nic) entered blocking state | |
[ 176.287047] virbr0: port 1(virbr0-nic) entered listening state | |
[ 176.353283] virbr0: port 1(virbr0-nic) entered disabled state | |
[ 187.974699] audit: type=1400 audit(1655683959.568:37): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=12686 comm="apparmor_parser" | |
[ 187.974706] audit: type=1400 audit(1655683959.568:38): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=12686 comm="apparmor_parser" | |
[ 187.974709] audit: type=1400 audit(1655683959.568:39): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=12686 comm="apparmor_parser" | |
[ 187.974711] audit: type=1400 audit(1655683959.568:40): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/{,usr/}sbin/dhclient" pid=12686 comm="apparmor_parser" | |
[ 187.984624] audit: type=1400 audit(1655683959.576:41): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lsb_release" pid=12689 comm="apparmor_parser" | |
[ 188.003895] audit: type=1400 audit(1655683959.596:42): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/sbin/haveged" pid=12698 comm="apparmor_parser" | |
[ 188.017221] audit: type=1400 audit(1655683959.612:43): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/sbin/mysqld" pid=12701 comm="apparmor_parser" | |
[ 188.036963] audit: type=1400 audit(1655683959.632:44): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="virt-aa-helper" pid=12706 comm="apparmor_parser" | |
[ 188.041828] audit: type=1400 audit(1655683959.636:45): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/man" pid=12713 comm="apparmor_parser" | |
[ 188.041832] audit: type=1400 audit(1655683959.636:46): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="man_filter" pid=12713 comm="apparmor_parser" | |
[ 189.269764] docker0: port 2(vethf6689f9) entered blocking state | |
[ 189.269768] docker0: port 2(vethf6689f9) entered disabled state | |
[ 189.269823] device vethf6689f9 entered promiscuous mode | |
[ 189.388186] docker0: port 3(vethfeb4b6f) entered blocking state | |
[ 189.388191] docker0: port 3(vethfeb4b6f) entered disabled state | |
[ 189.388251] device vethfeb4b6f entered promiscuous mode | |
[ 189.388344] docker0: port 3(vethfeb4b6f) entered blocking state | |
[ 189.388346] docker0: port 3(vethfeb4b6f) entered forwarding state | |
[ 189.629134] eth0: renamed from vethc766298 | |
[ 189.649308] docker0: port 3(vethfeb4b6f) entered disabled state | |
[ 189.649341] docker0: port 2(vethf6689f9) entered blocking state | |
[ 189.649343] docker0: port 2(vethf6689f9) entered forwarding state | |
[ 189.681035] eth0: renamed from vethe33eb6d | |
[ 189.697651] docker0: port 3(vethfeb4b6f) entered blocking state | |
[ 189.697656] docker0: port 3(vethfeb4b6f) entered forwarding state | |
[ 195.824288] docker0: port 4(vethc7ff58d) entered blocking state | |
[ 195.824292] docker0: port 4(vethc7ff58d) entered disabled state | |
[ 195.824365] device vethc7ff58d entered promiscuous mode | |
[ 196.104666] eth0: renamed from veth9d45b28 | |
[ 196.124660] docker0: port 4(vethc7ff58d) entered blocking state | |
[ 196.124665] docker0: port 4(vethc7ff58d) entered forwarding state | |
[ 200.822457] docker0: port 5(vethdc7f949) entered blocking state | |
[ 200.822462] docker0: port 5(vethdc7f949) entered disabled state | |
[ 200.822743] device vethdc7f949 entered promiscuous mode | |
[ 202.012140] eth0: renamed from veth212df7a | |
[ 202.024114] docker0: port 5(vethdc7f949) entered blocking state | |
[ 202.024119] docker0: port 5(vethdc7f949) entered forwarding state | |
[ 215.628628] docker0: port 6(vethe779ac0) entered blocking state | |
[ 215.628633] docker0: port 6(vethe779ac0) entered disabled state | |
[ 215.628694] device vethe779ac0 entered promiscuous mode | |
[ 215.871345] eth0: renamed from veth24127ed | |
[ 215.890951] docker0: port 6(vethe779ac0) entered blocking state | |
[ 215.890955] docker0: port 6(vethe779ac0) entered forwarding state | |
[ 222.610724] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. | |
[ 222.610728] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <[email protected]>. All Rights Reserved. | |
[ 223.025912] docker0: port 6(vethe779ac0) entered disabled state | |
[ 223.025979] veth24127ed: renamed from eth0 | |
[ 223.064463] docker0: port 6(vethe779ac0) entered disabled state | |
[ 223.064999] device vethe779ac0 left promiscuous mode | |
[ 223.065003] docker0: port 6(vethe779ac0) entered disabled state | |
[ 224.763707] ipip: IPv4 and MPLS over IPv4 tunneling driver | |
[ 227.448013] docker0: port 6(vethb75fa03) entered blocking state | |
[ 227.448019] docker0: port 6(vethb75fa03) entered disabled state | |
[ 227.448077] device vethb75fa03 entered promiscuous mode | |
[ 227.878045] eth0: renamed from vethc1a763e | |
[ 227.894005] docker0: port 6(vethb75fa03) entered blocking state | |
[ 227.894010] docker0: port 6(vethb75fa03) entered forwarding state | |
[ 230.428500] docker0: port 7(veth360360f) entered blocking state | |
[ 230.428505] docker0: port 7(veth360360f) entered disabled state | |
[ 230.428548] device veth360360f entered promiscuous mode | |
[ 231.230069] eth0: renamed from veth002f52a | |
[ 231.249762] docker0: port 7(veth360360f) entered blocking state | |
[ 231.249776] docker0: port 7(veth360360f) entered forwarding state | |
[ 239.109008] docker0: port 8(veth551f867) entered blocking state | |
[ 239.109013] docker0: port 8(veth551f867) entered disabled state | |
[ 239.109069] device veth551f867 entered promiscuous mode | |
[ 239.139110] docker0: port 9(vethcd694ff) entered blocking state | |
[ 239.139114] docker0: port 9(vethcd694ff) entered disabled state | |
[ 239.139162] device vethcd694ff entered promiscuous mode | |
[ 239.139266] docker0: port 9(vethcd694ff) entered blocking state | |
[ 239.139268] docker0: port 9(vethcd694ff) entered forwarding state | |
[ 240.108830] docker0: port 9(vethcd694ff) entered disabled state | |
[ 240.701186] eth0: renamed from veth573781b | |
[ 240.726081] docker0: port 9(vethcd694ff) entered blocking state | |
[ 240.726089] docker0: port 9(vethcd694ff) entered forwarding state | |
[ 241.056848] eth0: renamed from veth9960fe9 | |
[ 241.073199] docker0: port 8(veth551f867) entered blocking state | |
[ 241.073205] docker0: port 8(veth551f867) entered forwarding state | |
[ 250.702648] docker0: port 10(veth395917c) entered blocking state | |
[ 250.702658] docker0: port 10(veth395917c) entered disabled state | |
[ 250.702830] device veth395917c entered promiscuous mode | |
[ 251.432454] eth0: renamed from vethf3a465a | |
[ 251.440359] docker0: port 10(veth395917c) entered blocking state | |
[ 251.440364] docker0: port 10(veth395917c) entered forwarding state | |
[ 275.850895] docker0: port 11(veth8ce295f) entered blocking state | |
[ 275.850900] docker0: port 11(veth8ce295f) entered disabled state | |
[ 275.850951] device veth8ce295f entered promiscuous mode | |
[ 276.090310] eth0: renamed from veth0aa0305 | |
[ 276.106200] docker0: port 11(veth8ce295f) entered blocking state | |
[ 276.106206] docker0: port 11(veth8ce295f) entered forwarding state | |
[ 280.206137] docker0: port 11(veth8ce295f) entered disabled state | |
[ 280.206263] veth0aa0305: renamed from eth0 | |
[ 280.240524] docker0: port 11(veth8ce295f) entered disabled state | |
[ 280.241174] device veth8ce295f left promiscuous mode | |
[ 280.241178] docker0: port 11(veth8ce295f) entered disabled state | |
[ 282.364781] docker0: port 11(veth6e9387e) entered blocking state | |
[ 282.364786] docker0: port 11(veth6e9387e) entered disabled state | |
[ 282.364901] device veth6e9387e entered promiscuous mode | |
[ 282.605794] eth0: renamed from veth52d3a39 | |
[ 282.621692] docker0: port 11(veth6e9387e) entered blocking state | |
[ 282.621697] docker0: port 11(veth6e9387e) entered forwarding state | |
[ 283.136309] docker0: port 11(veth6e9387e) entered disabled state | |
[ 283.136874] veth52d3a39: renamed from eth0 | |
[ 283.167483] docker0: port 11(veth6e9387e) entered disabled state | |
[ 283.168004] device veth6e9387e left promiscuous mode | |
[ 283.168008] docker0: port 11(veth6e9387e) entered disabled state | |
[ 283.969138] docker0: port 11(veth2163595) entered blocking state | |
[ 283.969142] docker0: port 11(veth2163595) entered disabled state | |
[ 283.969216] device veth2163595 entered promiscuous mode | |
[ 283.969924] docker0: port 11(veth2163595) entered blocking state | |
[ 283.969927] docker0: port 11(veth2163595) entered forwarding state | |
[ 284.077179] docker0: port 12(vethdbba819) entered blocking state | |
[ 284.077184] docker0: port 12(vethdbba819) entered disabled state | |
[ 284.077505] device vethdbba819 entered promiscuous mode | |
[ 284.078146] docker0: port 12(vethdbba819) entered blocking state | |
[ 284.078150] docker0: port 12(vethdbba819) entered forwarding state | |
[ 284.101116] docker0: port 13(veth90d3104) entered blocking state | |
[ 284.101120] docker0: port 13(veth90d3104) entered disabled state | |
[ 284.101644] device veth90d3104 entered promiscuous mode | |
[ 284.102245] docker0: port 13(veth90d3104) entered blocking state | |
[ 284.102248] docker0: port 13(veth90d3104) entered forwarding state | |
[ 284.397307] docker0: port 11(veth2163595) entered disabled state | |
[ 284.397405] docker0: port 12(vethdbba819) entered disabled state | |
[ 284.397450] docker0: port 13(veth90d3104) entered disabled state | |
[ 284.414239] eth0: renamed from vethac29a90 | |
[ 284.442930] docker0: port 11(veth2163595) entered blocking state | |
[ 284.442934] docker0: port 11(veth2163595) entered forwarding state | |
[ 284.697655] eth0: renamed from vethe9eb585 | |
[ 284.717491] docker0: port 12(vethdbba819) entered blocking state | |
[ 284.717496] docker0: port 12(vethdbba819) entered forwarding state | |
[ 284.734121] eth0: renamed from veth0690f1b | |
[ 284.750154] docker0: port 13(veth90d3104) entered blocking state | |
[ 284.750158] docker0: port 13(veth90d3104) entered forwarding state | |
[ 346.412689] docker0: port 14(veth31f2917) entered blocking state | |
[ 346.412694] docker0: port 14(veth31f2917) entered disabled state | |
[ 346.413054] device veth31f2917 entered promiscuous mode | |
[ 346.644782] eth0: renamed from veth9a7409d | |
[ 346.656799] docker0: port 14(veth31f2917) entered blocking state | |
[ 346.656806] docker0: port 14(veth31f2917) entered forwarding state | |
[ 540.604561] docker0: port 15(veth4420858) entered blocking state | |
[ 540.604565] docker0: port 15(veth4420858) entered disabled state | |
[ 540.604748] device veth4420858 entered promiscuous mode | |
[ 541.037465] eth0: renamed from veth0c65c92 | |
[ 541.057314] docker0: port 15(veth4420858) entered blocking state | |
[ 541.057320] docker0: port 15(veth4420858) entered forwarding state | |
[ 555.856654] docker0: port 15(veth4420858) entered disabled state | |
[ 555.857255] eth0-nic: renamed from eth0 | |
[ 555.907532] k6t-eth0: port 1(eth0-nic) entered blocking state | |
[ 555.907536] k6t-eth0: port 1(eth0-nic) entered disabled state | |
[ 555.907600] device eth0-nic entered promiscuous mode | |
[ 556.001164] k6t-eth0: port 2(tap0) entered blocking state | |
[ 556.001170] k6t-eth0: port 2(tap0) entered disabled state | |
[ 556.001252] device tap0 entered promiscuous mode | |
[ 556.001348] k6t-eth0: port 2(tap0) entered blocking state | |
[ 556.001351] k6t-eth0: port 2(tap0) entered forwarding state | |
[ 556.001527] k6t-eth0: port 1(eth0-nic) entered blocking state | |
[ 556.001530] k6t-eth0: port 1(eth0-nic) entered forwarding state | |
[ 556.001557] docker0: port 15(veth4420858) entered blocking state | |
[ 556.001559] docker0: port 15(veth4420858) entered forwarding state | |
[ 886.184925] k6t-eth0: port 2(tap0) entered disabled state | |
[ 887.911689] veth0c65c92: renamed from eth0 | |
[ 888.021265] device tap0 left promiscuous mode | |
[ 888.021291] k6t-eth0: port 2(tap0) entered disabled state | |
[ 888.029352] device eth0-nic left promiscuous mode | |
[ 888.029367] k6t-eth0: port 1(eth0-nic) entered disabled state | |
[ 888.045574] docker0: port 15(veth4420858) entered disabled state | |
[ 888.077529] device veth4420858 left promiscuous mode | |
[ 888.077535] docker0: port 15(veth4420858) entered disabled state | |
[ 916.413810] docker0: port 15(vetha966948) entered blocking state | |
[ 916.413815] docker0: port 15(vetha966948) entered disabled state | |
[ 916.413873] device vetha966948 entered promiscuous mode | |
[ 916.715286] eth0: renamed from vethdaacc5d | |
[ 916.731275] docker0: port 15(vetha966948) entered blocking state | |
[ 916.731280] docker0: port 15(vetha966948) entered forwarding state | |
[ 927.323294] docker0: port 15(vetha966948) entered disabled state | |
[ 927.338227] vethdaacc5d: renamed from eth0 | |
[ 927.372610] docker0: port 15(vetha966948) entered disabled state | |
[ 927.373203] device vetha966948 left promiscuous mode | |
[ 927.373207] docker0: port 15(vetha966948) entered disabled state | |
[ 937.520625] veth9a7409d: renamed from eth0 | |
[ 937.533585] docker0: port 14(veth31f2917) entered disabled state | |
[ 937.554770] docker0: port 14(veth31f2917) entered disabled state | |
[ 937.555761] device veth31f2917 left promiscuous mode | |
[ 937.555766] docker0: port 14(veth31f2917) entered disabled state |
This file has been truncated.
-- Logs begin at Thu 2022-06-16 07:26:56 UTC, end at Mon 2022-06-20 00:25:18 UTC. -- | |
Jun 16 07:26:56 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:56 fv-az72-309 provisioner[1638]: Factory script /opt/post-generation/cleanup-logs.sh has finished | |
Jun 16 07:26:56 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:56 fv-az72-309 provisioner[1638]: Invoke factory script /opt/post-generation/environment-variables.sh | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Factory script /opt/post-generation/environment-variables.sh has finished | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Vacuuming done, freed 0B of archived journals from /var/log/journal. | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Vacuuming done, freed 0B of archived journals from /run/log/journal. | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Deleted archived journal /var/log/journal/3d8d945fc71147a483fa20cb6792de9d/system@00000000000000000000000000000000-000000000000256a-0005e16fa65af853.journal (8.0M). | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Deleted archived journal /var/log/journal/3d8d945fc71147a483fa20cb6792de9d/user-1000@00000000000000000000000000000000-000000000000257a-0005e16fa6698e8b.journal (8.0M). | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Deleted archived journal /var/log/journal/3d8d945fc71147a483fa20cb6792de9d/system@00000000000000000000000000000000-0000000000002a24-0005e18b85a906d3.journal (8.0M). | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Vacuuming done, freed 24.0M of archived journals from /var/log/journal/3d8d945fc71147a483fa20cb6792de9d. | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Downloading abuse tools to /opt/runner/provisioner/etc | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Archive: /opt/runner/provisioner/etc/tools.zip | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: inflating: /opt/runner/provisioner/etc/jobkeepalive | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: inflating: /opt/runner/provisioner/etc/provjobd | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Finished installing abuse tools | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: --2022-06-16 07:26:55-- https://abusetoolscdn.blob.core.windows.net/binaries/v0.40-linux | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Resolving abusetoolscdn.blob.core.windows.net (abusetoolscdn.blob.core.windows.net)... 52.239.170.164, 20.150.90.4, 52.239.171.4 | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Connecting to abusetoolscdn.blob.core.windows.net (abusetoolscdn.blob.core.windows.net)|52.239.170.164|:443... connected. | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: HTTP request sent, awaiting response... 200 OK | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Length: 3104000 (3.0M) [application/octet-stream] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: Saving to: ‘/opt/runner/provisioner/etc/tools.zip’ | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 0K .......... .......... .......... .......... .......... 1% 37.4M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 50K .......... .......... .......... .......... .......... 3% 73.1M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 100K .......... .......... .......... .......... .......... 4% 33.4M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 150K .......... .......... .......... .......... .......... 6% 55.6M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 200K .......... .......... .......... .......... .......... 8% 264M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 250K .......... .......... .......... .......... .......... 9% 51.8M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 300K .......... .......... .......... .......... .......... 11% 265M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 350K .......... .......... .......... .......... .......... 13% 45.4M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 400K .......... .......... .......... .......... .......... 14% 264M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 450K .......... .......... .......... .......... .......... 16% 47.0M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 500K .......... .......... .......... .......... .......... 18% 256M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 550K .......... .......... .......... .......... .......... 19% 235M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 600K .......... .......... .......... .......... .......... 21% 74.3M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 650K .......... .......... .......... .......... .......... 23% 258M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 700K .......... .......... .......... .......... .......... 24% 266M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 750K .......... .......... .......... .......... .......... 26% 219M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 800K .......... .......... .......... .......... .......... 28% 93.3M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 850K .......... .......... .......... .......... .......... 29% 233M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 900K .......... .......... .......... .......... .......... 31% 106M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 950K .......... .......... .......... .......... .......... 32% 232M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1000K .......... .......... .......... .......... .......... 34% 250M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1050K .......... .......... .......... .......... .......... 36% 233M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1100K .......... .......... .......... .......... .......... 37% 268M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1150K .......... .......... .......... .......... .......... 39% 218M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1200K .......... .......... .......... .......... .......... 41% 53.2M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1250K .......... .......... .......... .......... .......... 42% 118M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1300K .......... .......... .......... .......... .......... 44% 122M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1350K .......... .......... .......... .......... .......... 46% 240M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1400K .......... .......... .......... .......... .......... 47% 273M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1450K .......... .......... .......... .......... .......... 49% 135M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1500K .......... .......... .......... .......... .......... 51% 253M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1550K .......... .......... .......... .......... .......... 52% 227M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1600K .......... .......... .......... .......... .......... 54% 154M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1650K .......... .......... .......... .......... .......... 56% 252M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1700K .......... .......... .......... .......... .......... 57% 246M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1750K .......... .......... .......... .......... .......... 59% 245M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1800K .......... .......... .......... .......... .......... 61% 140M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1850K .......... .......... .......... .......... .......... 62% 274M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1900K .......... .......... .......... .......... .......... 64% 259M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 1950K .......... .......... .......... .......... .......... 65% 131M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2000K .......... .......... .......... .......... .......... 67% 276M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2050K .......... .......... .......... .......... .......... 69% 264M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2100K .......... .......... .......... .......... .......... 70% 156M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2150K .......... .......... .......... .......... .......... 72% 243M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2200K .......... .......... .......... .......... .......... 74% 248M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2250K .......... .......... .......... .......... .......... 75% 133M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2300K .......... .......... .......... .......... .......... 77% 222M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2350K .......... .......... .......... .......... .......... 79% 226M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2400K .......... .......... .......... .......... .......... 80% 163M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2450K .......... .......... .......... .......... .......... 82% 256M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2500K .......... .......... .......... .......... .......... 84% 234M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2550K .......... .......... .......... .......... .......... 85% 235M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2600K .......... .......... .......... .......... .......... 87% 164M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2650K .......... .......... .......... .......... .......... 89% 277M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2700K .......... .......... .......... .......... .......... 90% 270M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2750K .......... .......... .......... .......... .......... 92% 144M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2800K .......... .......... .......... .......... .......... 94% 209M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2850K .......... .......... .......... .......... .......... 95% 276M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2900K .......... .......... .......... .......... .......... 97% 283M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2950K .......... .......... .......... .......... .......... 98% 249M 0s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 3000K .......... .......... .......... . 100% 313M=0.02s | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: 2022-06-16 07:26:55 (139 MB/s) - ‘/opt/runner/provisioner/etc/tools.zip’ saved [3104000/3104000] | |
Jun 16 07:26:57 fv-az72-309 provisioner[1638]: [07:26:56] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:58 fv-az72-309 provisioner[1638]: [07:26:58] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:58 fv-az72-309 provisioner[1638]: INSTALL_OS_TOOL empty; skipping... | |
Jun 16 07:26:58 fv-az72-309 provisioner[1638]: [07:26:58] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:58 fv-az72-309 provisioner[1638]: RUNNER_TOOL_CACHE=/opt/hostedtoolcache | |
Jun 16 07:26:58 fv-az72-309 provisioner[1638]: [07:26:58] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:58 fv-az72-309 provisioner[1638]: RUNNER_TOOL_CACHE set to match AGENT_TOOLSDIRECTORY: /opt/hostedtoolcache | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: + ImageName=Ubuntu20 | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: + [[ Ubuntu20 = *Ubuntu* ]] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: ++ grep ResourceDisk.EnableSwap=y | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: ++ cat /etc/waagent.conf | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: + isMntSwap=ResourceDisk.EnableSwap=y | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: + '[' -z ResourceDisk.EnableSwap=y ']' | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: [07:26:59] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:26:59 fv-az72-309 provisioner[1638]: + apt-get install -y libcgroup1 cgroup-tools | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: [07:27:05] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: WARNING! Using --password via the CLI is insecure. Use --password-stdin. | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: [07:27:05] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: WARNING! Your password will be stored unencrypted in /home/runner/.docker/config.json. | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: [07:27:05] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: Configure a credential helper to remove this warning. See | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: [07:27:05] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: https://docs.docker.com/engine/reference/commandline/login/#credentials-store | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: [07:27:05] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: [07:27:05] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:05 fv-az72-309 provisioner[1638]: Login Succeeded | |
Jun 16 07:27:05 fv-az72-309 sudo[1681]: pam_unix(sudo:session): session closed for user runner | |
Jun 16 07:27:07 fv-az72-309 provisioner[1638]: [07:27:07] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:07 fv-az72-309 provisioner[1638]: Reading package lists... | |
Jun 16 07:27:07 fv-az72-309 provisioner[1638]: [07:27:07] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:07 fv-az72-309 provisioner[1638]: Building dependency tree... | |
Jun 16 07:27:07 fv-az72-309 provisioner[1638]: [07:27:07] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:07 fv-az72-309 provisioner[1638]: Reading state information... | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: The following NEW packages will be installed: | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: cgroup-tools libcgroup1 | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: 0 upgraded, 2 newly installed, 0 to remove and 13 not upgraded. | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Need to get 109 kB of archives. | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: After this operation, 472 kB of additional disk space will be used. | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Get:1 http://azure.archive.ubuntu.com/ubuntu focal/universe amd64 libcgroup1 amd64 0.41-10 [42.9 kB] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Get:2 http://azure.archive.ubuntu.com/ubuntu focal/universe amd64 cgroup-tools amd64 0.41-10 [66.2 kB] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Provisioner.Framework.JobRunner[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Starting job Machine Info Monitor | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Microsoft.AzureDevOps.Provisioner.Framework.Monitoring.MachineInfoMonitorJob[7000] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: {"Processor":"0","VendorId":"GenuineIntel","CpuFamily":"6","Model":"63","ModelName":"Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz","Stepping":"2","CpuMHz":"2397.223","CacheSize":"30720 KB","CpuCores":"2"} | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Provisioner.Framework.JobRunner[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Finished job Machine Info Monitor | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Provisioner.Framework.JobRunner[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Starting job Machine Health Monitor | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] fail: Microsoft.AzureDevOps.Provisioner.Framework.Monitoring.MachineHealthMonitorJob[1007] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Exception during the attempt to publish machine metrics to Mms. | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: System.ArgumentException: The collection must contain at least one element. (Parameter 'bytes') | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: at GitHub.Services.Common.ArgumentUtility.CheckEnumerableForNullOrEmpty(IEnumerable enumerable, String enumerableName, String expectedServiceArea) in /home/vsts/work/1/s/mms.client/Common/Utility/ArgumentUtility.cs:line 194 | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: at GitHub.Services.Common.ArgumentUtility.CheckEnumerableForNullOrEmpty(IEnumerable enumerable, String enumerableName) in /home/vsts/work/1/s/mms.client/Common/Utility/ArgumentUtility.cs:line 178 | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: at GitHub.Services.Common.PrimitiveExtensions.ToBase64StringNoPadding(Byte[] bytes) in /home/vsts/work/1/s/mms.client/Common/Utility/PrimitiveExtensions.cs:line 53 | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: at MachineManagement.Provisioning.MachineManagementClient.PublishMetricsAsync(String poolName, String instanceName, MachineMetric[] metrics, Byte[] postRegistrationAccessToken, CancellationToken cancellationToken) in /home/vsts/work/1/s/provisioner.framework/MachineManagementClient.cs:line 46 | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: at Microsoft.AzureDevOps.Provisioner.Framework.Monitoring.MachineHealthMonitorJob.PublishSampleMetrics(CancellationToken cancellationToken) in /home/vsts/work/1/s/provisioner.framework/Monitoring/Jobs/MachineHealthMonitorJob.cs:line 75 | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Microsoft.AzureDevOps.Provisioner.Framework.Monitoring.MachineHealthMonitorJob[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Machine is healthy. | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Provisioner.Framework.JobRunner[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Finished job Machine Health Monitor | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: Provisioner.Framework.JobRunner[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Job Machine Health Monitor is scheduled to run in 300 seconds | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: [07:27:08] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:08 fv-az72-309 provisioner[1638]: Fetched 109 kB in 0s (1299 kB/s) | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: Selecting previously unselected package libcgroup1:amd64. | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 5% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 10% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 15% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 20% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 25% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 30% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 35% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 40% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 45% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 50% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 55% | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: [07:27:09] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:09 fv-az72-309 provisioner[1638]: (Reading database ... 60% | |
Jun 16 07:27:10 fv-az72-309 provisioner[1638]: [07:27:10] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:10 fv-az72-309 provisioner[1638]: (Reading database ... 65% | |
Jun 16 07:27:10 fv-az72-309 provisioner[1638]: [07:27:10] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:10 fv-az72-309 provisioner[1638]: (Reading database ... 70% | |
Jun 16 07:27:11 fv-az72-309 provisioner[1638]: [07:27:11] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:11 fv-az72-309 provisioner[1638]: (Reading database ... 75% | |
Jun 16 07:27:11 fv-az72-309 provisioner[1638]: [07:27:11] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:11 fv-az72-309 provisioner[1638]: (Reading database ... 80% | |
Jun 16 07:27:12 fv-az72-309 provisioner[1638]: [07:27:12] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:12 fv-az72-309 provisioner[1638]: (Reading database ... 85% | |
Jun 16 07:27:12 fv-az72-309 provisioner[1638]: [07:27:12] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:12 fv-az72-309 provisioner[1638]: (Reading database ... 90% | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: (Reading database ... 95% | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: (Reading database ... 100% | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: (Reading database ... 231549 files and directories currently installed.) | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: Preparing to unpack .../libcgroup1_0.41-10_amd64.deb ... | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: Unpacking libcgroup1:amd64 (0.41-10) ... | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: Selecting previously unselected package cgroup-tools. | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: Preparing to unpack .../cgroup-tools_0.41-10_amd64.deb ... | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: Unpacking cgroup-tools (0.41-10) ... | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: [07:27:13] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:13 fv-az72-309 provisioner[1638]: Setting up libcgroup1:amd64 (0.41-10) ... | |
Jun 16 07:27:14 fv-az72-309 provisioner[1638]: [07:27:14] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:14 fv-az72-309 provisioner[1638]: Setting up cgroup-tools (0.41-10) ... | |
Jun 16 07:27:14 fv-az72-309 provisioner[1638]: [07:27:14] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:14 fv-az72-309 provisioner[1638]: Processing triggers for libc-bin (2.31-0ubuntu9.9) ... | |
Jun 16 07:27:16 fv-az72-309 provisioner[1638]: [07:27:16] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:16 fv-az72-309 provisioner[1638]: Processing triggers for man-db (2.9.1-1) ... | |
Jun 16 07:27:18 fv-az72-309 dbus-daemon[661]: [system] Activating via systemd: service name='org.freedesktop.PackageKit' unit='packagekit.service' requested by ':1.14' (uid=0 pid=2069 comm="/usr/bin/gdbus call --system --dest org.freedeskto" label="unconfined") | |
Jun 16 07:27:18 fv-az72-309 systemd[1]: Starting PackageKit Daemon... | |
Jun 16 07:27:18 fv-az72-309 PackageKit[2072]: daemon start | |
Jun 16 07:27:18 fv-az72-309 dbus-daemon[661]: [system] Successfully activated service 'org.freedesktop.PackageKit' | |
Jun 16 07:27:18 fv-az72-309 systemd[1]: Started PackageKit Daemon. | |
Jun 16 07:27:20 fv-az72-309 provisioner[1638]: [07:27:20] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:20 fv-az72-309 provisioner[1638]: + apt-get install -y libcgroup1 cgroup-tools | |
Jun 16 07:27:20 fv-az72-309 provisioner[1638]: [07:27:20] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:20 fv-az72-309 provisioner[1638]: Reading package lists... | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: Building dependency tree... | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: Reading state information... | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: cgroup-tools is already the newest version (0.41-10). | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: libcgroup1 is already the newest version (0.41-10). | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: 0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded. | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: ++ command -v docker | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + '[' '!' -x /usr/bin/docker ']' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: ++ grep MemTotal /proc/meminfo | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: ++ awk '{print $2}' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + mem_total_in_bytes=7283838976 | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + mem_total_minus1g_in_bytes=6210097152 | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: ++ grep SwapTotal /proc/meminfo | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: ++ awk '{print $2}' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + swap_total_in_bytes=4294963200 | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + total_in_bytes=11578802176 | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + total_minus2g_in_bytes=9431318528 | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + actions_runner_cgroup='group actions_runner { memory { } }' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + actions_job_cgroup='group actions_job { memory { memory.limit_in_bytes = 6210097152; memory.memsw.limit_in_bytes = 9431318528; } }' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'group actions_runner { memory { } }' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'group actions_job { memory { memory.limit_in_bytes = 6210097152; memory.memsw.limit_in_bytes = 9431318528; } }' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'root:provisioner memory actions_runner' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'runner:Runner.Listener memory actions_runner' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'runner:Runner.Worker memory actions_runner' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'runner memory actions_job' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + '[' '!' -f /etc/docker/daemon.json ']' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"], "cgroup-parent": "/actions_job" }' | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + echo 'GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"' | |
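The trace above amounts to roughly the following steps, sketched here from the `+`/`++` lines only; the cgconfig, cgrules and GRUB snippet target paths are assumptions (only /etc/docker/daemon.json is visible in the trace), and the kB-to-bytes conversion is inferred from the logged values:
# Sketch of the traced cgroup setup (reconstruction, not the provisioner's exact script).
mem_total_in_bytes=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') * 1024 ))     # /proc/meminfo reports kB
mem_total_minus1g_in_bytes=$(( mem_total_in_bytes - 1073741824 ))                      # keep 1 GiB back for the runner
swap_total_in_bytes=$(( $(grep SwapTotal /proc/meminfo | awk '{print $2}') * 1024 ))
total_in_bytes=$(( mem_total_in_bytes + swap_total_in_bytes ))
total_minus2g_in_bytes=$(( total_in_bytes - 2147483648 ))                              # keep 2 GiB of mem+swap back
# cgroup v1 definitions: an uncapped group for the runner, a capped group for jobs (target file assumed)
echo "group actions_runner { memory { } }" >> /etc/cgconfig.conf
echo "group actions_job { memory { memory.limit_in_bytes = ${mem_total_minus1g_in_bytes}; memory.memsw.limit_in_bytes = ${total_minus2g_in_bytes}; } }" >> /etc/cgconfig.conf
# process-to-cgroup routing rules (target file assumed)
echo 'root:provisioner memory actions_runner'       >> /etc/cgrules.conf
echo 'runner:Runner.Listener memory actions_runner' >> /etc/cgrules.conf
echo 'runner:Runner.Worker memory actions_runner'   >> /etc/cgrules.conf
echo 'runner memory actions_job'                    >> /etc/cgrules.conf
# have Docker start job containers under the capped cgroup (path shown in the trace)
echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"], "cgroup-parent": "/actions_job" }' > /etc/docker/daemon.json
# enable memory/swap accounting at boot, then regenerate grub.cfg (snippet path assumed)
echo 'GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"' > /etc/default/grub.d/40-runner.cfg
update-grub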
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: [07:27:21] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:21 fv-az72-309 provisioner[1638]: + update-grub | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Sourcing file `/etc/default/grub' | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Sourcing file `/etc/default/grub.d/40-force-partuuid.cfg' | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Sourcing file `/etc/default/grub.d/40-runner.cfg' | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Sourcing file `/etc/default/grub.d/50-cloudimg-settings.cfg' | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Sourcing file `/etc/default/grub.d/init-select.cfg' | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Generating grub configuration file ... | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: GRUB_FORCE_PARTUUID is set, will attempt initrdless boot | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Found linux image: /boot/vmlinuz-5.13.0-1029-azure | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: [07:27:27] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:27 fv-az72-309 provisioner[1638]: Found initrd image: /boot/initrd.img-5.13.0-1029-azure | |
Jun 16 07:27:28 fv-az72-309 kernel: SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: avx2x4 gen() 424996 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: avx2x4 xor() 135157 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: avx2x2 gen() 392111 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: avx2x2 xor() 239587 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: avx2x1 gen() 373907 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: avx2x1 xor() 219432 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: sse2x4 gen() 215555 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: sse2x4 xor() 128859 MB/s | |
Jun 16 07:27:28 fv-az72-309 kernel: raid6: sse2x2 gen() 228099 MB/s | |
Jun 16 07:27:29 fv-az72-309 kernel: raid6: sse2x2 xor() 131971 MB/s | |
Jun 16 07:27:29 fv-az72-309 kernel: raid6: sse2x1 gen() 167466 MB/s | |
Jun 16 07:27:29 fv-az72-309 kernel: raid6: sse2x1 xor() 88530 MB/s | |
Jun 16 07:27:29 fv-az72-309 kernel: raid6: using algorithm avx2x4 gen() 18784 MB/s | |
Jun 16 07:27:29 fv-az72-309 kernel: raid6: .... xor() 135157 MB/s, rmw enabled | |
Jun 16 07:27:29 fv-az72-309 kernel: raid6: using avx2x2 recovery algorithm | |
Jun 16 07:27:29 fv-az72-309 kernel: xor: automatically using best checksumming function avx | |
Jun 16 07:27:29 fv-az72-309 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=yes | |
Jun 16 07:27:29 fv-az72-309 provisioner[1638]: [07:27:29] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:29 fv-az72-309 provisioner[1638]: File descriptor 8 (socket:[32650]) leaked on lvs invocation. Parent PID 2736: /bin/sh | |
Jun 16 07:27:29 fv-az72-309 os-prober[2755]: debug: running /usr/lib/os-probes/mounted/05efi on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 05efi[2757]: debug: Not on UEFI platform | |
Jun 16 07:27:29 fv-az72-309 os-prober[2758]: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 10freedos[2760]: debug: /dev/sda1 is not a FAT partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2761]: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 10qnx[2763]: debug: /dev/sda1 is not a QNX4 partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2764]: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 macosx-prober[2766]: debug: /dev/sda1 is not an HFS+ partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2767]: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 20microsoft[2769]: debug: /dev/sda1 is not a MS partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2770]: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 30utility[2772]: debug: /dev/sda1 is not a FAT partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2773]: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2775]: debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2777]: debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2779]: debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 83haiku[2781]: debug: /dev/sda1 is not a BeFS partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2782]: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2786]: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/sda1 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2793]: debug: running /usr/lib/os-probes/50mounted-tests on /dev/sdb1 | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2813]: debug: mounted using GRUB ext2 filesystem driver | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2814]: debug: running subtest /usr/lib/os-probes/mounted/05efi | |
Jun 16 07:27:29 fv-az72-309 05efi[2816]: debug: Not on UEFI platform | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2817]: debug: running subtest /usr/lib/os-probes/mounted/10freedos | |
Jun 16 07:27:29 fv-az72-309 10freedos[2819]: debug: /dev/sdb1 is not a FAT partition: exiting | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2820]: debug: running subtest /usr/lib/os-probes/mounted/10qnx | |
Jun 16 07:27:29 fv-az72-309 10qnx[2822]: debug: /dev/sdb1 is not a QNX4 partition: exiting | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2823]: debug: running subtest /usr/lib/os-probes/mounted/20macosx | |
Jun 16 07:27:29 fv-az72-309 macosx-prober[2825]: debug: /dev/sdb1 is not an HFS+ partition: exiting | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2826]: debug: running subtest /usr/lib/os-probes/mounted/20microsoft | |
Jun 16 07:27:29 fv-az72-309 20microsoft[2828]: debug: /dev/sdb1 is not a MS partition: exiting | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2829]: debug: running subtest /usr/lib/os-probes/mounted/30utility | |
Jun 16 07:27:29 fv-az72-309 30utility[2831]: debug: /dev/sdb1 is not a FAT partition: exiting | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2832]: debug: running subtest /usr/lib/os-probes/mounted/40lsb | |
Jun 16 07:27:29 fv-az72-309 40lsb[2858]: result: /dev/sdb1:Ubuntu 20.04.4 LTS (20.04):Ubuntu:linux | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2859]: debug: os found by subtest /usr/lib/os-probes/mounted/40lsb | |
Jun 16 07:27:29 fv-az72-309 systemd[1]: var-lib-os\x2dprober-mount.mount: Succeeded. | |
Jun 16 07:27:29 fv-az72-309 os-prober[2862]: debug: os detected by /usr/lib/os-probes/50mounted-tests | |
Jun 16 07:27:29 fv-az72-309 os-prober[2868]: debug: running /usr/lib/os-probes/50mounted-tests on /dev/sdb14 | |
Jun 16 07:27:29 fv-az72-309 50mounted-tests[2876]: debug: /dev/sdb14 type not recognised; skipping | |
Jun 16 07:27:29 fv-az72-309 os-prober[2877]: debug: os detected by /usr/lib/os-probes/50mounted-tests | |
Jun 16 07:27:29 fv-az72-309 os-prober[2894]: debug: running /usr/lib/os-probes/mounted/05efi on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 05efi[2896]: debug: Not on UEFI platform | |
Jun 16 07:27:29 fv-az72-309 os-prober[2897]: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 10freedos[2899]: debug: /dev/sdb15 is a FAT32 partition | |
Jun 16 07:27:29 fv-az72-309 os-prober[2902]: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 10qnx[2904]: debug: /dev/sdb15 is not a QNX4 partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2905]: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 macosx-prober[2907]: debug: /dev/sdb15 is not an HFS+ partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2908]: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 20microsoft[2910]: debug: /dev/sdb15 is a FAT32 partition | |
Jun 16 07:27:29 fv-az72-309 os-prober[2919]: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 30utility[2921]: debug: /dev/sdb15 is a FAT32 partition | |
Jun 16 07:27:29 fv-az72-309 os-prober[2926]: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2928]: debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2930]: debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2932]: debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 83haiku[2934]: debug: /dev/sdb15 is not a BeFS partition: exiting | |
Jun 16 07:27:29 fv-az72-309 os-prober[2935]: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/sdb15 | |
Jun 16 07:27:29 fv-az72-309 os-prober[2939]: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/sdb15 | |
Jun 16 07:27:30 fv-az72-309 provisioner[1638]: [07:27:30] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:30 fv-az72-309 provisioner[1638]: Found Ubuntu 20.04.4 LTS (20.04) on /dev/sdb1 | |
Jun 16 07:27:30 fv-az72-309 linux-boot-prober[2999]: debug: running /usr/lib/linux-boot-probes/50mounted-tests | |
Jun 16 07:27:30 fv-az72-309 50mounted-tests[3022]: debug: running /usr/lib/linux-boot-probes/mounted/40grub /dev/sdb1 /dev/sdb1 /var/lib/os-prober/mount ext2 | |
Jun 16 07:27:30 fv-az72-309 50mounted-tests[3024]: debug: running /usr/lib/linux-boot-probes/mounted/40grub2 /dev/sdb1 /dev/sdb1 /var/lib/os-prober/mount ext2 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3027]: debug: parsing grub.cfg | |
Jun 16 07:27:30 fv-az72-309 40grub2[3028]: debug: parsing: # | |
Jun 16 07:27:30 fv-az72-309 40grub2[3029]: debug: parsing: # DO NOT EDIT THIS FILE | |
Jun 16 07:27:30 fv-az72-309 40grub2[3030]: debug: parsing: # | |
Jun 16 07:27:30 fv-az72-309 40grub2[3031]: debug: parsing: # It is automatically generated by grub-mkconfig using templates | |
Jun 16 07:27:30 fv-az72-309 40grub2[3032]: debug: parsing: # from /etc/grub.d and settings from /etc/default/grub | |
Jun 16 07:27:30 fv-az72-309 40grub2[3033]: debug: parsing: # | |
Jun 16 07:27:30 fv-az72-309 40grub2[3034]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3035]: debug: parsing: ### BEGIN /etc/grub.d/00_header ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3036]: debug: parsing: if [ -s $prefix/grubenv ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3037]: debug: parsing: set have_grubenv=true | |
Jun 16 07:27:30 fv-az72-309 40grub2[3038]: debug: parsing: load_env | |
Jun 16 07:27:30 fv-az72-309 40grub2[3039]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3040]: debug: parsing: if [ "${initrdfail}" = 2 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3041]: debug: parsing: set initrdfail= | |
Jun 16 07:27:30 fv-az72-309 40grub2[3042]: debug: parsing: elif [ "${initrdfail}" = 1 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3043]: debug: parsing: set next_entry="${prev_entry}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3044]: debug: parsing: set prev_entry= | |
Jun 16 07:27:30 fv-az72-309 40grub2[3045]: debug: parsing: save_env prev_entry | |
Jun 16 07:27:30 fv-az72-309 40grub2[3046]: debug: parsing: if [ "${next_entry}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3047]: debug: parsing: set initrdfail=2 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3048]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3049]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3050]: debug: parsing: if [ "${next_entry}" ] ; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3051]: debug: parsing: set default="${next_entry}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3052]: debug: parsing: set next_entry= | |
Jun 16 07:27:30 fv-az72-309 40grub2[3053]: debug: parsing: save_env next_entry | |
Jun 16 07:27:30 fv-az72-309 40grub2[3054]: debug: parsing: set boot_once=true | |
Jun 16 07:27:30 fv-az72-309 40grub2[3055]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3056]: debug: parsing: set default="0" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3057]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3058]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3059]: debug: parsing: if [ x"${feature_menuentry_id}" = xy ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3060]: debug: parsing: menuentry_id_option="--id" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3061]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3062]: debug: parsing: menuentry_id_option="" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3063]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3064]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3065]: debug: parsing: export menuentry_id_option | |
Jun 16 07:27:30 fv-az72-309 40grub2[3066]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3067]: debug: parsing: if [ "${prev_saved_entry}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3068]: debug: parsing: set saved_entry="${prev_saved_entry}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3069]: debug: parsing: save_env saved_entry | |
Jun 16 07:27:30 fv-az72-309 40grub2[3070]: debug: parsing: set prev_saved_entry= | |
Jun 16 07:27:30 fv-az72-309 40grub2[3071]: debug: parsing: save_env prev_saved_entry | |
Jun 16 07:27:30 fv-az72-309 40grub2[3072]: debug: parsing: set boot_once=true | |
Jun 16 07:27:30 fv-az72-309 40grub2[3073]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3074]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3075]: debug: parsing: function savedefault { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3076]: debug: parsing: if [ -z "${boot_once}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3077]: debug: parsing: saved_entry="${chosen}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3078]: debug: parsing: save_env saved_entry | |
Jun 16 07:27:30 fv-az72-309 40grub2[3079]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3080]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3081]: debug: parsing: function initrdfail { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3082]: debug: parsing: if [ -n "${have_grubenv}" ]; then if [ -n "${partuuid}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3083]: debug: parsing: if [ -z "${initrdfail}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3084]: debug: parsing: set initrdfail=1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3085]: debug: parsing: if [ -n "${boot_once}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3086]: debug: parsing: set prev_entry="${default}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3087]: debug: parsing: save_env prev_entry | |
Jun 16 07:27:30 fv-az72-309 40grub2[3088]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3089]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3090]: debug: parsing: save_env initrdfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3091]: debug: parsing: fi; fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3092]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3093]: debug: parsing: function recordfail { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3094]: debug: parsing: set recordfail=1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3095]: debug: parsing: if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3096]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3097]: debug: parsing: function load_video { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3098]: debug: parsing: if [ x$feature_all_video_module = xy ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3099]: debug: parsing: insmod all_video | |
Jun 16 07:27:30 fv-az72-309 40grub2[3100]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3101]: debug: parsing: insmod efi_gop | |
Jun 16 07:27:30 fv-az72-309 40grub2[3102]: debug: parsing: insmod efi_uga | |
Jun 16 07:27:30 fv-az72-309 40grub2[3103]: debug: parsing: insmod ieee1275_fb | |
Jun 16 07:27:30 fv-az72-309 40grub2[3104]: debug: parsing: insmod vbe | |
Jun 16 07:27:30 fv-az72-309 40grub2[3105]: debug: parsing: insmod vga | |
Jun 16 07:27:30 fv-az72-309 40grub2[3106]: debug: parsing: insmod video_bochs | |
Jun 16 07:27:30 fv-az72-309 40grub2[3107]: debug: parsing: insmod video_cirrus | |
Jun 16 07:27:30 fv-az72-309 40grub2[3108]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3109]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3110]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3111]: debug: parsing: serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3112]: debug: parsing: terminal_input serial | |
Jun 16 07:27:30 fv-az72-309 40grub2[3113]: debug: parsing: terminal_output serial | |
Jun 16 07:27:30 fv-az72-309 40grub2[3114]: debug: parsing: if [ "${recordfail}" = 1 ] ; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3115]: debug: parsing: set timeout=30 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3116]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3117]: debug: parsing: if [ x$feature_timeout_style = xy ] ; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3118]: debug: parsing: set timeout_style=countdown | |
Jun 16 07:27:30 fv-az72-309 40grub2[3119]: debug: parsing: set timeout=1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3120]: debug: parsing: # Fallback hidden-timeout code in case the timeout_style feature is | |
Jun 16 07:27:30 fv-az72-309 40grub2[3121]: debug: parsing: # unavailable. | |
Jun 16 07:27:30 fv-az72-309 40grub2[3122]: debug: parsing: elif sleep --interruptible 1 ; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3123]: debug: parsing: set timeout=0 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3124]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3125]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3126]: debug: parsing: ### END /etc/grub.d/00_header ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3127]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3128]: debug: parsing: ### BEGIN /etc/grub.d/01_track_initrdless_boot_fallback ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3129]: debug: parsing: if [ -n "${have_grubenv}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3130]: debug: parsing: if [ -n "${initrdfail}" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3131]: debug: parsing: set initrdless_boot_fallback_triggered="${initrdfail}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3132]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3133]: debug: parsing: unset initrdless_boot_fallback_triggered | |
Jun 16 07:27:30 fv-az72-309 40grub2[3134]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3135]: debug: parsing: save_env initrdless_boot_fallback_triggered | |
Jun 16 07:27:30 fv-az72-309 40grub2[3136]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3137]: debug: parsing: ### END /etc/grub.d/01_track_initrdless_boot_fallback ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3138]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3139]: debug: parsing: ### BEGIN /etc/grub.d/05_debian_theme ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3140]: debug: parsing: set menu_color_normal=white/black | |
Jun 16 07:27:30 fv-az72-309 40grub2[3141]: debug: parsing: set menu_color_highlight=black/light-gray | |
Jun 16 07:27:30 fv-az72-309 40grub2[3142]: debug: parsing: ### END /etc/grub.d/05_debian_theme ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3143]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3144]: debug: parsing: ### BEGIN /etc/grub.d/10_linux ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3145]: debug: parsing: # | |
Jun 16 07:27:30 fv-az72-309 40grub2[3146]: debug: parsing: # GRUB_FORCE_PARTUUID is set, will attempt initrdless boot | |
Jun 16 07:27:30 fv-az72-309 40grub2[3147]: debug: parsing: # Upon panic fallback to booting with initrd | |
Jun 16 07:27:30 fv-az72-309 40grub2[3148]: debug: parsing: set partuuid=2b1f5b8e-4041-4065-af1c-792f94a6d205 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3149]: debug: parsing: function gfxmode { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3150]: debug: parsing: set gfxpayload="${1}" | |
Jun 16 07:27:30 fv-az72-309 40grub2[3151]: debug: parsing: if [ "${1}" = "keep" ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3152]: debug: parsing: set vt_handoff=vt.handoff=7 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3153]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3154]: debug: parsing: set vt_handoff= | |
Jun 16 07:27:30 fv-az72-309 40grub2[3155]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3156]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3157]: debug: parsing: if [ "${recordfail}" != 1 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3158]: debug: parsing: if [ -e ${prefix}/gfxblacklist.txt ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3159]: debug: parsing: if [ ${grub_platform} != pc ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3160]: debug: parsing: set linux_gfx_mode=keep | |
Jun 16 07:27:30 fv-az72-309 40grub2[3161]: debug: parsing: elif hwmatch ${prefix}/gfxblacklist.txt 3; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3162]: debug: parsing: if [ ${match} = 0 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3163]: debug: parsing: set linux_gfx_mode=keep | |
Jun 16 07:27:30 fv-az72-309 40grub2[3164]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3165]: debug: parsing: set linux_gfx_mode=text | |
Jun 16 07:27:30 fv-az72-309 40grub2[3166]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3167]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3168]: debug: parsing: set linux_gfx_mode=text | |
Jun 16 07:27:30 fv-az72-309 40grub2[3169]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3170]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3171]: debug: parsing: set linux_gfx_mode=keep | |
Jun 16 07:27:30 fv-az72-309 40grub2[3172]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3173]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3174]: debug: parsing: set linux_gfx_mode=text | |
Jun 16 07:27:30 fv-az72-309 40grub2[3175]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3176]: debug: parsing: export linux_gfx_mode | |
Jun 16 07:27:30 fv-az72-309 40grub2[3177]: debug: parsing: menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-3b66cf28-8c39-478b-82b0-294032b5bd9d' { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3188]: debug: parsing: recordfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3189]: debug: parsing: load_video | |
Jun 16 07:27:30 fv-az72-309 40grub2[3190]: debug: parsing: gfxmode $linux_gfx_mode | |
Jun 16 07:27:30 fv-az72-309 40grub2[3191]: debug: parsing: insmod gzio | |
Jun 16 07:27:30 fv-az72-309 40grub2[3192]: debug: parsing: if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3193]: debug: parsing: insmod part_gpt | |
Jun 16 07:27:30 fv-az72-309 40grub2[3194]: debug: parsing: insmod ext2 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3195]: debug: parsing: if [ x$feature_platform_search_hint = xy ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3196]: debug: parsing: search --no-floppy --fs-uuid --set=root 3b66cf28-8c39-478b-82b0-294032b5bd9d | |
Jun 16 07:27:30 fv-az72-309 40grub2[3197]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3198]: debug: parsing: search --no-floppy --fs-uuid --set=root 3b66cf28-8c39-478b-82b0-294032b5bd9d | |
Jun 16 07:27:30 fv-az72-309 40grub2[3199]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3200]: debug: parsing: if [ "${initrdfail}" = 1 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3201]: debug: parsing: echo 'GRUB_FORCE_PARTUUID set, initrdless boot failed. Attempting with initrd.' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3202]: debug: parsing: linux /boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3206]: debug: parsing: initrd /boot/initrd.img-5.13.0-1029-azure | |
Jun 16 07:27:30 fv-az72-309 40grub2[3210]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3211]: debug: parsing: echo 'GRUB_FORCE_PARTUUID set, attempting initrdless boot.' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3212]: debug: parsing: linux /boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3216]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3217]: debug: parsing: initrdfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3218]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3219]: result: /dev/sdb1:/dev/sdb1:Ubuntu:/boot/vmlinuz-5.13.0-1029-azure:/boot/initrd.img-5.13.0-1029-azure:root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3220]: debug: parsing: submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-3b66cf28-8c39-478b-82b0-294032b5bd9d' { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3221]: debug: parsing: menuentry 'Ubuntu, with Linux 5.13.0-1029-azure' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.13.0-1029-azure-advanced-3b66cf28-8c39-478b-82b0-294032b5bd9d' { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3232]: debug: parsing: recordfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3233]: debug: parsing: load_video | |
Jun 16 07:27:30 fv-az72-309 40grub2[3234]: debug: parsing: gfxmode $linux_gfx_mode | |
Jun 16 07:27:30 fv-az72-309 40grub2[3235]: debug: parsing: insmod gzio | |
Jun 16 07:27:30 fv-az72-309 40grub2[3236]: debug: parsing: if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3237]: debug: parsing: insmod part_gpt | |
Jun 16 07:27:30 fv-az72-309 40grub2[3238]: debug: parsing: insmod ext2 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3239]: debug: parsing: if [ x$feature_platform_search_hint = xy ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3240]: debug: parsing: search --no-floppy --fs-uuid --set=root 3b66cf28-8c39-478b-82b0-294032b5bd9d | |
Jun 16 07:27:30 fv-az72-309 40grub2[3241]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3242]: debug: parsing: search --no-floppy --fs-uuid --set=root 3b66cf28-8c39-478b-82b0-294032b5bd9d | |
Jun 16 07:27:30 fv-az72-309 40grub2[3243]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3244]: debug: parsing: echo 'Loading Linux 5.13.0-1029-azure ...' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3245]: debug: parsing: if [ "${initrdfail}" = 1 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3246]: debug: parsing: echo 'GRUB_FORCE_PARTUUID set, initrdless boot failed. Attempting with initrd.' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3247]: debug: parsing: linux /boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3251]: debug: parsing: echo 'Loading initial ramdisk ...' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3252]: debug: parsing: initrd /boot/initrd.img-5.13.0-1029-azure | |
Jun 16 07:27:30 fv-az72-309 40grub2[3256]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3257]: debug: parsing: echo 'GRUB_FORCE_PARTUUID set, attempting initrdless boot.' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3258]: debug: parsing: linux /boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3262]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3263]: debug: parsing: initrdfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3264]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3265]: result: /dev/sdb1:/dev/sdb1:Ubuntu, with Linux 5.13.0-1029-azure:/boot/vmlinuz-5.13.0-1029-azure:/boot/initrd.img-5.13.0-1029-azure:root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3266]: debug: parsing: menuentry 'Ubuntu, with Linux 5.13.0-1029-azure (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.13.0-1029-azure-recovery-3b66cf28-8c39-478b-82b0-294032b5bd9d' { | |
Jun 16 07:27:30 fv-az72-309 40grub2[3277]: debug: parsing: recordfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3278]: debug: parsing: load_video | |
Jun 16 07:27:30 fv-az72-309 40grub2[3279]: debug: parsing: insmod gzio | |
Jun 16 07:27:30 fv-az72-309 40grub2[3280]: debug: parsing: if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3281]: debug: parsing: insmod part_gpt | |
Jun 16 07:27:30 fv-az72-309 40grub2[3282]: debug: parsing: insmod ext2 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3283]: debug: parsing: if [ x$feature_platform_search_hint = xy ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3284]: debug: parsing: search --no-floppy --fs-uuid --set=root 3b66cf28-8c39-478b-82b0-294032b5bd9d | |
Jun 16 07:27:30 fv-az72-309 40grub2[3285]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3286]: debug: parsing: search --no-floppy --fs-uuid --set=root 3b66cf28-8c39-478b-82b0-294032b5bd9d | |
Jun 16 07:27:30 fv-az72-309 40grub2[3287]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3288]: debug: parsing: echo 'Loading Linux 5.13.0-1029-azure ...' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3289]: debug: parsing: if [ "${initrdfail}" = 1 ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3290]: debug: parsing: echo 'GRUB_FORCE_PARTUUID set, initrdless boot failed. Attempting with initrd.' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3291]: debug: parsing: linux /boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro recovery nomodeset dis_ucode_ldr console=tty1 console=ttyS0 earlyprintk=ttyS0 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3295]: debug: parsing: echo 'Loading initial ramdisk ...' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3296]: debug: parsing: initrd /boot/initrd.img-5.13.0-1029-azure | |
Jun 16 07:27:30 fv-az72-309 40grub2[3300]: debug: parsing: else | |
Jun 16 07:27:30 fv-az72-309 40grub2[3301]: debug: parsing: echo 'GRUB_FORCE_PARTUUID set, attempting initrdless boot.' | |
Jun 16 07:27:30 fv-az72-309 40grub2[3302]: debug: parsing: linux /boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro recovery nomodeset dis_ucode_ldr console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3306]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3307]: debug: parsing: initrdfail | |
Jun 16 07:27:30 fv-az72-309 40grub2[3308]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3309]: result: /dev/sdb1:/dev/sdb1:Ubuntu, with Linux 5.13.0-1029-azure (recovery mode):/boot/vmlinuz-5.13.0-1029-azure:/boot/initrd.img-5.13.0-1029-azure:root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro recovery nomodeset dis_ucode_ldr console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 16 07:27:30 fv-az72-309 40grub2[3310]: debug: parsing: } | |
Jun 16 07:27:30 fv-az72-309 40grub2[3311]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3312]: debug: parsing: ### END /etc/grub.d/10_linux ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3313]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3314]: debug: parsing: ### BEGIN /etc/grub.d/10_linux_zfs ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3315]: debug: parsing: ### END /etc/grub.d/10_linux_zfs ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3316]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3317]: debug: parsing: ### BEGIN /etc/grub.d/20_linux_xen ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3318]: debug: parsing: ### END /etc/grub.d/20_linux_xen ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3319]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3320]: debug: parsing: ### BEGIN /etc/grub.d/30_uefi-firmware ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3321]: debug: parsing: ### END /etc/grub.d/30_uefi-firmware ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3322]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3323]: debug: parsing: ### BEGIN /etc/grub.d/35_fwupd ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3324]: debug: parsing: ### END /etc/grub.d/35_fwupd ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3325]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3326]: debug: parsing: ### BEGIN /etc/grub.d/40_custom ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3327]: debug: parsing: # This file provides an easy way to add custom menu entries. Simply type the | |
Jun 16 07:27:30 fv-az72-309 40grub2[3328]: debug: parsing: # menu entries you want to add after this comment. Be careful not to change | |
Jun 16 07:27:30 fv-az72-309 40grub2[3329]: debug: parsing: # the 'exec tail' line above. | |
Jun 16 07:27:30 fv-az72-309 40grub2[3330]: debug: parsing: ### END /etc/grub.d/40_custom ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3331]: debug: parsing: | |
Jun 16 07:27:30 fv-az72-309 40grub2[3332]: debug: parsing: ### BEGIN /etc/grub.d/41_custom ### | |
Jun 16 07:27:30 fv-az72-309 40grub2[3333]: debug: parsing: if [ -f ${config_directory}/custom.cfg ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3334]: debug: parsing: source ${config_directory}/custom.cfg | |
Jun 16 07:27:30 fv-az72-309 40grub2[3335]: debug: parsing: elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then | |
Jun 16 07:27:30 fv-az72-309 40grub2[3336]: debug: parsing: source $prefix/custom.cfg; | |
Jun 16 07:27:30 fv-az72-309 40grub2[3337]: debug: parsing: fi | |
Jun 16 07:27:30 fv-az72-309 40grub2[3338]: debug: parsing: ### END /etc/grub.d/41_custom ### | |
Jun 16 07:27:30 fv-az72-309 50mounted-tests[3339]: debug: /usr/lib/linux-boot-probes/mounted/40grub2 succeeded | |
Jun 16 07:27:30 fv-az72-309 systemd[1]: var-lib-os\x2dprober-mount.mount: Succeeded. | |
Jun 16 07:27:30 fv-az72-309 linux-boot-prober[3343]: debug: linux detected by /usr/lib/linux-boot-probes/50mounted-tests | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: done | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Writing provisioner version to imagedata.json | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: The file /imagegeneration/imagedata.json has been updated with the provisioner version | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Copy /imagegeneration/imagedata.json file to the /home/runner/runners/2.292.0/.setup_info target | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: DistributedTask.MachineProvisioning.LinuxTaskAgentMachineRuntime[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Copy /imagegeneration/imagedata.json file to the /home/runner/runners/2.293.0/.setup_info target | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.Linux.LinuxMachineEnvironment[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Running Factory Provisioning for DevFabric | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.Linux.LinuxMachineEnvironment[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Creating marker file to indicate machine is clean, will be deleted when first request is executed | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.MachineProvisioner[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Removing provisioning setting | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.Linux.LinuxMachineEnvironment[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Reading settings from settings file at /opt/runner/provisioner/.settings... | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.Linux.LinuxMachineEnvironment[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Settings does not contain the Provisioning key... | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.Linux.LinuxMachineEnvironment[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Waiting for AuthorizationTokenFile to be dropped by Custom Script Extension... | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.Linux.LinuxMachineEnvironment[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Last edit of access token: 06/16/2022 07:24:40 | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.MachineProvisioner[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Provisioning successful | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.MachineProvisioner[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Finishing Provisioning Mode for fv-az72-309 | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: Microsoft.AzureDevOps.Provisioner.Framework.Monitoring.MonitoringService[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: Canceling monitoring tasks | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: Microsoft.AzureDevOps.Provisioner.Framework.Monitoring.MonitoringService[0] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: All Monitor jobs are cancelled. | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: [07:27:31] info: MachineManagement.Provisioning.MachineProvisioner[1001] | |
Jun 16 07:27:31 fv-az72-309 provisioner[1638]: The provisioner has stopped | |
Jun 16 07:27:31 fv-az72-309 systemd[1]: runner-provisioner.service: Succeeded. | |
Jun 16 07:27:36 fv-az72-309 python3[967]: 2022-06-16T07:27:36.649443Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 3] | |
Jun 16 07:27:36 fv-az72-309 python3[967]: 2022-06-16T07:27:36.691835Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0D28CC35EFA6D75FF8C5DEB4A1432FB1CD3E8FD9 | |
Jun 16 07:27:36 fv-az72-309 python3[967]: 2022-06-16T07:27:36.723933Z INFO ExtHandler ExtHandler Fetch goal state completed | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.739094Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 4a2fa388-8e18-4dcf-84c9-2c893784f22a New eTag: 14864452798849322451] | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.741103Z INFO ExtHandler ExtHandler ProcessExtensionsInGoalState started [Incarnation: 3; Activity Id: 4bc43ecc-4bc2-4426-b3c0-5bb2577170f3; Correlation Id: 0801e8a7-1dfe-4a0f-985a-00367c0d4ea1; GS Creation Time: 2022-06-16T07:27:32.251274Z] | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.741637Z INFO ExtHandler ExtHandler No extension/run-time settings settings found for Microsoft.Azure.Extensions.CustomScript | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.771432Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Target handler state: uninstall [incarnation 3] | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.771758Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] [Uninstall] current handler state is: enabled | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.772010Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Disable extension: [bin/custom-script-shim disable] | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.772375Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Executing command: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim disable with environment variables: {"AZURE_GUEST_AGENT_EXTENSION_PATH": "/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6", "AZURE_GUEST_AGENT_EXTENSION_VERSION": "2.1.6", "AZURE_GUEST_AGENT_WIRE_PROTOCOL_ADDRESS": "168.63.129.16", "ConfigSequenceNumber": "0", "AZURE_GUEST_AGENT_EXTENSION_SUPPORTED_FEATURES": "[{\"Key\": \"ExtensionTelemetryPipeline\", \"Value\": \"1.0\"}]"} | |
Jun 16 07:27:42 fv-az72-309 python3[967]: 2022-06-16T07:27:42.775840Z INFO ExtHandler ExtHandler Started extension in unit 'disable_a07754bb-7e8d-4fda-8f3f-ee686d3905a7.scope' | |
Jun 16 07:27:42 fv-az72-309 systemd[1]: Started /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim disable. | |
Jun 16 07:27:42 fv-az72-309 systemd[1]: disable_a07754bb-7e8d-4fda-8f3f-ee686d3905a7.scope: Succeeded. | |
Jun 16 07:27:44 fv-az72-309 python3[967]: 2022-06-16T07:27:44.779170Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Command: bin/custom-script-shim disable | |
Jun 16 07:27:44 fv-az72-309 python3[967]: [stdout] | |
Jun 16 07:27:44 fv-az72-309 python3[967]: + /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-extension disable | |
Jun 16 07:27:44 fv-az72-309 python3[967]: time=2022-06-16T07:27:42Z version=v2.1.6/git@fc181d8-dirty operation=disable seq=0 event=start | |
Jun 16 07:27:44 fv-az72-309 python3[967]: time=2022-06-16T07:27:42Z version=v2.1.6/git@fc181d8-dirty operation=disable seq=0 event=noop | |
Jun 16 07:27:44 fv-az72-309 python3[967]: time=2022-06-16T07:27:42Z version=v2.1.6/git@fc181d8-dirty operation=disable seq=0 event=end | |
Jun 16 07:27:44 fv-az72-309 python3[967]: [stderr] | |
Jun 16 07:27:44 fv-az72-309 python3[967]: Running scope as unit: disable_a07754bb-7e8d-4fda-8f3f-ee686d3905a7.scope | |
Jun 16 07:27:44 fv-az72-309 python3[967]: 2022-06-16T07:27:44.780765Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Uninstall extension [bin/custom-script-shim uninstall] | |
Jun 16 07:27:44 fv-az72-309 python3[967]: 2022-06-16T07:27:44.781186Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Executing command: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim uninstall with environment variables: {"AZURE_GUEST_AGENT_EXTENSION_PATH": "/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6", "AZURE_GUEST_AGENT_EXTENSION_VERSION": "2.1.6", "AZURE_GUEST_AGENT_WIRE_PROTOCOL_ADDRESS": "168.63.129.16", "ConfigSequenceNumber": "0", "AZURE_GUEST_AGENT_EXTENSION_SUPPORTED_FEATURES": "[{\"Key\": \"ExtensionTelemetryPipeline\", \"Value\": \"1.0\"}]"} | |
Jun 16 07:27:44 fv-az72-309 python3[967]: 2022-06-16T07:27:44.785031Z INFO ExtHandler ExtHandler Started extension in unit 'uninstall_dd9159ab-fa53-499c-9e0b-98a49c82d09f.scope' | |
Jun 16 07:27:44 fv-az72-309 systemd[1]: Started /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim uninstall. | |
Jun 16 07:27:44 fv-az72-309 systemd[1]: uninstall_dd9159ab-fa53-499c-9e0b-98a49c82d09f.scope: Succeeded. | |
Jun 16 07:27:46 fv-az72-309 python3[967]: 2022-06-16T07:27:46.788551Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Command: bin/custom-script-shim uninstall | |
Jun 16 07:27:46 fv-az72-309 python3[967]: [stdout] | |
Jun 16 07:27:46 fv-az72-309 python3[967]: + /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-extension uninstall | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 event=start | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 status="not reported for operation (by design)" | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 path=/var/lib/waagent/custom-script event="removing data dir" path=/var/lib/waagent/custom-script | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 path=/var/lib/waagent/custom-script event="removed data dir" | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 path=/var/lib/waagent/custom-script event=uninstalled | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 status="not reported for operation (by design)" | |
Jun 16 07:27:46 fv-az72-309 python3[967]: time=2022-06-16T07:27:44Z version=v2.1.6/git@fc181d8-dirty operation=uninstall seq=0 event=end | |
Jun 16 07:27:46 fv-az72-309 python3[967]: [stderr] | |
Jun 16 07:27:46 fv-az72-309 python3[967]: Running scope as unit: uninstall_dd9159ab-fa53-499c-9e0b-98a49c82d09f.scope | |
Jun 16 07:27:46 fv-az72-309 python3[967]: 2022-06-16T07:27:46.790365Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Remove extension handler directory: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6 | |
Jun 16 07:27:46 fv-az72-309 python3[967]: 2022-06-16T07:27:46.793715Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Remove the extension slice: Microsoft.Azure.Extensions.CustomScript-2.1.6 | |
Jun 16 07:27:46 fv-az72-309 python3[967]: 2022-06-16T07:27:46.794064Z INFO ExtHandler ExtHandler Stopped tracking cgroup Microsoft.Azure.Extensions.CustomScript-2.1.6 [/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript_2.1.6.slice] | |
Jun 16 07:27:46 fv-az72-309 python3[967]: 2022-06-16T07:27:46.795163Z INFO ExtHandler ExtHandler [CGI] Removed /lib/systemd/system/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript_2.1.6.slice | |
Jun 16 07:27:46 fv-az72-309 python3[967]: 2022-06-16T07:27:46.796633Z INFO ExtHandler ExtHandler ProcessExtensionsInGoalState completed [Incarnation: 3; 4055 ms; Activity Id: 4bc43ecc-4bc2-4426-b3c0-5bb2577170f3; Correlation Id: 0801e8a7-1dfe-4a0f-985a-00367c0d4ea1; GS Creation Time: 2022-06-16T07:27:32.251274Z] | |
Jun 16 07:28:03 fv-az72-309 kernel: hv_utils: Shutdown request received - graceful shutdown initiated | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found ordering cycle on mnt-swapfile.swap/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on swap.target/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on run-snapd-ns.mount/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on local-fs.target/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on systemd-tmpfiles-setup.service/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on systemd-resolved.service/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on network.target/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on network-online.target/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Found dependency on mnt.mount/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: mnt.mount: Job mnt-swapfile.swap/stop deleted to break ordering cycle starting with mnt.mount/stop | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Removed slice system-modprobe.slice. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Removed slice system-walinuxagent.extensions.slice. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Cloud-init target. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Graphical Interface. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Host and Network Name Lookups. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Timers. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: e2scrub_all.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Periodic ext4 Online Metadata Check for All Filesystems. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: fstrim.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Discard unused blocks once a week. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: logrotate.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Daily rotation of log files. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: man-db.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 ModemManager[728]: <info> caught signal, shutting down... | |
Jun 16 07:28:03 fv-az72-309 dockerd[1061]: time="2022-06-16T07:28:03.490671391Z" level=info msg="Processing signal 'terminated'" | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Daily man-db regeneration. | |
Jun 16 07:28:03 fv-az72-309 rsyslogd[689]: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="689" x-info="https://www.rsyslog.com"] exiting on signal 15. | |
Jun 16 07:28:03 fv-az72-309 snapd[695]: main.go:155: Exiting on terminated signal. | |
Jun 16 07:28:03 fv-az72-309 snapd[695]: overlord.go:504: Released state lock file | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: motd-news.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Message of the Day. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: phpsessionclean.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 ModemManager[728]: <info> ModemManager is shut down | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Clean PHP session files every 30 mins. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-tmpfiles-clean.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Daily Cleanup of Temporary Directories. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: ua-timer.timer: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Ubuntu Advantage Timer for running repeated jobs. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: lvm2-lvmpolld.socket: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Closed LVM2 poll daemon socket. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-rfkill.socket: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Closed Load/Save RF Kill Switch Status /dev/rfkill Watch. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Accounts Service... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Availability of block devices... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: cloud-final.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Execute cloud user/final scripts. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Multi-User System. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Login Prompts. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Modem Manager... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping LSB: automatic crash report generation... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Deferred execution scheduler... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Regular background program processing daemon... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping D-Bus System Message Bus... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Docker Application Container Engine... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: ephemeral-disk-warning.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Write warning to Azure ephemeral disk. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: cloud-config.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Apply the settings specified in cloud-config. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Cloud-config availability. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Create final runtime dir for shutdown pivot root... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Getty on tty1... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping irqbalance daemon... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping LSB: Mono XSP4... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Dispatcher daemon for systemd-networkd... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping PackageKit Daemon... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping The PHP 7.4 FastCGI Process Manager... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping The PHP 8.0 FastCGI Process Manager... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping The PHP 8.1 FastCGI Process Manager... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping System Logging Service... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Serial Getty on ttyS0... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: snapd.seeded.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Wait until snapd is fully seeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Snap Daemon... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Condition check resulted in Ubuntu core (all-snaps) system shutdown helper setup service being skipped. | |
Jun 16 07:28:03 fv-az72-309 blkdeactivate[3616]: Deactivating block devices: | |
Jun 16 07:28:03 fv-az72-309 python3[706]: 2022-06-16T07:28:03.533246Z INFO Daemon Agent WALinuxAgent-2.2.46 forwarding signal 15 to WALinuxAgent-2.7.1.0 | |
Jun 16 07:28:03 fv-az72-309 sshd[1163]: Received signal 15; terminating. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping LSB: Fast standalone full-text SQL search engine... | |
Jun 16 07:28:03 fv-az72-309 sphinxsearch[3630]: To enable sphinxsearch, edit /etc/default/sphinxsearch and set START=yes | |
Jun 16 07:28:03 fv-az72-309 php-fpm7.4[685]: aster process (/etc/php/7.4/fpm/php-fpm.conf): DIGEST-MD5 common mech free | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping OpenBSD Secure Shell server... | |
Jun 16 07:28:03 fv-az72-309 dockerd[1061]: time="2022-06-16T07:28:03.577094019Z" level=info msg="Daemon shutdown complete" | |
Jun 16 07:28:03 fv-az72-309 php-fpm8.1[687]: aster process (/etc/php/8.1/fpm/php-fpm.conf): DIGEST-MD5 common mech free | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: sysstat.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Resets System Activity Data Collector. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Login Service... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Disk Manager... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Azure Linux Agent... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: accounts-daemon.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Accounts Service. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: cron.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Regular background program processing daemon. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: irqbalance.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped irqbalance daemon. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: networkd-dispatcher.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Dispatcher daemon for systemd-networkd. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: rsyslog.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped System Logging Service. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: snapd.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Snap Daemon. | |
Jun 16 07:28:03 fv-az72-309 udisksd[705]: udisks daemon version 2.8.4 exiting | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: walinuxagent.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Azure Linux Agent. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: walinuxagent.service: Consumed 2.466s CPU time. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: atd.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Deferred execution scheduler. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: ModemManager.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Modem Manager. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: [email protected]: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Serial Getty on ttyS0. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: [email protected]: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Getty on tty1. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: docker.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Docker Application Container Engine. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: ssh.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped OpenBSD Secure Shell server. | |
Jun 16 07:28:03 fv-az72-309 php-fpm8.0[686]: aster process (/etc/php/8.0/fpm/php-fpm.conf): DIGEST-MD5 common mech free | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: packagekit.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped PackageKit Daemon. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: blk-availability.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Availability of block devices. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: dbus.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped D-Bus System Message Bus. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: sphinxsearch.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped LSB: Fast standalone full-text SQL search engine. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: php7.4-fpm.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped The PHP 7.4 FastCGI Process Manager. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-logind.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Login Service. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: php8.0-fpm.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped The PHP 8.0 FastCGI Process Manager. | |
Jun 16 07:28:03 fv-az72-309 mono-xsp4[3623]: * Stopping XSP 4.0 WebServer mono-xsp4 | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: udisks2.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Disk Manager. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: php8.1-fpm.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped The PHP 8.1 FastCGI Process Manager. | |
Jun 16 07:28:03 fv-az72-309 apport[3617]: * Stopping automatic crash report generation: apport | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Removed slice system-getty.slice. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Removed slice system-serial\x2dgetty.slice. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Network is Online. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target System Time Synchronized. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target System Time Set. | |
Jun 16 07:28:03 fv-az72-309 chronyd[676]: chronyd exiting | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping chrony, an NTP client/server... | |
Jun 16 07:28:03 fv-az72-309 containerd[709]: time="2022-06-16T07:28:03.720961156Z" level=info msg="Stop CRI service" | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping containerd container runtime... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Condition check resulted in Show Plymouth Power Off Screen being skipped. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Authorization Manager... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-networkd-wait-online.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Wait for Network to be Configured. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Permit User Sessions... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: containerd.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped containerd container runtime. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-user-sessions.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Permit User Sessions. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target User and Group Name Lookups. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: polkit.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Authorization Manager. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: chrony.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 apport[3617]: ...done. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped chrony, an NTP client/server. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: apport.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped LSB: automatic crash report generation. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Network. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Network Name Resolution... | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-resolved.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Network Name Resolution. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopping Network Service... | |
Jun 16 07:28:03 fv-az72-309 systemd-networkd[529]: eth0: DHCP lease lost | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-networkd.service: Succeeded. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped Network Service. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Requested transaction contradicts existing jobs: Transaction for systemd-networkd.service/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction). | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-networkd.socket: Failed to queue service startup job (Maybe the service file is missing or not a non-template unit?): Transaction for systemd-networkd.service/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction). | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: systemd-networkd.socket: Failed with result 'resources'. | |
Jun 16 07:28:03 fv-az72-309 systemd[1]: Stopped target Network (Pre). | |
Jun 16 07:28:04 fv-az72-309 mono-xsp4[3623]: ...done. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: mono-xsp4.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped LSB: Mono XSP4. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Basic System. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Paths. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: ua-license-check.path: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Trigger to poll for Ubuntu Pro licenses (Only enabled on GCP LTS non-pro). | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Remote File Systems. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Remote File Systems (Pre). | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Slices. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Removed slice User and Session Slice. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Sockets. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: cloud-init-hotplugd.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed cloud-init hotplug hook socket. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: dbus.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed D-Bus System Message Bus Socket. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: docker.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed Docker Socket for the API. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: iscsid.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed Open-iSCSI iscsid Socket. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: snap.lxd.daemon.unix.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed Socket unix for snap application lxd.daemon. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: snapd.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed Socket activation for snappy daemon. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: syslog.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed Syslog Socket. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: uuidd.socket: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Closed UUID daemon activation socket. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target System Initialization. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Local Encrypted Volumes. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-ask-password-console.path: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-ask-password-wall.path: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Forward Password Requests to Wall Directory Watch. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: cloud-init-local.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Initial cloud-init job (pre-networking). | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopping Entropy daemon using the HAVEGE algorithm... | |
Jun 16 07:28:04 fv-az72-309 haveged[469]: haveged: Stopping due to signal 15 | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopping Hyper-V KVP Protocol Daemon... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-sysctl.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Apply Kernel Variables. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-modules-load.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Load Kernel Modules. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopping Update UTMP about System Boot/Shutdown... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: haveged.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Entropy daemon using the HAVEGE algorithm. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopping Load/Save Random Seed... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: hv-kvp-daemon.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Hyper-V KVP Protocol Daemon. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-update-utmp.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Update UTMP about System Boot/Shutdown. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-tmpfiles-setup.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Create Volatile Files and Directories. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-random-seed.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Load/Save Random Seed. | |
Jun 16 07:28:04 fv-az72-309 systemd-tmpfiles[3814]: [/run/finalrd-libs.conf:9] Duplicate line for path "/run/initramfs/lib64", ignoring. | |
Jun 16 07:28:04 fv-az72-309 finalrd[3815]: run-parts: executing /usr/share/finalrd/mdadm.finalrd setup | |
Jun 16 07:28:04 fv-az72-309 finalrd[3815]: run-parts: executing /usr/share/finalrd/open-iscsi.finalrd setup | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: finalrd.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Create final runtime dir for shutdown pivot root. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Local File Systems. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounting /boot/efi... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounting /run/snapd/ns/lxd.mnt... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounting Mount unit for core20, revision 1518... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounting Mount unit for lxd, revision 22753... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounting Mount unit for snapd, revision 16010... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: boot-efi.mount: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounted /boot/efi. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: run-snapd-ns-lxd.mnt.mount: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounted /run/snapd/ns/lxd.mnt. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: snap-core20-1518.mount: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounted Mount unit for core20, revision 1518. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: snap-lxd-22753.mount: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounted Mount unit for lxd, revision 22753. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: snap-snapd-16010.mount: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounted Mount unit for snapd, revision 16010. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounting /run/snapd/ns... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-7167\x2d9500.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped File System Check on /dev/disk/by-uuid/7167-9500. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Removed slice system-systemd\x2dfsck.slice. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: run-snapd-ns.mount: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Unmounted /run/snapd/ns. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Local File Systems (Pre). | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped target Swap. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Reached target Unmount All Filesystems. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopping Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... | |
Jun 16 07:28:04 fv-az72-309 multipathd[397]: exit (signal) | |
Jun 16 07:28:04 fv-az72-309 multipathd[397]: --------shut down------- | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopping Device-Mapper Multipath Device Controller... | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-tmpfiles-setup-dev.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Create Static Device Nodes in /dev. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-sysusers.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Create System Users. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-remount-fs.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Remount Root and Kernel File Systems. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-fsck-root.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped File System Check on Root Device. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: multipathd.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Device-Mapper Multipath Device Controller. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: lvm2-monitor.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Stopped Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Reached target Shutdown. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Reached target Final Step. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: systemd-poweroff.service: Succeeded. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Finished Power-Off. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Reached target Power-Off. | |
Jun 16 07:28:04 fv-az72-309 systemd[1]: Shutting down. | |
Jun 16 07:28:04 fv-az72-309 systemd-shutdown[1]: Syncing filesystems and block devices. | |
Jun 16 07:28:05 fv-az72-309 systemd-shutdown[1]: Sending SIGTERM to remaining processes... | |
Jun 16 07:28:05 fv-az72-309 systemd-journald[189]: Journal stopped | |
-- Reboot -- | |
Jun 20 00:09:31 fv-az72-309 kernel: Linux version 5.13.0-1029-azure (buildd@lcy02-amd64-051) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #34~20.04.1-Ubuntu SMP Thu Jun 9 12:37:07 UTC 2022 (Ubuntu 5.13.0-1029.34~20.04.1-azure 5.13.19) | |
Jun 20 00:09:31 fv-az72-309 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 20 00:09:31 fv-az72-309 kernel: KERNEL supported cpus: | |
Jun 20 00:09:31 fv-az72-309 kernel: Intel GenuineIntel | |
Jun 20 00:09:31 fv-az72-309 kernel: AMD AuthenticAMD | |
Jun 20 00:09:31 fv-az72-309 kernel: Hygon HygonGenuine | |
Jun 20 00:09:31 fv-az72-309 kernel: Centaur CentaurHauls | |
Jun 20 00:09:31 fv-az72-309 kernel: zhaoxin Shanghai | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/fpu: Enabled xstate features 0xff, context size is 2560 bytes, using 'compacted' format. | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-provided physical RAM map: | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ffeffff] usable | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x000000003fff0000-0x000000003fffefff] ACPI data | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] ACPI NVS | |
Jun 20 00:09:31 fv-az72-309 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable | |
Jun 20 00:09:31 fv-az72-309 kernel: printk: bootconsole [earlyser0] enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: NX (Execute Disable) protection: active | |
Jun 20 00:09:31 fv-az72-309 kernel: SMBIOS 2.3 present. | |
Jun 20 00:09:31 fv-az72-309 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018 | |
Jun 20 00:09:31 fv-az72-309 kernel: Hypervisor detected: Microsoft Hyper-V | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3880b0, hints 0x60c2c, misc 0xed7b2 | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V Host Build:18362-10.0-3-0.3446 | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V: LAPIC Timer Frequency: 0xc3500 | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V: Using hypercall for remote TLB flush | |
Jun 20 00:09:31 fv-az72-309 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns | |
Jun 20 00:09:31 fv-az72-309 kernel: tsc: Marking TSC unstable due to running on Hyper-V | |
Jun 20 00:09:31 fv-az72-309 kernel: tsc: Detected 2095.196 MHz processor | |
Jun 20 00:09:31 fv-az72-309 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable | |
Jun 20 00:09:31 fv-az72-309 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000 | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT | |
Jun 20 00:09:31 fv-az72-309 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: last_pfn = 0x3fff0 max_arch_pfn = 0x400000000 | |
Jun 20 00:09:31 fv-az72-309 kernel: found SMP MP-table at [mem 0x000ff780-0x000ff78f] | |
Jun 20 00:09:31 fv-az72-309 kernel: Using GB pages for direct mapping | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Early table checksum verification disabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: RSDP 0x00000000000F5C00 000014 (v00 ACPIAM) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: RSDT 0x000000003FFF0000 000040 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: FACP 0x000000003FFF0200 000081 (v02 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: DSDT 0x000000003FFF1D24 003CD5 (v01 MSFTVM MSFTVM02 00000002 INTL 02002026) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: FACS 0x000000003FFFF000 000040 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: WAET 0x000000003FFF1A80 000028 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SLIC 0x000000003FFF1AC0 000176 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: OEM0 0x000000003FFF1CC0 000064 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT 0x000000003FFF0800 000140 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: APIC 0x000000003FFF0300 000062 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: OEMB 0x000000003FFFF040 000064 (v01 VRTUAL MICROSFT 12001807 MSFT 00000097) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff0200-0x3fff0280] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving DSDT table memory at [mem 0x3fff1d24-0x3fff59f8] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving FACS table memory at [mem 0x3ffff000-0x3ffff03f] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff1a80-0x3fff1aa7] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving SLIC table memory at [mem 0x3fff1ac0-0x3fff1c35] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff1cc0-0x3fff1d23] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving SRAT table memory at [mem 0x3fff0800-0x3fff093f] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving APIC table memory at [mem 0x3fff0300-0x3fff0361] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Reserving OEMB table memory at [mem 0x3ffff040-0x3ffff0a3] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Local APIC address 0xfee00000 | |
Jun 20 00:09:31 fv-az72-309 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x27fffffff] hotplug | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x280200000-0xfdfffffff] hotplug | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000200000-0x1ffffffffff] hotplug | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000200000-0x3ffffffffff] hotplug | |
Jun 20 00:09:31 fv-az72-309 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x27fffffff] -> [mem 0x00000000-0x27fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: NODE_DATA(0) allocated [mem 0x27ffd6000-0x27fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: Zone ranges: | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: Normal [mem 0x0000000100000000-0x000000027fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: Device empty | |
Jun 20 00:09:31 fv-az72-309 kernel: Movable zone start for each node | |
Jun 20 00:09:31 fv-az72-309 kernel: Early memory node ranges | |
Jun 20 00:09:31 fv-az72-309 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] | |
Jun 20 00:09:31 fv-az72-309 kernel: node 0: [mem 0x0000000000100000-0x000000003ffeffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: node 0: [mem 0x0000000100000000-0x000000027fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: On node 0 totalpages: 1834894 | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA zone: 64 pages used for memmap | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA zone: 158 pages reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA zone: 3998 pages, LIFO batch:0 | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA32 zone: 4032 pages used for memmap | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA32 zone: 258032 pages, LIFO batch:63 | |
Jun 20 00:09:31 fv-az72-309 kernel: Normal zone: 24576 pages used for memmap | |
Jun 20 00:09:31 fv-az72-309 kernel: Normal zone: 1572864 pages, LIFO batch:63 | |
Jun 20 00:09:31 fv-az72-309 kernel: On node 0, zone DMA: 1 pages in unavailable ranges | |
Jun 20 00:09:31 fv-az72-309 kernel: On node 0, zone DMA: 97 pages in unavailable ranges | |
Jun 20 00:09:31 fv-az72-309 kernel: On node 0, zone Normal: 16 pages in unavailable ranges | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PM-Timer IO Port: 0x408 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Local APIC address 0xfee00000 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) | |
Jun 20 00:09:31 fv-az72-309 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: IRQ0 used by override. | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: IRQ9 used by override. | |
Jun 20 00:09:31 fv-az72-309 kernel: Using ACPI (MADT) for SMP configuration information | |
Jun 20 00:09:31 fv-az72-309 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000dffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x000e0000-0x000fffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x3fff0000-0x3fffefff] | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x3ffff000-0x3fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: hibernation: Registered nosave memory: [mem 0x40000000-0xffffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: [mem 0x40000000-0xffffffff] available for PCI devices | |
Jun 20 00:09:31 fv-az72-309 kernel: Booting paravirtualized kernel on Hyper-V | |
Jun 20 00:09:31 fv-az72-309 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns | |
Jun 20 00:09:31 fv-az72-309 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 | |
Jun 20 00:09:31 fv-az72-309 kernel: percpu: Embedded 63 pages/cpu s221184 r8192 d28672 u1048576 | |
Jun 20 00:09:31 fv-az72-309 kernel: pcpu-alloc: s221184 r8192 d28672 u1048576 alloc=1*2097152 | |
Jun 20 00:09:31 fv-az72-309 kernel: pcpu-alloc: [0] 0 1 | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V: PV spinlocks enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1806064 | |
Jun 20 00:09:31 fv-az72-309 kernel: Policy zone: Normal | |
Jun 20 00:09:31 fv-az72-309 kernel: Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.13.0-1029-azure root=PARTUUID=2b1f5b8e-4041-4065-af1c-792f94a6d205 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 panic=-1 | |
Jun 20 00:09:31 fv-az72-309 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: mem auto-init: stack:off, heap alloc:on, heap free:off | |
Jun 20 00:09:31 fv-az72-309 kernel: Memory: 7105684K/7339576K available (14346K kernel code, 3432K rwdata, 9780K rodata, 2608K init, 6104K bss, 233632K reserved, 0K cma-reserved) | |
Jun 20 00:09:31 fv-az72-309 kernel: random: get_random_u64 called from __kmem_cache_create+0x2d/0x440 with crng_init=0 | |
Jun 20 00:09:31 fv-az72-309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 | |
Jun 20 00:09:31 fv-az72-309 kernel: Kernel/User page tables isolation: enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ftrace: allocating 46678 entries in 183 pages | |
Jun 20 00:09:31 fv-az72-309 kernel: ftrace: allocated 183 pages with 6 groups | |
Jun 20 00:09:31 fv-az72-309 kernel: rcu: Hierarchical RCU implementation. | |
Jun 20 00:09:31 fv-az72-309 kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=2. | |
Jun 20 00:09:31 fv-az72-309 kernel: Rude variant of Tasks RCU enabled. | |
Jun 20 00:09:31 fv-az72-309 kernel: Tracing variant of Tasks RCU enabled. | |
Jun 20 00:09:31 fv-az72-309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies. | |
Jun 20 00:09:31 fv-az72-309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 | |
Jun 20 00:09:31 fv-az72-309 kernel: NR_IRQS: 524544, nr_irqs: 440, preallocated irqs: 16 | |
Jun 20 00:09:31 fv-az72-309 kernel: random: crng done (trusting CPU's manufacturer) | |
Jun 20 00:09:31 fv-az72-309 kernel: Console: colour VGA+ 80x25 | |
Jun 20 00:09:31 fv-az72-309 kernel: printk: console [tty1] enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: printk: console [ttyS0] enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: printk: bootconsole [earlyser0] disabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Core revision 20210331 | |
Jun 20 00:09:31 fv-az72-309 kernel: APIC: Switch to symmetric I/O mode setup | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V: Using IPI hypercalls | |
Jun 20 00:09:31 fv-az72-309 kernel: Hyper-V: Using enlightened APIC (xapic mode) | |
Jun 20 00:09:31 fv-az72-309 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 | |
Jun 20 00:09:31 fv-az72-309 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4190.39 BogoMIPS (lpj=8380784) | |
Jun 20 00:09:31 fv-az72-309 kernel: pid_max: default: 32768 minimum: 301 | |
Jun 20 00:09:31 fv-az72-309 kernel: LSM: Security Framework initializing | |
Jun 20 00:09:31 fv-az72-309 kernel: Yama: becoming mindful. | |
Jun 20 00:09:31 fv-az72-309 kernel: AppArmor: AppArmor initialized | |
Jun 20 00:09:31 fv-az72-309 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 | |
Jun 20 00:09:31 fv-az72-309 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 | |
Jun 20 00:09:31 fv-az72-309 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization | |
Jun 20 00:09:31 fv-az72-309 kernel: Spectre V2 : Mitigation: Retpolines | |
Jun 20 00:09:31 fv-az72-309 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch | |
Jun 20 00:09:31 fv-az72-309 kernel: Speculative Store Bypass: Vulnerable | |
Jun 20 00:09:31 fv-az72-309 kernel: TAA: Mitigation: Clear CPU buffers | |
Jun 20 00:09:31 fv-az72-309 kernel: MDS: Mitigation: Clear CPU buffers | |
Jun 20 00:09:31 fv-az72-309 kernel: Freeing SMP alternatives memory: 40K | |
Jun 20 00:09:31 fv-az72-309 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x4) | |
Jun 20 00:09:31 fv-az72-309 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. | |
Jun 20 00:09:31 fv-az72-309 kernel: rcu: Hierarchical SRCU implementation. | |
Jun 20 00:09:31 fv-az72-309 kernel: NMI watchdog: Perf NMI watchdog permanently disabled | |
Jun 20 00:09:31 fv-az72-309 kernel: smp: Bringing up secondary CPUs ... | |
Jun 20 00:09:31 fv-az72-309 kernel: x86: Booting SMP configuration: | |
Jun 20 00:09:31 fv-az72-309 kernel: .... node #0, CPUs: #1 | |
Jun 20 00:09:31 fv-az72-309 kernel: smp: Brought up 1 node, 2 CPUs | |
Jun 20 00:09:31 fv-az72-309 kernel: smpboot: Max logical packages: 1 | |
Jun 20 00:09:31 fv-az72-309 kernel: smpboot: Total of 2 processors activated (8380.78 BogoMIPS) | |
Jun 20 00:09:31 fv-az72-309 kernel: devtmpfs: initialized | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/mm: Memory block size: 128MB | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: Registering ACPI NVS region [mem 0x3ffff000-0x3fffffff] (4096 bytes) | |
Jun 20 00:09:31 fv-az72-309 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns | |
Jun 20 00:09:31 fv-az72-309 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: pinctrl core: initialized pinctrl subsystem | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: RTC time: 00:09:29, date: 2022-06-20 | |
Jun 20 00:09:31 fv-az72-309 kernel: NET: Registered protocol family 16 | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations | |
Jun 20 00:09:31 fv-az72-309 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations | |
Jun 20 00:09:31 fv-az72-309 kernel: audit: initializing netlink subsys (disabled) | |
Jun 20 00:09:31 fv-az72-309 kernel: audit: type=2000 audit(1655683769.152:1): state=initialized audit_enabled=0 res=1 | |
Jun 20 00:09:31 fv-az72-309 kernel: thermal_sys: Registered thermal governor 'fair_share' | |
Jun 20 00:09:31 fv-az72-309 kernel: thermal_sys: Registered thermal governor 'bang_bang' | |
Jun 20 00:09:31 fv-az72-309 kernel: thermal_sys: Registered thermal governor 'step_wise' | |
Jun 20 00:09:31 fv-az72-309 kernel: thermal_sys: Registered thermal governor 'user_space' | |
Jun 20 00:09:31 fv-az72-309 kernel: thermal_sys: Registered thermal governor 'power_allocator' | |
Jun 20 00:09:31 fv-az72-309 kernel: EISA bus registered | |
Jun 20 00:09:31 fv-az72-309 kernel: cpuidle: using governor ladder | |
Jun 20 00:09:31 fv-az72-309 kernel: cpuidle: using governor menu | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: bus type PCI registered | |
Jun 20 00:09:31 fv-az72-309 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI: Using configuration type 1 for base access | |
Jun 20 00:09:31 fv-az72-309 kernel: Kprobes globally optimized | |
Jun 20 00:09:31 fv-az72-309 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages | |
Jun 20 00:09:31 fv-az72-309 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(Module Device) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(Processor Device) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(Processor Aggregator Device) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(Linux-Dell-Video) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Interpreter enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: (supports S0 S5) | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Using IOAPIC for interrupt routing | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) | |
Jun 20 00:09:31 fv-az72-309 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] | |
Jun 20 00:09:31 fv-az72-309 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI host bridge to bus 0000:00 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: root bus resource [mem 0x40000000-0xfffbffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: root bus resource [mem 0xfe0000000-0xfffffffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:00.0: [8086:7192] type 00 class 0x060000 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x010180 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.1: reg 0x20: [io 0xffa0-0xffaf] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] | |
Jun 20 00:09:31 fv-az72-309 kernel: * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, | |
* this clock source is slow. Consider trying other clock sources | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.3: acpi_pm_check_blacklist+0x0/0x20 took 11718 usecs | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:07.3: quirk: [io 0x0400-0x043f] claimed by PIIX4 ACPI | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:08.0: [1414:5353] type 00 class 0x030000 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:08.0: reg 0x10: [mem 0xf8000000-0xfbffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:08.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 11 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKB disabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKC disabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: PCI: Interrupt link LNKD disabled | |
Jun 20 00:09:31 fv-az72-309 kernel: iommu: Default domain type: Translated | |
Jun 20 00:09:31 fv-az72-309 kernel: SCSI subsystem initialized | |
Jun 20 00:09:31 fv-az72-309 kernel: libata version 3.00 loaded. | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:08.0: vgaarb: setting as boot VGA device | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:08.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:08.0: vgaarb: bridge control possible | |
Jun 20 00:09:31 fv-az72-309 kernel: vgaarb: loaded | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: bus type USB registered | |
Jun 20 00:09:31 fv-az72-309 kernel: usbcore: registered new interface driver usbfs | |
Jun 20 00:09:31 fv-az72-309 kernel: usbcore: registered new interface driver hub | |
Jun 20 00:09:31 fv-az72-309 kernel: usbcore: registered new device driver usb | |
Jun 20 00:09:31 fv-az72-309 kernel: pps_core: LinuxPPS API ver. 1 registered | |
Jun 20 00:09:31 fv-az72-309 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <[email protected]> | |
Jun 20 00:09:31 fv-az72-309 kernel: PTP clock support registered | |
Jun 20 00:09:31 fv-az72-309 kernel: EDAC MC: Ver: 3.0.0 | |
Jun 20 00:09:31 fv-az72-309 kernel: hv_vmbus: Vmbus version:4.0 | |
Jun 20 00:09:31 fv-az72-309 kernel: hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60 | |
Jun 20 00:09:31 fv-az72-309 kernel: NetLabel: Initializing | |
Jun 20 00:09:31 fv-az72-309 kernel: NetLabel: domain hash size = 128 | |
Jun 20 00:09:31 fv-az72-309 kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO | |
Jun 20 00:09:31 fv-az72-309 kernel: NetLabel: unlabeled traffic allowed by default | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI: Using ACPI for IRQ routing | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI: pci_cache_line_size set to 64 bytes | |
Jun 20 00:09:31 fv-az72-309 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: e820: reserve RAM buffer [mem 0x3fff0000-0x3fffffff] | |
Jun 20 00:09:31 fv-az72-309 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page | |
Jun 20 00:09:31 fv-az72-309 kernel: VFS: Disk quotas dquot_6.6.0 | |
Jun 20 00:09:31 fv-az72-309 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) | |
Jun 20 00:09:31 fv-az72-309 kernel: AppArmor: AppArmor Filesystem Enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp: PnP ACPI init | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:01: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:02: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:03: [dma 0 disabled] | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:04: [dma 0 disabled] | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:05: [dma 2] | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp 00:05: Plug and Play ACPI device, IDs PNP0700 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x01e0-0x01ef] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x0160-0x016f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x0278-0x027f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x0378-0x037f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x0678-0x067f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x0778-0x077f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: [io 0x04d0-0x04d1] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:07: [io 0x0400-0x043f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:07: [io 0x0370-0x0371] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:07: [io 0x0440-0x044f] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:07: [mem 0xfec00000-0xfec00fff] could not be reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:07: [mem 0xfee00000-0xfee00fff] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:08: [mem 0x00000000-0x0009ffff] could not be reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:08: [mem 0x000c0000-0x000dffff] could not be reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:08: [mem 0x000e0000-0x000fffff] could not be reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:08: [mem 0x00100000-0x3fffffff] could not be reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:08: [mem 0xfffc0000-0xffffffff] has been reserved | |
Jun 20 00:09:31 fv-az72-309 kernel: system 00:08: Plug and Play ACPI device, IDs PNP0c01 (active) | |
Jun 20 00:09:31 fv-az72-309 kernel: pnp: PnP ACPI: found 9 devices | |
Jun 20 00:09:31 fv-az72-309 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns | |
Jun 20 00:09:31 fv-az72-309 kernel: NET: Registered protocol family 2 | |
Jun 20 00:09:31 fv-az72-309 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: TCP: Hash tables configured (established 65536 bind 65536) | |
Jun 20 00:09:31 fv-az72-309 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) | |
Jun 20 00:09:31 fv-az72-309 kernel: NET: Registered protocol family 1 | |
Jun 20 00:09:31 fv-az72-309 kernel: NET: Registered protocol family 44 | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: resource 7 [mem 0x40000000-0xfffbffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci_bus 0000:00: resource 8 [mem 0xfe0000000-0xfffffffff window] | |
Jun 20 00:09:31 fv-az72-309 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI: CLS 0 bytes, default 64 | |
Jun 20 00:09:31 fv-az72-309 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) | |
Jun 20 00:09:31 fv-az72-309 kernel: software IO TLB: mapped [mem 0x000000003bff0000-0x000000003fff0000] (64MB) | |
Jun 20 00:09:31 fv-az72-309 kernel: Initialise system trusted keyrings | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type blacklist registered | |
Jun 20 00:09:31 fv-az72-309 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0 | |
Jun 20 00:09:31 fv-az72-309 kernel: zbud: loaded | |
Jun 20 00:09:31 fv-az72-309 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher | |
Jun 20 00:09:31 fv-az72-309 kernel: fuse: init (API version 7.34) | |
Jun 20 00:09:31 fv-az72-309 kernel: integrity: Platform Keyring initialized | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type asymmetric registered | |
Jun 20 00:09:31 fv-az72-309 kernel: Asymmetric key parser 'x509' registered | |
Jun 20 00:09:31 fv-az72-309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 244) | |
Jun 20 00:09:31 fv-az72-309 kernel: io scheduler mq-deadline registered | |
Jun 20 00:09:31 fv-az72-309 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 | |
Jun 20 00:09:31 fv-az72-309 kernel: hv_vmbus: registering driver hv_pci | |
Jun 20 00:09:31 fv-az72-309 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 | |
Jun 20 00:09:31 fv-az72-309 kernel: ACPI: button: Power Button [PWRF] | |
Jun 20 00:09:31 fv-az72-309 kernel: Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A | |
Jun 20 00:09:31 fv-az72-309 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A | |
Jun 20 00:09:31 fv-az72-309 kernel: Linux agpgart interface v0.103 | |
Jun 20 00:09:31 fv-az72-309 kernel: loop: module loaded | |
Jun 20 00:09:31 fv-az72-309 kernel: hv_vmbus: registering driver hv_storvsc | |
Jun 20 00:09:31 fv-az72-309 kernel: ata_piix 0000:00:07.1: version 2.13 | |
Jun 20 00:09:31 fv-az72-309 kernel: ata_piix 0000:00:07.1: Hyper-V Virtual Machine detected, ATA device ignore set | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi host0: storvsc_host_t | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi host3: storvsc_host_t | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi host4: ata_piix | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi host2: storvsc_host_t | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi host1: storvsc_host_t | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi host5: ata_piix | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi: waiting for bus probes to complete ... | |
Jun 20 00:09:31 fv-az72-309 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14 | |
Jun 20 00:09:31 fv-az72-309 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: tun: Universal TUN/TAP device driver, 1.6 | |
Jun 20 00:09:31 fv-az72-309 kernel: PPP generic driver version 2.4.2 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: [sda] 180355072 512-byte logical blocks: (92.3 GB/86.0 GiB) | |
Jun 20 00:09:31 fv-az72-309 kernel: VFIO - User Level meta-driver version: 0.3 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks | |
Jun 20 00:09:31 fv-az72-309 kernel: i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: [sda] Write Protect is off | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA | |
Jun 20 00:09:31 fv-az72-309 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 | |
Jun 20 00:09:31 fv-az72-309 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 | |
Jun 20 00:09:31 fv-az72-309 kernel: scsi 1:0:1:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 | |
Jun 20 00:09:31 fv-az72-309 kernel: mousedev: PS/2 mouse device common for all mice | |
Jun 20 00:09:31 fv-az72-309 kernel: sda: sda1 sda14 sda15 | |
Jun 20 00:09:31 fv-az72-309 kernel: rtc_cmos 00:00: RTC can wake from S4 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 1:0:1:0: Attached scsi generic sg1 type 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: rtc_cmos 00:00: registered as rtc0 | |
Jun 20 00:09:31 fv-az72-309 kernel: rtc_cmos 00:00: setting system clock to 2022-06-20T00:09:30 UTC (1655683770) | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 1:0:1:0: [sdb] 29360128 512-byte logical blocks: (15.0 GB/14.0 GiB) | |
Jun 20 00:09:31 fv-az72-309 kernel: rtc_cmos 00:00: alarms up to one month, 114 bytes nvram | |
Jun 20 00:09:31 fv-az72-309 kernel: device-mapper: uevent: version 1.0.3 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 1:0:1:0: [sdb] Write Protect is off | |
Jun 20 00:09:31 fv-az72-309 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 1:0:1:0: [sdb] Mode Sense: 0f 00 10 00 | |
Jun 20 00:09:31 fv-az72-309 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: [email protected] | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 0:0:0:0: [sda] Attached SCSI disk | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Probing EISA bus 0 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: EISA: Cannot allocate resource for mainboard | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 1:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 1 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 2 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 3 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 4 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 5 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 6 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 7 | |
Jun 20 00:09:31 fv-az72-309 kernel: ata1.01: host indicates ignore ATA devices, ignored | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: Cannot allocate resource for EISA slot 8 | |
Jun 20 00:09:31 fv-az72-309 kernel: platform eisa.0: EISA: Detected 0 cards | |
Jun 20 00:09:31 fv-az72-309 kernel: intel_pstate: CPU model not supported | |
Jun 20 00:09:31 fv-az72-309 kernel: drop_monitor: Initializing network drop monitor service | |
Jun 20 00:09:31 fv-az72-309 kernel: ata1.00: host indicates ignore ATA devices, ignored | |
Jun 20 00:09:31 fv-az72-309 kernel: NET: Registered protocol family 10 | |
Jun 20 00:09:31 fv-az72-309 kernel: Segment Routing with IPv6 | |
Jun 20 00:09:31 fv-az72-309 kernel: sdb: sdb1 | |
Jun 20 00:09:31 fv-az72-309 kernel: NET: Registered protocol family 17 | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type dns_resolver registered | |
Jun 20 00:09:31 fv-az72-309 kernel: No MBM correction factor available | |
Jun 20 00:09:31 fv-az72-309 kernel: IPI shorthand broadcast: enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: sched_clock: Marking stable (1918415800, 133997700)->(2135540900, -83127400) | |
Jun 20 00:09:31 fv-az72-309 kernel: registered taskstats version 1 | |
Jun 20 00:09:31 fv-az72-309 kernel: Loading compiled-in X.509 certificates | |
Jun 20 00:09:31 fv-az72-309 kernel: Loaded X.509 cert 'Build time autogenerated kernel key: 07f5640c3d7bf043074dc27d0b5799302e473486' | |
Jun 20 00:09:31 fv-az72-309 kernel: Loaded X.509 cert 'Canonical Ltd. Live Patch Signing: 14df34d1a87cf37625abec039ef2bf521249b969' | |
Jun 20 00:09:31 fv-az72-309 kernel: Loaded X.509 cert 'Canonical Ltd. Kernel Module Signing: 88f752e560a1e0737e31163a466ad7b70a850c19' | |
Jun 20 00:09:31 fv-az72-309 kernel: blacklist: Loading compiled-in revocation X.509 certificates | |
Jun 20 00:09:31 fv-az72-309 kernel: Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing: 61482aa2830d0ab2ad5af10b7250da9033ddcef0' | |
Jun 20 00:09:31 fv-az72-309 kernel: zswap: loaded using pool lzo/zbud | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type ._fscrypt registered | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type .fscrypt registered | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type fscrypt-provisioning registered | |
Jun 20 00:09:31 fv-az72-309 kernel: sd 1:0:1:0: [sdb] Attached SCSI disk | |
Jun 20 00:09:31 fv-az72-309 kernel: Key type encrypted registered | |
Jun 20 00:09:31 fv-az72-309 kernel: AppArmor: AppArmor sha1 policy hashing enabled | |
Jun 20 00:09:31 fv-az72-309 kernel: ima: No TPM chip found, activating TPM-bypass! | |
Jun 20 00:09:31 fv-az72-309 kernel: Loading compiled-in module X.509 certificates | |
Jun 20 00:09:31 fv-az72-309 kernel: Loaded X.509 cert 'Build time autogenerated kernel key: 07f5640c3d7bf043074dc27d0b5799302e473486' | |
Jun 20 00:09:31 fv-az72-309 kernel: ima: Allocated hash algorithm: sha1 | |
Jun 20 00:09:31 fv-az72-309 kernel: ima: No architecture policies found | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: Initialising EVM extended attributes: | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.selinux | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.SMACK64 | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.SMACK64EXEC | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.SMACK64TRANSMUTE | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.SMACK64MMAP | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.apparmor | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.ima | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: security.capability | |
Jun 20 00:09:31 fv-az72-309 kernel: evm: HMAC attrs: 0x1 | |
Jun 20 00:09:31 fv-az72-309 kernel: PM: Magic number: 10:116:153 | |
Jun 20 00:09:31 fv-az72-309 kernel: RAS: Correctable Errors collector initialized. | |
Jun 20 00:09:31 fv-az72-309 kernel: md: Waiting for all devices to be available before autodetect | |
Jun 20 00:09:31 fv-az72-309 kernel: md: If you don't use raid, use raid=noautodetect | |
Jun 20 00:09:31 fv-az72-309 kernel: md: Autodetecting RAID arrays. | |
Jun 20 00:09:31 fv-az72-309 kernel: md: autorun ... | |
Jun 20 00:09:31 fv-az72-309 kernel: md: ... autorun DONE. | |
Jun 20 00:09:31 fv-az72-309 kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. | |
Jun 20 00:09:31 fv-az72-309 kernel: VFS: Mounted root (ext4 filesystem) readonly on device 8:1. | |
Jun 20 00:09:31 fv-az72-309 kernel: devtmpfs: mounted | |
Jun 20 00:09:31 fv-az72-309 kernel: Freeing unused decrypted memory: 2036K | |
Jun 20 00:09:31 fv-az72-309 kernel: Freeing unused kernel image (initmem) memory: 2608K | |
Jun 20 00:09:31 fv-az72-309 kernel: Write protecting the kernel read-only data: 26624k | |
Jun 20 00:09:31 fv-az72-309 kernel: Freeing unused kernel image (text/rodata gap) memory: 2036K | |
Jun 20 00:09:31 fv-az72-309 kernel: Freeing unused kernel image (rodata/data gap) memory: 460K | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found. | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/mm: Checking user space page tables | |
Jun 20 00:09:31 fv-az72-309 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found. | |
Jun 20 00:09:31 fv-az72-309 kernel: Run /sbin/init as init process | |
Jun 20 00:09:31 fv-az72-309 kernel: with arguments: | |
Jun 20 00:09:31 fv-az72-309 kernel: /sbin/init | |
Jun 20 00:09:31 fv-az72-309 kernel: with environment: | |
Jun 20 00:09:31 fv-az72-309 kernel: HOME=/ | |
Jun 20 00:09:31 fv-az72-309 kernel: TERM=linux | |
Jun 20 00:09:31 fv-az72-309 kernel: BOOT_IMAGE=/boot/vmlinuz-5.13.0-1029-azure | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Inserted module 'autofs4' | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: systemd 245.4-4ubuntu3.17 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid) | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Detected virtualization microsoft. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Detected architecture x86-64. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Set hostname to <fv-az72-309>. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Unnecessary job for /sys/devices/virtual/misc/vmbus!hv_fcopy was removed. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Unnecessary job for /sys/devices/virtual/misc/vmbus!hv_vss was removed. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Created slice Slice for Azure VM Agent and Extensions. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Created slice system-modprobe.slice. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Created slice system-serial\x2dgetty.slice. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Created slice system-systemd\x2dfsck.slice. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Created slice User and Session Slice. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Started Forward Password Requests to Wall Directory Watch. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Reached target User and Group Name Lookups. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Reached target Slices. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Reached target Swap. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Reached target System Time Set. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on Device-mapper event daemon FIFOs. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on LVM2 poll daemon socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on multipathd control socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on Syslog Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on fsck to fsckd communication Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on initctl Compatibility Named Pipe. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on Journal Audit Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on Journal Socket (/dev/log). | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on Journal Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on Network Service Netlink Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on udev Control Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Listening on udev Kernel Socket. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounting Huge Pages File System... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounting POSIX Message Queue File System... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounting Kernel Debug File System... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounting Kernel Trace File System... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Journal Service... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Set the console keyboard layout... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Create list of static device nodes for the current kernel... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Load Kernel Module drm... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting File System Check on Root Device... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Load Kernel Modules... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting udev Coldplug all Devices... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Uncomplicated firewall... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Setup network rules for WALinuxAgent... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounted Huge Pages File System. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounted POSIX Message Queue File System. | |
Jun 20 00:09:31 fv-az72-309 systemd-journald[189]: Journal started | |
Jun 20 00:09:31 fv-az72-309 systemd-journald[189]: Runtime Journal (/run/log/journal/3d8d945fc71147a483fa20cb6792de9d) is 8.0M, max 69.4M, 61.4M free. | |
Jun 20 00:09:31 fv-az72-309 kernel: IPMI message handler: version 39.2 | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounted Kernel Debug File System. | |
Jun 20 00:09:31 fv-az72-309 systemd-modules-load[201]: Inserted module 'msr' | |
Jun 20 00:09:31 fv-az72-309 systemd-fsck[202]: cloudimg-rootfs: clean, 1072970/11096064 files, 14540324/22515963 blocks | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Started Journal Service. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounted Kernel Trace File System. | |
Jun 20 00:09:31 fv-az72-309 systemd-modules-load[201]: Inserted module 'ipmi_devintf' | |
Jun 20 00:09:31 fv-az72-309 kernel: ipmi device interface | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Create list of static device nodes for the current kernel. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: [email protected]: Succeeded. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Load Kernel Module drm. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished File System Check on Root Device. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Load Kernel Modules. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Uncomplicated firewall. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounting FUSE Control File System... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounting Kernel Configuration File System... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Started File System Check Daemon to report status. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Remount Root and Kernel File Systems... | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Apply Kernel Variables... | |
Jun 20 00:09:31 fv-az72-309 systemd-sysctl[214]: Not setting net/ipv4/conf/all/promote_secondaries (explicit setting exists). | |
Jun 20 00:09:31 fv-az72-309 systemd-sysctl[214]: Not setting net/ipv4/conf/default/promote_secondaries (explicit setting exists). | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Set the console keyboard layout. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounted FUSE Control File System. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Mounted Kernel Configuration File System. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Apply Kernel Variables. | |
Jun 20 00:09:31 fv-az72-309 kernel: EXT4-fs (sda1): re-mounted. Opts: discard. Quota mode: none. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Finished Remount Root and Kernel File Systems. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped. | |
Jun 20 00:09:31 fv-az72-309 systemd[1]: Starting Flush Journal to Persistent Storage... | |
Jun 20 00:09:32 fv-az72-309 systemd-journald[189]: Time spent on flushing to /var/log/journal/3d8d945fc71147a483fa20cb6792de9d is 35.886ms for 559 entries. | |
Jun 20 00:09:32 fv-az72-309 systemd-journald[189]: System Journal (/var/log/journal/3d8d945fc71147a483fa20cb6792de9d) is 8.0M, max 4.0G, 3.9G free. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Condition check resulted in Platform Persistent Storage Archival being skipped. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Starting Load/Save Random Seed... | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Starting Create System Users... | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Finished udev Coldplug all Devices. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Starting udev Wait for Complete Device Initialization... | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Finished Load/Save Random Seed. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Finished Create System Users. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Starting Create Static Device Nodes in /dev... | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Finished Create Static Device Nodes in /dev. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Starting udev Kernel Device Manager... | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Finished Flush Journal to Persistent Storage. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Started udev Kernel Device Manager. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Condition check resulted in Show Plymouth Boot Screen being skipped. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Started Dispatch Password Requests to Console Directory Watch. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Condition check resulted in Forward Password Requests to Plymouth Directory Watch being skipped. | |
Jun 20 00:09:32 fv-az72-309 systemd[1]: Reached target Local Encrypted Volumes. | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_vmbus: registering driver hyperv_fb | |
Jun 20 00:09:32 fv-az72-309 kernel: hyperv_fb: Synthvid Version major 3, minor 5 | |
Jun 20 00:09:32 fv-az72-309 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 | |
Jun 20 00:09:32 fv-az72-309 kernel: hyperv_fb: Unable to allocate enough contiguous physical memory on Gen 1 VM. Using MMIO instead. | |
Jun 20 00:09:32 fv-az72-309 kernel: Console: switching to colour frame buffer device 128x48 | |
Jun 20 00:09:32 fv-az72-309 kernel: hid: raw HID events driver (C) Jiri Kosina | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_vmbus: registering driver hyperv_keyboard | |
Jun 20 00:09:32 fv-az72-309 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:07/VMBUS:01/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio2/input/input3 | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_vmbus: registering driver hid_hyperv | |
Jun 20 00:09:32 fv-az72-309 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input4 | |
Jun 20 00:09:32 fv-az72-309 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_vmbus: registering driver hv_balloon | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_utils: Registering HyperV Utility Driver | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_vmbus: registering driver hv_utils | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_utils: Heartbeat IC version 3.0 | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_utils: TimeSync IC version 4.0 | |
Jun 20 00:09:32 fv-az72-309 kernel: hv_utils: Shutdown IC version 3.2 | |
Jun 20 00:09:35 fv-az72-309 kernel: hv_vmbus: registering driver hv_netvsc | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Found device /dev/ttyS0. | |
Jun 20 00:09:35 fv-az72-309 kernel: cryptd: max_cpu_qlen set to 1000 | |
Jun 20 00:09:35 fv-az72-309 kernel: AVX2 version of gcm_enc/dec engaged. | |
Jun 20 00:09:35 fv-az72-309 kernel: AES CTR mode by8 optimization enabled | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch. | |
Jun 20 00:09:35 fv-az72-309 systemd-udevd[225]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:09:35 fv-az72-309 udevadm[221]: systemd-udev-settle.service is deprecated. | |
Jun 20 00:09:35 fv-az72-309 kernel: bpfilter: Loaded bpfilter_umh pid 297 | |
Jun 20 00:09:35 fv-az72-309 unknown: Started bpfilter | |
Jun 20 00:09:35 fv-az72-309 systemd-udevd[227]: Using default interface naming scheme 'v245'. | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Found device /sys/devices/virtual/misc/vmbus!hv_kvp. | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Started Hyper-V KVP Protocol Daemon. | |
Jun 20 00:09:35 fv-az72-309 python3[217]: Setting up firewall for the WALinux Agent with args: {'dst_ip': '168.63.129.16', 'uid': '0', 'wait': '-w'} | |
Jun 20 00:09:35 fv-az72-309 python3[217]: Successfully set the firewall rules | |
Jun 20 00:09:35 fv-az72-309 KVP[317]: KVP starting; pid is:317 | |
Jun 20 00:09:35 fv-az72-309 KVP[317]: KVP LIC Version: 3.1 | |
Jun 20 00:09:35 fv-az72-309 kernel: hv_utils: KVP IC version 4.0 | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: walinuxagent-network-setup.service: Succeeded. | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Finished Setup network rules for WALinuxAgent. | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Found device Virtual_Disk UEFI. | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Found device Virtual_Disk Temporary_Storage. | |
Jun 20 00:09:35 fv-az72-309 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. | |
Jun 20 00:09:36 fv-az72-309 systemd-udevd[227]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished udev Wait for Complete Device Initialization. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Device-Mapper Multipath Device Controller... | |
Jun 20 00:09:36 fv-az72-309 kernel: alua: device handler registered | |
Jun 20 00:09:36 fv-az72-309 kernel: emc: device handler registered | |
Jun 20 00:09:36 fv-az72-309 kernel: rdac: device handler registered | |
Jun 20 00:09:36 fv-az72-309 multipathd[399]: --------start up-------- | |
Jun 20 00:09:36 fv-az72-309 multipathd[399]: read /etc/multipath.conf | |
Jun 20 00:09:36 fv-az72-309 multipathd[399]: path checkers start up | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Started Device-Mapper Multipath Device Controller. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Reached target Local File Systems (Pre). | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounting Mount unit for core20, revision 1518... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounting Mount unit for lxd, revision 22753... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounting Mount unit for snapd, revision 16010... | |
Jun 20 00:09:36 fv-az72-309 kernel: loop0: detected capacity change from 0 to 126824 | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting File System Check on /dev/disk/by-uuid/7167-9500... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting File System Check on /dev/disk/cloud/azure_resource-part1... | |
Jun 20 00:09:36 fv-az72-309 kernel: loop1: detected capacity change from 0 to 96160 | |
Jun 20 00:09:36 fv-az72-309 systemd-fsck[411]: sdb1: fsck.ntfs doesn't exist, not checking file system. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounted Mount unit for core20, revision 1518. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished File System Check on /dev/disk/cloud/azure_resource-part1. | |
Jun 20 00:09:36 fv-az72-309 kernel: loop2: detected capacity change from 0 to 138880 | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounted Mount unit for snapd, revision 16010. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounted Mount unit for lxd, revision 22753. | |
Jun 20 00:09:36 fv-az72-309 systemd-fsck[417]: fsck.fat 4.1 (2017-01-24) | |
Jun 20 00:09:36 fv-az72-309 systemd-fsck[417]: /dev/sda15: 12 files, 10642/213716 clusters | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished File System Check on /dev/disk/by-uuid/7167-9500. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounting /boot/efi... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounted /boot/efi. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Reached target Local File Systems. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Load AppArmor profiles... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Enable support for additional executable binary formats... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Set console font and keymap... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Create final runtime dir for shutdown pivot root... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Condition check resulted in LXD - agent - 9p mount being skipped. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Condition check resulted in LXD - agent being skipped. | |
Jun 20 00:09:36 fv-az72-309 apparmor.systemd[425]: Restarting AppArmor | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data... | |
Jun 20 00:09:36 fv-az72-309 apparmor.systemd[425]: Reloading AppArmor profiles | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Create Volatile Files and Directories... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Set console font and keymap. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Create final runtime dir for shutdown pivot root. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: plymouth-read-write.service: Succeeded. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Tell Plymouth To Write Out Runtime Data. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Create Volatile Files and Directories. | |
Jun 20 00:09:36 fv-az72-309 apparmor.systemd[439]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd | |
Jun 20 00:09:36 fv-az72-309 apparmor.systemd[440]: Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox | |
Jun 20 00:09:36 fv-az72-309 audit[438]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="lsb_release" pid=438 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 427 (update-binfmts) | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounting Arbitrary Executable File Formats File System... | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.240:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lsb_release" pid=438 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Update UTMP about System Boot/Shutdown... | |
Jun 20 00:09:36 fv-az72-309 audit[437]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[437]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[437]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[437]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/{,usr/}sbin/dhclient" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.248:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.248:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.248:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.248:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/{,usr/}sbin/dhclient" pid=437 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[441]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/haveged" pid=441 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.252:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/haveged" pid=441 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[444]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/mysqld" pid=444 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.260:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/mysqld" pid=444 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[445]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=445 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[445]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=445 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[445]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=445 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Update UTMP about System Boot/Shutdown. | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.264:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=445 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 kernel: audit: type=1400 audit(1655683776.264:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=445 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[448]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/chronyd" pid=448 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Mounted Arbitrary Executable File Formats File System. | |
Jun 20 00:09:36 fv-az72-309 audit[447]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=447 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[447]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=447 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[450]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=450 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[450]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=450 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[449]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/sbin/tcpdump" pid=449 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Load AppArmor profiles. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Enable support for additional executable binary formats. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Started Entropy daemon using the HAVEGE algorithm. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Load AppArmor profiles managed internally by snapd... | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Condition check resulted in Authentication service for virtual machines hosted on VMware being skipped. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Condition check resulted in Service for virtual machines hosted on VMware being skipped. | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Starting Initial cloud-init job (pre-networking)... | |
Jun 20 00:09:36 fv-az72-309 sh[454]: + [ -e /var/lib/cloud/instance/obj.pkl ] | |
Jun 20 00:09:36 fv-az72-309 sh[454]: + echo cleaning persistent cloud-init object | |
Jun 20 00:09:36 fv-az72-309 sh[454]: cleaning persistent cloud-init object | |
Jun 20 00:09:36 fv-az72-309 sh[454]: + rm /var/lib/cloud/instance/obj.pkl | |
Jun 20 00:09:36 fv-az72-309 haveged[451]: haveged starting up | |
Jun 20 00:09:36 fv-az72-309 sh[454]: + exit 0 | |
Jun 20 00:09:36 fv-az72-309 audit[465]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap-update-ns.lxd" pid=465 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[464]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/snap/snapd/16010/usr/lib/snapd/snap-confine" pid=464 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[464]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="/snap/snapd/16010/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=464 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[466]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.activate" pid=466 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[467]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.benchmark" pid=467 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[468]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.buginfo" pid=468 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[469]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.check-kernel" pid=469 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[470]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.daemon" pid=470 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[471]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.hook.configure" pid=471 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[472]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.hook.install" pid=472 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[473]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.hook.remove" pid=473 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[474]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.lxc" pid=474 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[475]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.lxc-to-lxd" pid=475 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[476]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.lxd" pid=476 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 audit[477]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.migrate" pid=477 comm="apparmor_parser" | |
Jun 20 00:09:36 fv-az72-309 systemd[1]: Finished Load AppArmor profiles managed internally by snapd. | |
Jun 20 00:09:36 fv-az72-309 haveged[451]: haveged: ver: 1.9.1; arch: x86; vend: GenuineIntel; build: (gcc 8.3.0 ITV); collect: 128K | |
Jun 20 00:09:36 fv-az72-309 haveged[451]: haveged: cpu: (L4 VC); data: 32K (L4 V); inst: 32K (L4 V); idx: 24/40; sz: 31410/52825 | |
Jun 20 00:09:36 fv-az72-309 haveged[451]: haveged: tot tests(BA8): A:1/1 B:1/1 continuous tests(B): last entropy estimate 8.00164 | |
Jun 20 00:09:36 fv-az72-309 haveged[451]: haveged: fills: 0, generated: 0 | |
Jun 20 00:09:37 fv-az72-309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: Internet Systems Consortium DHCP Client 4.4.1 | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: Copyright 2004-2018 Internet Systems Consortium. | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: All rights reserved. | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: For info, please visit https://www.isc.org/software/dhcp/ | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: Listening on LPF/eth0/00:0d:3a:13:05:24 | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: Sending on LPF/eth0/00:0d:3a:13:05:24 | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: Sending on Socket/fallback | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0x13264032) | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: DHCPOFFER of 10.1.0.17 from 168.63.129.16 | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: DHCPREQUEST for 10.1.0.17 on eth0 to 255.255.255.255 port 67 (xid=0x32402613) | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: DHCPACK of 10.1.0.17 from 168.63.129.16 (xid=0x13264032) | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: Timeout too large reducing to: 2147483646 (TIME_MAX - 1) | |
Jun 20 00:09:37 fv-az72-309 dhclient[491]: bound to 10.1.0.17 -- renewal in 4294967295 seconds. | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped. | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Starting Setup network rules for WALinuxAgent... | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped. | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped. | |
Jun 20 00:09:37 fv-az72-309 cloud-init[481]: Cloud-init v. 22.2-0ubuntu1~20.04.2 running 'init-local' at Mon, 20 Jun 2022 00:09:36 +0000. Up 5.24 seconds. | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Finished Initial cloud-init job (pre-networking). | |
Jun 20 00:09:37 fv-az72-309 python3[522]: Setting up firewall for the WALinux Agent with args: {'dst_ip': '168.63.129.16', 'uid': '0', 'wait': '-w'} | |
Jun 20 00:09:37 fv-az72-309 python3[522]: Successfully set the firewall rules | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: walinuxagent-network-setup.service: Succeeded. | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Finished Setup network rules for WALinuxAgent. | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Reached target Network (Pre). | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Starting Network Service... | |
Jun 20 00:09:37 fv-az72-309 systemd-networkd[532]: Enumeration completed | |
Jun 20 00:09:37 fv-az72-309 systemd-networkd[532]: eth0: Link UP | |
Jun 20 00:09:37 fv-az72-309 systemd[1]: Started Network Service. | |
Jun 20 00:09:37 fv-az72-309 systemd-networkd[532]: eth0: Gained carrier | |
Jun 20 00:09:38 fv-az72-309 systemd-networkd[532]: eth0: Link DOWN | |
Jun 20 00:09:38 fv-az72-309 systemd-networkd[532]: eth0: Lost carrier | |
Jun 20 00:09:38 fv-az72-309 systemd-networkd[532]: eth0: IPv6 successfully enabled | |
Jun 20 00:09:38 fv-az72-309 systemd[1]: Starting Wait for Network to be Configured... | |
Jun 20 00:09:38 fv-az72-309 systemd-networkd[532]: eth0: Link UP | |
Jun 20 00:09:38 fv-az72-309 systemd[1]: Starting Network Name Resolution... | |
Jun 20 00:09:38 fv-az72-309 systemd-networkd[532]: eth0: Gained carrier | |
Jun 20 00:09:38 fv-az72-309 systemd-networkd[532]: eth0: DHCPv4 address 10.1.0.17/16 via 10.1.0.1 | |
Jun 20 00:09:38 fv-az72-309 systemd-resolved[534]: Positive Trust Anchors: | |
Jun 20 00:09:38 fv-az72-309 systemd-resolved[534]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d | |
Jun 20 00:09:38 fv-az72-309 systemd-resolved[534]: Negative trust anchors: 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test | |
Jun 20 00:09:38 fv-az72-309 systemd-resolved[534]: Using system hostname 'fv-az72-309'. | |
Jun 20 00:09:38 fv-az72-309 systemd[1]: Started Network Name Resolution. | |
Jun 20 00:09:38 fv-az72-309 systemd[1]: Reached target Network. | |
Jun 20 00:09:38 fv-az72-309 systemd[1]: Reached target Host and Network Name Lookups. | |
Jun 20 00:09:39 fv-az72-309 systemd-networkd[532]: eth0: Gained IPv6LL | |
Jun 20 00:09:39 fv-az72-309 systemd-networkd-wait-online[533]: managing: eth0 | |
Jun 20 00:09:39 fv-az72-309 systemd[1]: Finished Wait for Network to be Configured. | |
Jun 20 00:09:39 fv-az72-309 systemd[1]: Starting Initial cloud-init job (metadata service crawler)... | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: Cloud-init v. 22.2-0ubuntu1~20.04.2 running 'init' at Mon, 20 Jun 2022 00:09:39 +0000. Up 7.70 seconds. | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +--------+------+----------------------------+-------------+--------+-------------------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +--------+------+----------------------------+-------------+--------+-------------------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | eth0 | True | 10.1.0.17 | 255.255.0.0 | global | 00:0d:3a:13:05:24 | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | eth0 | True | fe80::20d:3aff:fe13:524/64 | . | link | 00:0d:3a:13:05:24 | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | lo | True | ::1/128 | . | host | . | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +--------+------+----------------------------+-------------+--------+-------------------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 0 | 0.0.0.0 | 10.1.0.1 | 0.0.0.0 | eth0 | UG | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 1 | 10.1.0.0 | 0.0.0.0 | 255.255.0.0 | eth0 | U | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 2 | 168.63.129.16 | 10.1.0.1 | 255.255.255.255 | eth0 | UGH | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 3 | 169.254.169.254 | 10.1.0.1 | 255.255.255.255 | eth0 | UGH | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +-------+-------------+---------+-----------+-------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | Route | Destination | Gateway | Interface | Flags | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +-------+-------------+---------+-----------+-------+ | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 1 | fe80::/64 | :: | eth0 | U | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 3 | local | :: | eth0 | U | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: | 4 | multicast | :: | eth0 | U | | |
Jun 20 00:09:39 fv-az72-309 cloud-init[540]: ci-info: +-------+-------------+---------+-----------+-------+ | |
Jun 20 00:09:39 fv-az72-309 ntfs-3g[551]: Version 2017.3.23AR.3 integrated FUSE 28 | |
Jun 20 00:09:39 fv-az72-309 ntfs-3g[551]: Mounted /dev/sdb1 (Read-Only, label "Temporary Storage", NTFS 3.1) | |
Jun 20 00:09:39 fv-az72-309 ntfs-3g[551]: Cmdline options: ro | |
Jun 20 00:09:39 fv-az72-309 ntfs-3g[551]: Mount options: ro,allow_other,nonempty,relatime,fsname=/dev/sdb1,blkdev,blksize=4096 | |
Jun 20 00:09:39 fv-az72-309 ntfs-3g[551]: Ownership and permissions disabled, configuration type 7 | |
Jun 20 00:09:39 fv-az72-309 ntfs-3g[551]: Unmounting /dev/sdb1 (Temporary Storage) | |
Jun 20 00:09:39 fv-az72-309 systemd[1]: run-cloud\x2dinit-tmp-tmp5pbcsij7.mount: Succeeded. | |
Jun 20 00:09:39 fv-az72-309 kernel: sdb: sdb1 | |
Jun 20 00:09:39 fv-az72-309 systemd[1]: systemd-fsck@dev-disk-cloud-azure_resource\x2dpart1.service: Succeeded. | |
Jun 20 00:09:39 fv-az72-309 systemd[1]: Stopped File System Check on /dev/disk/cloud/azure_resource-part1. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Starting Tell Plymouth To Write Out Runtime Data... | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Show Plymouth Boot Screen being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Forward Password Requests to Plymouth Directory Watch being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Condition check resulted in Platform Persistent Storage Archival being skipped. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: plymouth-read-write.service: Succeeded. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Finished Tell Plymouth To Write Out Runtime Data. | |
Jun 20 00:09:40 fv-az72-309 kernel: EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:09:40 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Generating public/private rsa key pair. | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key fingerprint is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: SHA256:xgHytXr9AoLrM03Ufz16C0Uv9NChaKGn5zsbtoCu2Gg root@fv-az72-309 | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key's randomart image is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +---[RSA 3072]----+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | . . . . . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o o .. o ...| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | ..o. + .+ .| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | ..o.o+ o + | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | ..o So.. .o o| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | ..+..+...o. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .o . ..=o. . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | E* o oo=.. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .o.=.. +o... | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +----[SHA256]-----+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Generating public/private dsa key pair. | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your identification has been saved in /etc/ssh/ssh_host_dsa_key | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key fingerprint is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: SHA256:XGcH1MwJBdIF6Dewo/X0osa6enb06nIxLFk5/q6UAm8 root@fv-az72-309 | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key's randomart image is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +---[DSA 1024]----+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .+*Xo. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o...= | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | ..+o . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | . .Oo+. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | . SB * o | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o+ *.o . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | E+o* . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .+o* o | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .+oO++. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +----[SHA256]-----+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Generating public/private ecdsa key pair. | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key fingerprint is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: SHA256:6mlnK30thyw5xPDk04GqP6xQuDIEULkyXUpZlLeH/us root@fv-az72-309 | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key's randomart image is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +---[ECDSA 256]---+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: |...=o. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: |. + o . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: |.o + . o . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: |+ +. + + . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | +. .. OS. . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: |. o o.* . | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: |o o o.+ + o | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o ...=.O = o | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .++*E* o | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +----[SHA256]-----+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Generating public/private ed25519 key pair. | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key fingerprint is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: SHA256:CkHtFnUi6AlzMN/rV+bhb10BzsSd4ft4SZpQoIuutqU root@fv-az72-309 | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: The key's randomart image is: | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +--[ED25519 256]--+ | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o..o..o ... ..o| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o+o.o. o. .+.o | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | =oo.. . +... | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o.o.. . .o ..| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | ....S.+. o.| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | o.. = .. +.+| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | oo. o +.oo| | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .+. .. .. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: | .E. .. | | |
Jun 20 00:09:41 fv-az72-309 cloud-init[540]: +----[SHA256]-----+ | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Initial cloud-init job (metadata service crawler). | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Cloud-config availability. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Network is Online. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target System Initialization. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Trigger to poll for Ubuntu Pro licenses (Only enabled on GCP LTS non-pro). | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Daily Cleanup of Temporary Directories. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Ubuntu Advantage Timer for running repeated jobs. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Paths. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Unix socket for apport crash forwarding being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on cloud-init hotplug hook socket. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on D-Bus System Message Bus Socket. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Docker Socket for the API. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on Open-iSCSI iscsid Socket. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on Socket unix for snap application lxd.daemon. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Socket activation for snappy daemon. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on UUID daemon activation socket. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Login to default iSCSI targets being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Remote File Systems (Pre). | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Remote File Systems. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Availability of block devices... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on Docker Socket for the API. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Listening on Socket activation for snappy daemon. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Availability of block devices. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Sockets. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Basic System. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Accounts Service... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting LSB: automatic crash report generation... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Deferred execution scheduler... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting chrony, an NTP client/server... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting containerd container runtime... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Regular background program processing daemon. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started D-Bus System Message Bus. | |
Jun 20 00:09:41 fv-az72-309 cron[655]: (CRON) INFO (pidfile fd = 3) | |
Jun 20 00:09:41 fv-az72-309 cron[655]: (CRON) INFO (Running @reboot jobs) | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Save initial kernel messages after boot. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Remove Stale Online ext4 Metadata Check Snapshots... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in getty on tty2-tty6 if dbus and logind are not available being skipped. | |
Jun 20 00:09:41 fv-az72-309 chronyd-starter.sh[660]: WARNING: libcap needs an update (cap=40 should have a name). | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Record successful boot for GRUB... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Hyper-V File Copy Protocol Daemon being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Hyper-V VSS Protocol Daemon being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started irqbalance daemon. | |
Jun 20 00:09:41 fv-az72-309 chronyd[666]: chronyd version 3.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 -DEBUG) | |
Jun 20 00:09:41 fv-az72-309 chronyd[666]: Frequency -44.376 +/- 1.296 ppm read from /var/lib/chrony/chrony.drift | |
Jun 20 00:09:41 fv-az72-309 chronyd[666]: Loaded seccomp filter | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting LSB: Mono XSP4... | |
Jun 20 00:09:41 fv-az72-309 dbus-daemon[658]: [system] AppArmor D-Bus mediation is enabled | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Dispatcher daemon for systemd-networkd... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Set the CPU Frequency Scaling governor being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting The PHP 7.4 FastCGI Process Manager... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting The PHP 8.0 FastCGI Process Manager... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting The PHP 8.1 FastCGI Process Manager... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Authorization Manager... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Pollinate to seed the pseudo random number generator being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in fast remote file copy program daemon being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting System Logging Service... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Job that runs the Runner Provisioner. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Secure Boot updates for DB and DBX being skipped. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started session-manager-plugin. | |
Jun 20 00:09:41 fv-az72-309 apport[649]: * Starting automatic crash report generation: apport | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Service for snap application lxd.activate... | |
Jun 20 00:09:41 fv-az72-309 rsyslogd[682]: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.2001.0] | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Automatically repair incorrect owner/permissions on core devices being skipped. | |
Jun 20 00:09:41 fv-az72-309 rsyslogd[682]: rsyslogd's groupid changed to 110 | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Wait for the Ubuntu Core chooser trigger being skipped. | |
Jun 20 00:09:41 fv-az72-309 rsyslogd[682]: rsyslogd's userid changed to 104 | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Snap Daemon... | |
Jun 20 00:09:41 fv-az72-309 rsyslogd[682]: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="682" x-info="https://www.rsyslog.com"] start | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting OpenBSD Secure Shell server... | |
Jun 20 00:09:41 fv-az72-309 dbus-daemon[658]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.2' (uid=0 pid=647 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined") | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Resets System Activity Data Collector... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Login Service... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Permit User Sessions... | |
Jun 20 00:09:41 fv-az72-309 python3[704]: /usr/sbin/waagent:27: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses | |
Jun 20 00:09:41 fv-az72-309 python3[704]: import imp | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Condition check resulted in Ubuntu Advantage reboot cmds being skipped. | |
Jun 20 00:09:42 fv-az72-309 apport[649]: ...done. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Disk Manager... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Azure Linux Agent. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Deferred execution scheduler. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Resets System Activity Data Collector. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Permit User Sessions. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Hold until boot process finishes up... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Terminate Plymouth Boot Screen... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started System Logging Service. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: plymouth-quit-wait.service: Succeeded. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Hold until boot process finishes up. | |
Jun 20 00:09:41 fv-az72-309 udisksd[703]: udisks daemon version 2.8.4 starting | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Serial Getty on ttyS0. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting Set console scheme... | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: plymouth-quit.service: Succeeded. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Terminate Plymouth Boot Screen. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Set console scheme. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Created slice system-getty.slice. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Started Getty on tty1. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Reached target Login Prompts. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: grub-common.service: Succeeded. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Finished Record successful boot for GRUB. | |
Jun 20 00:09:41 fv-az72-309 systemd[1]: Starting GRUB failed boot detection... | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: e2scrub_reap.service: Succeeded. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Finished Remove Stale Online ext4 Metadata Check Snapshots. | |
Jun 20 00:09:42 fv-az72-309 mono-xsp4[671]: * Starting XSP 4.0 WebServer mono-xsp4 | |
Jun 20 00:09:42 fv-az72-309 mono-xsp4[671]: ...done. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started LSB: automatic crash report generation. | |
Jun 20 00:09:42 fv-az72-309 polkitd[681]: started daemon version 0.105 using authority implementation `local' version `0.105' | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started LSB: Mono XSP4. | |
Jun 20 00:09:42 fv-az72-309 dbus-daemon[658]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Authorization Manager. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Starting Modem Manager... | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: grub-initrd-fallback.service: Succeeded. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Finished GRUB failed boot detection. | |
Jun 20 00:09:42 fv-az72-309 sshd[738]: Server listening on 0.0.0.0 port 22. | |
Jun 20 00:09:42 fv-az72-309 accounts-daemon[647]: started daemon version 0.6.55 | |
Jun 20 00:09:42 fv-az72-309 sshd[738]: Server listening on :: port 22. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Accounts Service. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started OpenBSD Secure Shell server. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started chrony, an NTP client/server. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Reached target System Time Synchronized. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Periodic ext4 Online Metadata Check for All Filesystems. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Discard unused blocks once a week. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Daily rotation of log files. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Daily man-db regeneration. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Message of the Day. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Clean PHP session files every 30 mins. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Condition check resulted in Timer to automatically fetch and run repair assertions being skipped. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Reached target Timers. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Starting LSB: Fast standalone full-text SQL search engine... | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab... | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Starting Clean php session files... | |
Jun 20 00:09:42 fv-az72-309 sphinxsearch[759]: To enable sphinxsearch, edit /etc/default/sphinxsearch and set START=yes | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.187027Z INFO Daemon Azure Linux Agent Version:2.2.46 | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.190939Z INFO Daemon OS: ubuntu 20.04 | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.194146Z INFO Daemon Python: 3.8.10 | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.197558Z INFO Daemon CGroups Status: The cgroup filesystem is ready to use | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.207982Z INFO Daemon Run daemon | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Starting Rotate log files... | |
Jun 20 00:09:42 fv-az72-309 session-manager-plugin[687]: The Session Manager plugin was installed successfully. Use the AWS CLI to start a session. | |
Jun 20 00:09:42 fv-az72-309 udisksd[703]: failed to load module mdraid: libbd_mdraid.so.2: cannot open shared object file: No such file or directory | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Starting Daily man-db regeneration... | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started LSB: Fast standalone full-text SQL search engine. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: session-manager-plugin.service: Succeeded. | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.226326Z INFO Daemon cloud-init is enabled: True | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.230013Z INFO Daemon Using cloud-init for provisioning | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.233982Z INFO Daemon Activate resource disk | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.236941Z INFO Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.244158Z INFO Daemon Found device: sdb | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.253998Z INFO Daemon Resource disk [/dev/sdb1] is already mounted [/mnt] | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.269361Z INFO Daemon Enable swap | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.278833Z INFO Daemon Create swap file | |
Jun 20 00:09:42 fv-az72-309 udisksd[703]: Failed to load the 'mdraid' libblockdev plugin | |
Jun 20 00:09:42 fv-az72-309 ModemManager[751]: <info> ModemManager (version 1.16.6) starting in system bus... | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.336492Z INFO Daemon Enabled 4194304KB of swap at /mnt/swapfile | |
Jun 20 00:09:42 fv-az72-309 kernel: Adding 4194300k swap on /mnt/swapfile. Priority:-2 extents:9 across:4505596k FS | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.341954Z INFO Daemon Clean protocol and wireserver endpoint | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.350334Z WARNING Daemon VM is provisioned, but the VM unique identifier has changed -- clearing cached state | |
Jun 20 00:09:42 fv-az72-309 python3[704]: WARNING! Cached DHCP leases will be deleted. | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.365193Z INFO Daemon Detect protocol endpoints | |
Jun 20 00:09:42 fv-az72-309 systemd-logind[696]: New seat seat0. | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.373259Z INFO Daemon Clean protocol and wireserver endpoint | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.380413Z INFO Daemon WireServer endpoint is not found. Rerun dhcp handler | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.387481Z INFO Daemon Test for route to 168.63.129.16 | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.391519Z INFO Daemon Route to 168.63.129.16 exists | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.397597Z INFO Daemon Wire server endpoint:168.63.129.16 | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.408410Z INFO Daemon Fabric preferred wire protocol version:2015-04-05 | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.416940Z INFO Daemon Wire protocol version:2012-11-30 | |
Jun 20 00:09:42 fv-az72-309 systemd-logind[696]: Watching system buttons on /dev/input/event0 (Power Button) | |
Jun 20 00:09:42 fv-az72-309 systemd-logind[696]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard) | |
Jun 20 00:09:42 fv-az72-309 systemd-logind[696]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Login Service. | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.427715Z INFO Daemon Server preferred version:2015-04-05 | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Disk Manager. | |
Jun 20 00:09:42 fv-az72-309 udisksd[703]: Acquired the name org.freedesktop.UDisks2 on the system message bus | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.672474Z INFO Daemon Found private key matching thumbprint 33D7A199EB5A54B287B160824E7C8404A806D41B | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.689458Z INFO Daemon Provisioning already completed, skipping. | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.694154Z INFO Daemon RDMA capabilities are not enabled, skipping | |
Jun 20 00:09:42 fv-az72-309 python3[704]: 2022-06-20T00:09:42.718900Z INFO Daemon Determined Agent WALinuxAgent-2.7.1.0 to be the latest agent | |
Jun 20 00:09:42 fv-az72-309 networkd-dispatcher[674]: No valid path found for iwconfig | |
Jun 20 00:09:42 fv-az72-309 networkd-dispatcher[674]: No valid path found for iw | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Modem Manager. | |
Jun 20 00:09:42 fv-az72-309 snapd[691]: AppArmor status: apparmor is enabled and all features are available | |
Jun 20 00:09:42 fv-az72-309 rsyslogd[682]: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="682" x-info="https://www.rsyslog.com"] rsyslogd was HUPed | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: tmp-snap.rootfs_xmixJk.mount: Succeeded. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: logrotate.service: Succeeded. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Finished Rotate log files. | |
Jun 20 00:09:42 fv-az72-309 systemd[1]: Started Dispatcher daemon for systemd-networkd. | |
Jun 20 00:09:43 fv-az72-309 fstrim[761]: /mnt: 9.7 GiB (10361049088 bytes) trimmed on /dev/disk/cloud/azure_resource-part1 | |
Jun 20 00:09:43 fv-az72-309 fstrim[761]: /boot/efi: 99.2 MiB (103973888 bytes) trimmed on /dev/sda15 | |
Jun 20 00:09:43 fv-az72-309 fstrim[761]: /: 28.1 GiB (30198099968 bytes) trimmed on /dev/sda1 | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: fstrim.service: Succeeded. | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: Finished Discard unused blocks on filesystems from /etc/fstab. | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.179944401Z" level=info msg="starting containerd" revision=a17ec496a95e55601607ca50828147e8ccaeebf1 version=1.5.13+azure-1 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.302251Z INFO ExtHandler ExtHandler The agent will now check for updates and then will process extensions. Output to /dev/console will be suspended during those operations. | |
Jun 20 00:09:43 fv-az72-309 php8.1[818]: DIGEST-MD5 common mech free | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.406511055Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.451579Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.7.1.0 is running as the goal state agent | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.451819Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.451898Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.454641191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 kernel: aufs 5.x-rcN-20210809 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.471670046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.471855Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.124 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.472369077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.472410Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 6fb7f8f8-f91a-4429-ab63-9fab2c1e8be9 New eTag: 14923850863347568451] | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.472526Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.472720293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.472881100Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.473033307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.474079553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.474682780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.475538618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.475888234Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.476514761Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.476707770Z" level=info msg="metadata content store policy set" policy=shared | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.486732115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.486914223Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.490472781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.490686490Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.490810996Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.490913200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491011805Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491108009Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491201813Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491306818Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491401422Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491593831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.491756338Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.492188057Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.492393266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.492556173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.492655378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.492759482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.492992893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.493099297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.493189501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.493284406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.493401011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.493495215Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.495787517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.495956024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.496069929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.496175234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.496482347Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[BinaryName: CriuImagePath: CriuPath: CriuWorkPath: IoGid:0 IoUid:0 NoNewKeyring:false NoPivotRoot:false Root: ShimCgroup: SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.510855385Z" level=info msg="Connect containerd service" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.511033893Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.533932Z INFO ExtHandler ExtHandler Found private key matching thumbprint 33D7A199EB5A54B287B160824E7C8404A806D41B | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.540250990Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.541337838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.543173819Z" level=info msg="Start subscribing containerd event" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.547925130Z" level=info msg="Start recovering state" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.548150240Z" level=info msg="Start event monitor" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.548250845Z" level=info msg="Start snapshots syncer" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.548436453Z" level=info msg="Start cni network conf syncer" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.548539457Z" level=info msg="Start streaming server" | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.549093582Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.549324792Z" level=info msg=serving... address=/run/containerd/containerd.sock | |
Jun 20 00:09:43 fv-az72-309 containerd[707]: time="2022-06-20T00:09:43.549677108Z" level=info msg="containerd successfully booted in 0.373259s" | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: Started containerd container runtime. | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: Starting Docker Application Container Engine... | |
Jun 20 00:09:43 fv-az72-309 provisioner[683]: Argument: --agentdirectory | |
Jun 20 00:09:43 fv-az72-309 provisioner[683]: Value: /home/runner/runners | |
Jun 20 00:09:43 fv-az72-309 provisioner[683]: Argument: --settings | |
Jun 20 00:09:43 fv-az72-309 provisioner[683]: Value: /opt/runner/provisioner/.settings | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.587516Z INFO ExtHandler ExtHandler Fetch goal state completed | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.611034Z INFO ExtHandler ExtHandler Distro: ubuntu-20.04; OSUtil: Ubuntu18OSUtil; AgentService: walinuxagent; Python: 3.8.10; systemd: True; LISDrivers: name: hv_vmbus | |
Jun 20 00:09:43 fv-az72-309 python3[853]: ; logrotate: logrotate 3.14.0; | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.617028Z INFO ExtHandler ExtHandler WALinuxAgent-2.7.1.0 running as process 853 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.625655Z INFO ExtHandler ExtHandler [CGI] systemd version: systemd 245 (245.4-4ubuntu3.17) | |
Jun 20 00:09:43 fv-az72-309 python3[853]: +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.640573Z INFO ExtHandler ExtHandler The CPU cgroup controller is mounted at /sys/fs/cgroup/cpu,cpuacct | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.640794Z INFO ExtHandler ExtHandler The memory cgroup controller is mounted at /sys/fs/cgroup/memory | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.646869Z INFO ExtHandler ExtHandler [CGI] cgroups v2 mounted at /sys/fs/cgroup/unified. Controllers: [] | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.665292Z INFO ExtHandler ExtHandler [CGI] CPUAccounting: yes | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.674660Z INFO ExtHandler ExtHandler [CGI] CPUQuota: 750ms | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.684464Z INFO ExtHandler ExtHandler [CGI] MemoryAccounting: yes | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.685211Z INFO ExtHandler ExtHandler [CGI] Agent CPU cgroup: /sys/fs/cgroup/cpu,cpuacct/azure.slice/walinuxagent.service | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.685570Z INFO ExtHandler ExtHandler [CGI] Ensuring the agent's CPUQuota is 75% | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.686616Z INFO ExtHandler ExtHandler Started tracking cgroup walinuxagent.service [/sys/fs/cgroup/cpu,cpuacct/azure.slice/walinuxagent.service] | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.686713Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: True | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.687622Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.697371Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up walinuxagent-network-setup.service | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.697704Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.708695Z INFO ExtHandler ExtHandler Unit file version matches with expected version: 1.2, not overwriting unit file | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.708868Z INFO ExtHandler ExtHandler Service: walinuxagent-network-setup.service already enabled. No change needed. | |
Jun 20 00:09:43 fv-az72-309 provisioner[683]: Starting service provider configuration | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.732779Z INFO ExtHandler ExtHandler Logs from the walinuxagent-network-setup.service since system boot: | |
Jun 20 00:09:43 fv-az72-309 python3[853]: -- Logs begin at Thu 2022-06-16 07:26:56 UTC, end at Mon 2022-06-20 00:09:43 UTC. -- | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:35 fv-az72-309 python3[217]: Setting up firewall for the WALinux Agent with args: {'dst_ip': '168.63.129.16', 'uid': '0', 'wait': '-w'} | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:35 fv-az72-309 python3[217]: Successfully set the firewall rules | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:35 fv-az72-309 systemd[1]: walinuxagent-network-setup.service: Succeeded. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:35 fv-az72-309 systemd[1]: Finished Setup network rules for WALinuxAgent. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:37 fv-az72-309 systemd[1]: Starting Setup network rules for WALinuxAgent... | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:37 fv-az72-309 python3[522]: Setting up firewall for the WALinux Agent with args: {'dst_ip': '168.63.129.16', 'uid': '0', 'wait': '-w'} | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:37 fv-az72-309 python3[522]: Successfully set the firewall rules | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:37 fv-az72-309 systemd[1]: walinuxagent-network-setup.service: Succeeded. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Jun 20 00:09:37 fv-az72-309 systemd[1]: Finished Setup network rules for WALinuxAgent. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.733627Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: Started The PHP 8.1 FastCGI Process Manager. | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: Started The PHP 8.0 FastCGI Process Manager. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.764577Z INFO ExtHandler ExtHandler Not setting the firewall rule to allow DNS TCP request to wireserver for a non root user since it already exists | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.765130Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [True]. All three conditions must be met: configuration enabled [True], cgroups enabled [True], python supported: [True] | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.766365Z INFO ExtHandler ExtHandler Starting env monitor service. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.767271Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.767379Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 | |
Jun 20 00:09:43 fv-az72-309 systemd[1]: Started The PHP 7.4 FastCGI Process Manager. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.767916Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.768113Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT | |
Jun 20 00:09:43 fv-az72-309 python3[853]: eth0 00000000 0100010A 0003 0 0 100 00000000 0 0 0 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: eth0 0000010A 00000000 0001 0 0 0 0000FFFF 0 0 0 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: eth0 10813FA8 0100010A 0007 0 0 100 FFFFFFFF 0 0 0 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: eth0 FEA9FEA9 0100010A 0007 0 0 100 FFFFFFFF 0 0 0 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.770476Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.772753Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.772934Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.773837Z INFO EnvHandler ExtHandler Configure routes | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.773972Z INFO EnvHandler ExtHandler Gateway:None | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.774036Z INFO EnvHandler ExtHandler Routes:None | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.782692Z INFO ExtHandler ExtHandler Start Extension Telemetry service. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.782350Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.784319Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.787686Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.795361Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.809158Z INFO MonitorHandler ExtHandler Network interfaces: | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Executing ['ip', '-a', '-o', 'link']: | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:13:05:24 brd ff:ff:ff:ff:ff:ff | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Executing ['ip', '-4', '-a', '-o', 'address']: | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2: eth0 inet 10.1.0.17/16 brd 10.1.255.255 scope global eth0\ valid_lft forever preferred_lft forever | |
Jun 20 00:09:43 fv-az72-309 python3[853]: Executing ['ip', '-6', '-a', '-o', 'address']: | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2: eth0 inet6 fe80::20d:3aff:fe13:524/64 scope link \ valid_lft forever preferred_lft forever | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.821550Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) | |
Jun 20 00:09:43 fv-az72-309 php8.1[918]: DIGEST-MD5 common mech free | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.880447Z INFO ExtHandler ExtHandler ProcessExtensionsInGoalState started [Incarnation: 1; Activity Id: 48fbd77a-0509-46ad-81a1-1394da8eb4b4; Correlation Id: 16378185-6322-4308-98a6-03c89673c297; GS Creation Time: 2022-06-20T00:07:55.901245Z] | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.884305Z INFO EnvHandler ExtHandler Set block dev timeout: sdb with timeout: 300 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.884476Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.933692Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Target handler state: enabled [incarnation 1] | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.937169Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] [Enable] current handler state is: notinstalled | |
Jun 20 00:09:43 fv-az72-309 python3[853]: 2022-06-20T00:09:43.937522Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Downloading extension package: https://umsa4s5cv5qvrkpxlmjq.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_2.1.6.zip | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.054361Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Unzipping extension package: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript__2.1.6.zip | |
Jun 20 00:09:44 fv-az72-309 php8.1[976]: DIGEST-MD5 common mech free | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.226239Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Initializing extension Microsoft.Azure.Extensions.CustomScript-2.1.6 | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.228256Z INFO ExtHandler ExtHandler [CGI] Created /lib/systemd/system/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript_2.1.6.slice | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.229004Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Update settings file: 99.settings | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.229613Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Install extension [bin/custom-script-shim install] | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.229958Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Executing command: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim install with environment variables: {"AZURE_GUEST_AGENT_UNINSTALL_CMD_EXIT_CODE": "NOT_RUN", "AZURE_GUEST_AGENT_EXTENSION_PATH": "/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6", "AZURE_GUEST_AGENT_EXTENSION_VERSION": "2.1.6", "AZURE_GUEST_AGENT_WIRE_PROTOCOL_ADDRESS": "168.63.129.16", "ConfigSequenceNumber": "99", "AZURE_GUEST_AGENT_EXTENSION_SUPPORTED_FEATURES": "[{\"Key\": \"ExtensionTelemetryPipeline\", \"Value\": \"1.0\"}]"} | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.236342Z INFO ExtHandler ExtHandler Started extension in unit 'install_0b3bf6d6-56e1-417f-85ed-1149e224521c.scope' | |
Jun 20 00:09:44 fv-az72-309 python3[853]: 2022-06-20T00:09:44.237055Z INFO ExtHandler ExtHandler Started tracking cgroup Microsoft.Azure.Extensions.CustomScript-2.1.6 [/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript_2.1.6.slice] | |
Jun 20 00:09:44 fv-az72-309 systemd[1]: Created slice Slice for Azure VM Extensions. | |
Jun 20 00:09:44 fv-az72-309 systemd[1]: Created slice Slice for Azure VM extension Microsoft.Azure.Extensions.CustomScript-2.1.6. | |
Jun 20 00:09:44 fv-az72-309 systemd[1]: Started /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim install. | |
Jun 20 00:09:44 fv-az72-309 systemd[1]: install_0b3bf6d6-56e1-417f-85ed-1149e224521c.scope: Succeeded. | |
Jun 20 00:09:44 fv-az72-309 php8.0[994]: DIGEST-MD5 common mech free | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.525211296Z" level=info msg="Starting up" | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.527941717Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf" | |
Jun 20 00:09:44 fv-az72-309 audit[1019]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=1019 comm="apparmor_parser" | |
Jun 20 00:09:44 fv-az72-309 kernel: kauditd_printk_skb: 22 callbacks suppressed | |
Jun 20 00:09:44 fv-az72-309 kernel: audit: type=1400 audit(1655683784.674:33): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=1019 comm="apparmor_parser" | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.699789443Z" level=info msg="parsed scheme: \"unix\"" module=grpc | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.700101757Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.700257264Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.700373469Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc | |
Jun 20 00:09:44 fv-az72-309 php8.0[1026]: DIGEST-MD5 common mech free | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.753493726Z" level=info msg="parsed scheme: \"unix\"" module=grpc | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.753533528Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.753555729Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.753571029Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc | |
Jun 20 00:09:44 fv-az72-309 systemd[1]: var-lib-docker-overlay2-check\x2doverlayfs\x2dsupport4125447870-merged.mount: Succeeded. | |
Jun 20 00:09:44 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:44.902914356Z" level=info msg="[graphdriver] using prior storage driver: overlay2" | |
Jun 20 00:09:44 fv-az72-309 php8.0[1040]: DIGEST-MD5 common mech free | |
Jun 20 00:09:45 fv-az72-309 provisioner[683]: Done configuring service provider | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.139899372Z" level=warning msg="Your kernel does not support CPU realtime scheduler" | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.140218986Z" level=warning msg="Your kernel does not support cgroup blkio weight" | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.140353192Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.141350037Z" level=info msg="Loading containers: start." | |
Jun 20 00:09:45 fv-az72-309 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. | |
Jun 20 00:09:45 fv-az72-309 kernel: Bridge firewalling registered | |
Jun 20 00:09:45 fv-az72-309 systemd[1]: man-db.service: Succeeded. | |
Jun 20 00:09:45 fv-az72-309 systemd[1]: Finished Daily man-db regeneration. | |
Jun 20 00:09:45 fv-az72-309 php7.4[1053]: DIGEST-MD5 common mech free | |
Jun 20 00:09:45 fv-az72-309 php7.4[1076]: DIGEST-MD5 common mech free | |
Jun 20 00:09:45 fv-az72-309 kernel: Initializing XFRM netlink socket | |
Jun 20 00:09:45 fv-az72-309 systemd-udevd[813]: Using default interface naming scheme 'v245'. | |
Jun 20 00:09:45 fv-az72-309 systemd-udevd[813]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:09:45 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '3' we don't know about, ignoring. | |
Jun 20 00:09:45 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '3' we don't know about, ignoring. | |
Jun 20 00:09:45 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '3' we don't know about, ignoring. | |
Jun 20 00:09:45 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '3' we don't know about, ignoring. | |
Jun 20 00:09:45 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 3 seen, reloading interface list | |
Jun 20 00:09:45 fv-az72-309 php7.4[1121]: DIGEST-MD5 common mech free | |
Jun 20 00:09:45 fv-az72-309 systemd[1]: phpsessionclean.service: Succeeded. | |
Jun 20 00:09:45 fv-az72-309 systemd[1]: Finished Clean php session files. | |
Jun 20 00:09:45 fv-az72-309 systemd-networkd[532]: docker0: Link UP | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.733696421Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.846060307Z" level=info msg="Loading containers: done." | |
Jun 20 00:09:45 fv-az72-309 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck317930894-merged.mount: Succeeded. | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.959672249Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.959898559Z" level=info msg="Docker daemon" commit=f756502055d2e36a84f2068e6620bea5ecf09058 graphdriver(s)=overlay2 version=20.10.16+azure-2 | |
Jun 20 00:09:45 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:45.960376280Z" level=info msg="Daemon has completed initialization" | |
Jun 20 00:09:45 fv-az72-309 systemd[1]: Started Docker Application Container Engine. | |
Jun 20 00:09:46 fv-az72-309 dockerd[913]: time="2022-06-20T00:09:46.009683668Z" level=info msg="API listen on /run/docker.sock" | |
Jun 20 00:09:46 fv-az72-309 python3[853]: 2022-06-20T00:09:46.238666Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Command: bin/custom-script-shim install | |
Jun 20 00:09:46 fv-az72-309 python3[853]: [stdout] | |
Jun 20 00:09:46 fv-az72-309 python3[853]: + /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-extension install | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 event=start | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 status="not reported for operation (by design)" | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 event="migrate to mrseq" error="Can't find out seqnum from /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/status, not enough files" | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 event="created data dir" path=/var/lib/waagent/custom-script | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 event=installed | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 status="not reported for operation (by design)" | |
Jun 20 00:09:46 fv-az72-309 python3[853]: time=2022-06-20T00:09:44Z version=v2.1.6/git@fc181d8-dirty operation=install seq=99 event=end | |
Jun 20 00:09:46 fv-az72-309 python3[853]: [stderr] | |
Jun 20 00:09:46 fv-az72-309 python3[853]: Running scope as unit: install_0b3bf6d6-56e1-417f-85ed-1149e224521c.scope | |
Jun 20 00:09:46 fv-az72-309 python3[853]: 2022-06-20T00:09:46.239839Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Requested extension state: enabled | |
Jun 20 00:09:46 fv-az72-309 python3[853]: 2022-06-20T00:09:46.240097Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Enable extension: [bin/custom-script-shim enable] | |
Jun 20 00:09:46 fv-az72-309 python3[853]: 2022-06-20T00:09:46.240389Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Executing command: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim enable with environment variables: {"AZURE_GUEST_AGENT_UNINSTALL_CMD_EXIT_CODE": "NOT_RUN", "AZURE_GUEST_AGENT_EXTENSION_PATH": "/var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6", "AZURE_GUEST_AGENT_EXTENSION_VERSION": "2.1.6", "AZURE_GUEST_AGENT_WIRE_PROTOCOL_ADDRESS": "168.63.129.16", "ConfigSequenceNumber": "99", "AZURE_GUEST_AGENT_EXTENSION_SUPPORTED_FEATURES": "[{\"Key\": \"ExtensionTelemetryPipeline\", \"Value\": \"1.0\"}]"} | |
Jun 20 00:09:46 fv-az72-309 python3[853]: 2022-06-20T00:09:46.244060Z INFO ExtHandler ExtHandler Started extension in unit 'enable_a078213d-e1c9-4073-aa18-bdb3cc076462.scope' | |
Jun 20 00:09:46 fv-az72-309 systemd[1]: Started /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-shim enable. | |
Jun 20 00:09:46 fv-az72-309 systemd[1]: enable_a078213d-e1c9-4073-aa18-bdb3cc076462.scope: Succeeded. | |
Jun 20 00:09:46 fv-az72-309 lxd.activate[689]: => Starting LXD activation | |
Jun 20 00:09:46 fv-az72-309 lxd.activate[689]: ==> Loading snap configuration | |
Jun 20 00:09:46 fv-az72-309 lxd.activate[689]: ==> Checking for socket activation support | |
Jun 20 00:09:46 fv-az72-309 systemd[1]: dmesg.service: Succeeded. | |
Jun 20 00:09:46 fv-az72-309 snapd[691]: AppArmor status: apparmor is enabled and all features are available | |
Jun 20 00:09:46 fv-az72-309 snapd[691]: overlord.go:263: Acquiring state lock file | |
Jun 20 00:09:46 fv-az72-309 snapd[691]: overlord.go:268: Acquired state lock file | |
Jun 20 00:09:46 fv-az72-309 snapd[691]: daemon.go:247: started snapd/2.56 (series 16; classic) ubuntu/20.04 (amd64) linux/5.13.0-1029-azure. | |
Jun 20 00:09:46 fv-az72-309 kernel: loop3: detected capacity change from 0 to 8 | |
Jun 20 00:09:46 fv-az72-309 systemd[1]: tmp-syscheck\x2dmountpoint\x2d169127102.mount: Succeeded. | |
Jun 20 00:09:47 fv-az72-309 snapd[691]: daemon.go:340: adjusting startup timeout by 45s (pessimistic estimate of 30s plus 5s per snap) | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Started Snap Daemon. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Starting Wait until snapd is fully seeded... | |
Jun 20 00:09:47 fv-az72-309 dbus-daemon[658]: [system] Activating via systemd: service name='org.freedesktop.timedate1' unit='dbus-org.freedesktop.timedate1.service' requested by ':1.11' (uid=0 pid=691 comm="/usr/lib/snapd/snapd " label="unconfined") | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Starting Time & Date Service... | |
Jun 20 00:09:47 fv-az72-309 dbus-daemon[658]: [system] Successfully activated service 'org.freedesktop.timedate1' | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Started Time & Date Service. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Finished Wait until snapd is fully seeded. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Starting Apply the settings specified in cloud-config... | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Condition check resulted in Auto import assertions from block devices being skipped. | |
Jun 20 00:09:47 fv-az72-309 lxd.activate[689]: ==> Setting LXD socket ownership | |
Jun 20 00:09:47 fv-az72-309 lxd.activate[689]: ==> LXD never started on this system, no need to start it now | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: snap.lxd.activate.service: Succeeded. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Finished Service for snap application lxd.activate. | |
Jun 20 00:09:47 fv-az72-309 cloud-init[1367]: Cloud-init v. 22.2-0ubuntu1~20.04.2 running 'modules:config' at Mon, 20 Jun 2022 00:09:47 +0000. Up 15.74 seconds. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Finished Apply the settings specified in cloud-config. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Starting Write warning to Azure ephemeral disk... | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Finished Write warning to Azure ephemeral disk. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Reached target Multi-User System. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Reached target Graphical Interface. | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Starting Execute cloud user/final scripts... | |
Jun 20 00:09:47 fv-az72-309 systemd[1]: Starting Update UTMP about System Runlevel Changes... | |
Jun 20 00:09:48 fv-az72-309 systemd[1]: systemd-update-utmp-runlevel.service: Succeeded. | |
Jun 20 00:09:48 fv-az72-309 systemd[1]: Finished Update UTMP about System Runlevel Changes. | |
Jun 20 00:09:48 fv-az72-309 python3[853]: 2022-06-20T00:09:48.247268Z INFO ExtHandler [Microsoft.Azure.Extensions.CustomScript-2.1.6] Command: bin/custom-script-shim enable | |
Jun 20 00:09:48 fv-az72-309 python3[853]: [stdout] | |
Jun 20 00:09:48 fv-az72-309 python3[853]: Writing a placeholder status file indicating progress before forking: /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/status/99.status | |
Jun 20 00:09:48 fv-az72-309 python3[853]: + nohup /var/lib/waagent/Microsoft.Azure.Extensions.CustomScript-2.1.6/bin/custom-script-extension enable | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event=start | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event=pre-check | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="comparing seqnum" path=mrseq | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="seqnum saved" path=mrseq | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="reading configuration" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="read configuration" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="validating json schema" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="json schema valid" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="parsing configuration json" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="parsed configuration json" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="validating configuration logically" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="validated configuration" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="creating output directory" path=/var/lib/waagent/custom-script/download/99 | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="created output directory" | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 files=0 | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="executing command" output=/var/lib/waagent/custom-script/download/99 | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="executing protected commandToExecute" output=/var/lib/waagent/custom-script/download/99 | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event="executed command" output=/var/lib/waagent/custom-script/download/99 | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event=enabled | |
Jun 20 00:09:48 fv-az72-309 python3[853]: time=2022-06-20T00:09:46Z version=v2.1.6/git@fc181d8-dirty operation=enable seq=99 event=end | |
Jun 20 00:09:48 fv-az72-309 python3[853]: [stderr] | |
Jun 20 00:09:48 fv-az72-309 python3[853]: Running scope as unit: enable_a078213d-e1c9-4073-aa18-bdb3cc076462.scope | |
Jun 20 00:09:48 fv-az72-309 python3[853]: 2022-06-20T00:09:48.249383Z INFO ExtHandler ExtHandler ProcessExtensionsInGoalState completed [Incarnation: 1; 4368 ms; Activity Id: 48fbd77a-0509-46ad-81a1-1394da8eb4b4; Correlation Id: 16378185-6322-4308-98a6-03c89673c297; GS Creation Time: 2022-06-20T00:07:55.901245Z] | |
Jun 20 00:09:48 fv-az72-309 python3[853]: 2022-06-20T00:09:48.279861Z INFO ExtHandler ExtHandler Extension status: [('Microsoft.Azure.Extensions.CustomScript', 'success')] | |
Jun 20 00:09:48 fv-az72-309 python3[853]: 2022-06-20T00:09:48.280279Z INFO ExtHandler ExtHandler All extensions in the goal state have reached a terminal state: [('Microsoft.Azure.Extensions.CustomScript', 'success')] | |
Jun 20 00:09:48 fv-az72-309 python3[853]: 2022-06-20T00:09:48.280630Z INFO ExtHandler ExtHandler Looking for existing remote access users. | |
Jun 20 00:09:48 fv-az72-309 python3[853]: 2022-06-20T00:09:48.291931Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.7.1.0 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D409D817-B676-42D6-87B2-4FC0E04C82EB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1395]: ############################################################# | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1396]: -----BEGIN SSH HOST KEY FINGERPRINTS----- | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1398]: 1024 SHA256:XGcH1MwJBdIF6Dewo/X0osa6enb06nIxLFk5/q6UAm8 root@fv-az72-309 (DSA) | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1400]: 256 SHA256:6mlnK30thyw5xPDk04GqP6xQuDIEULkyXUpZlLeH/us root@fv-az72-309 (ECDSA) | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1402]: 256 SHA256:CkHtFnUi6AlzMN/rV+bhb10BzsSd4ft4SZpQoIuutqU root@fv-az72-309 (ED25519) | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1404]: 3072 SHA256:xgHytXr9AoLrM03Ufz16C0Uv9NChaKGn5zsbtoCu2Gg root@fv-az72-309 (RSA) | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1405]: -----END SSH HOST KEY FINGERPRINTS----- | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1406]: ############################################################# | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1393]: Cloud-init v. 22.2-0ubuntu1~20.04.2 running 'modules:final' at Mon, 20 Jun 2022 00:09:48 +0000. Up 16.62 seconds. | |
Jun 20 00:09:48 fv-az72-309 cloud-init[1393]: Cloud-init v. 22.2-0ubuntu1~20.04.2 finished at Mon, 20 Jun 2022 00:09:48 +0000. Datasource DataSourceAzure [seed=/var/lib/waagent]. Up 16.83 seconds | |
Jun 20 00:09:48 fv-az72-309 systemd[1]: Finished Execute cloud user/final scripts. | |
Jun 20 00:09:48 fv-az72-309 systemd[1]: Reached target Cloud-init target. | |
Jun 20 00:09:48 fv-az72-309 systemd[1]: Startup finished in 2.377s (kernel) + 14.528s (userspace) = 16.905s. | |
Jun 20 00:10:03 fv-az72-309 provisioner[1432]: √ Connected to GitHub | |
Jun 20 00:10:03 fv-az72-309 provisioner[1432]: Current runner version: '2.293.0' | |
Jun 20 00:10:03 fv-az72-309 provisioner[1432]: 2022-06-20 00:10:03Z: Listening for Jobs | |
Jun 20 00:10:04 fv-az72-309 provisioner[1432]: 2022-06-20 00:10:04Z: Running job: py38-ansible_4 | |
Jun 20 00:10:05 fv-az72-309 chronyd[666]: Selected source PHC0 | |
Jun 20 00:10:06 fv-az72-309 systemd[1]: systemd-fsckd.service: Succeeded. | |
Jun 20 00:10:08 fv-az72-309 sudo[1563]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/apt-get update | |
Jun 20 00:10:08 fv-az72-309 sudo[1563]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:10:10 fv-az72-309 dbus-daemon[658]: [system] Activating via systemd: service name='org.freedesktop.PackageKit' unit='packagekit.service' requested by ':1.13' (uid=0 pid=2142 comm="/usr/bin/gdbus call --system --dest org.freedeskto" label="unconfined") | |
Jun 20 00:10:10 fv-az72-309 systemd[1]: Starting PackageKit Daemon... | |
Jun 20 00:10:10 fv-az72-309 PackageKit[2145]: daemon start | |
Jun 20 00:10:10 fv-az72-309 dbus-daemon[658]: [system] Successfully activated service 'org.freedesktop.PackageKit' | |
Jun 20 00:10:10 fv-az72-309 systemd[1]: Started PackageKit Daemon. | |
Jun 20 00:10:13 fv-az72-309 systemd[1]: Starting Online ext4 Metadata Check for All Filesystems... | |
Jun 20 00:10:13 fv-az72-309 systemd[1]: e2scrub_all.service: Succeeded. | |
Jun 20 00:10:13 fv-az72-309 systemd[1]: Finished Online ext4 Metadata Check for All Filesystems. | |
Jun 20 00:10:16 fv-az72-309 sudo[1563]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:10:16 fv-az72-309 sudo[2229]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/apt-key add - | |
Jun 20 00:10:16 fv-az72-309 sudo[2229]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:10:17 fv-az72-309 sudo[2229]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:10:17 fv-az72-309 sudo[2695]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/add-apt-repository deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable | |
Jun 20 00:10:17 fv-az72-309 sudo[2695]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:10:17 fv-az72-309 systemd[1]: systemd-timedated.service: Succeeded. | |
Jun 20 00:10:22 fv-az72-309 sudo[2695]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:10:22 fv-az72-309 sudo[3435]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/apt-get update | |
Jun 20 00:10:22 fv-az72-309 sudo[3435]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:10:23 fv-az72-309 kernel: hv_balloon: Max. dynamic memory size: 7168 MB | |
Jun 20 00:10:24 fv-az72-309 sudo[3435]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:10:24 fv-az72-309 sudo[2227]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/apt-get -y -o Dpkg::Options::=--force-confnew install docker-ce | |
Jun 20 00:10:24 fv-az72-309 sudo[2227]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Stopping Docker Application Container Engine... | |
Jun 20 00:10:35 fv-az72-309 dockerd[913]: time="2022-06-20T00:10:35.116313511Z" level=info msg="Processing signal 'terminated'" | |
Jun 20 00:10:35 fv-az72-309 dockerd[913]: time="2022-06-20T00:10:35.117809018Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby | |
Jun 20 00:10:35 fv-az72-309 dockerd[913]: time="2022-06-20T00:10:35.117972530Z" level=info msg="Daemon shutdown complete" | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: docker.service: Succeeded. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Stopped Docker Application Container Engine. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: docker.socket: Succeeded. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Closed Docker Socket for the API. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Stopping containerd container runtime... | |
Jun 20 00:10:35 fv-az72-309 containerd[707]: time="2022-06-20T00:10:35.864411488Z" level=info msg="Stop CRI service" | |
Jun 20 00:10:35 fv-az72-309 containerd[707]: time="2022-06-20T00:10:35.875503484Z" level=info msg="Stop CRI service" | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: containerd.service: Succeeded. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Stopped containerd container runtime. | |
Jun 20 00:10:35 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:36 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: Starting containerd container runtime... | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header" | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.796723350Z" level=info msg="starting containerd" revision=10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 version=1.6.6 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.814972692Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.815320319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817331178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817549095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817575097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817589598Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817599999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817623101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817757012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817918524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817942026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817960228Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.817971528Z" level=info msg="metadata content store policy set" policy=shared | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818061336Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818078537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818092038Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818118740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818133741Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818148042Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818160643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818184145Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818198046Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818212247Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818225048Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818241550Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818277953Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818312855Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818627680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818669784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818684285Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818727188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818743789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818756790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818768791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818781592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818795094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818820496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818831896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818844797Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818880700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818893401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818908602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818919303Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818933304Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818944505Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.818966007Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.819166623Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.819201426Z" level=info msg=serving... address=/run/containerd/containerd.sock | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: Started containerd container runtime. | |
Jun 20 00:10:47 fv-az72-309 containerd[4465]: time="2022-06-20T00:10:47.820354117Z" level=info msg="containerd successfully booted in 0.024895s" | |
Jun 20 00:10:47 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:48 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: Starting Docker Socket for the API. | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: Listening on Docker Socket for the API. | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: Starting Docker Application Container Engine... | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.420511696Z" level=info msg="Starting up" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.421066140Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.422105122Z" level=info msg="parsed scheme: \"unix\"" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.422132724Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.422158026Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.422169027Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.425101059Z" level=info msg="parsed scheme: \"unix\"" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.425124860Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.425160763Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.425171164Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: var-lib-docker-overlay2-check\x2doverlayfs\x2dsupport3991811295-merged.mount: Succeeded. | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.592773901Z" level=info msg="[graphdriver] using prior storage driver: overlay2" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.602167743Z" level=warning msg="Your kernel does not support CPU realtime scheduler" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.602201646Z" level=warning msg="Your kernel does not support cgroup blkio weight" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.602208646Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.602353958Z" level=info msg="Loading containers: start." | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.686397796Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.720531291Z" level=info msg="Loading containers: done." | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1251787295-merged.mount: Succeeded. | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.739961926Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.740165442Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17 | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.740214946Z" level=info msg="Daemon has completed initialization" | |
Jun 20 00:10:49 fv-az72-309 systemd[1]: Started Docker Application Container Engine. | |
Jun 20 00:10:49 fv-az72-309 dockerd[4625]: time="2022-06-20T00:10:49.756247612Z" level=info msg="API listen on /run/docker.sock" | |
Jun 20 00:10:55 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:10:55 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:10:55 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:10:55 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:10:57 fv-az72-309 sudo[2227]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:10:58 fv-az72-309 sudo[5792]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/install kubectl /usr/local/bin | |
Jun 20 00:10:58 fv-az72-309 sudo[5792]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:10:58 fv-az72-309 sudo[5792]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:11:00 fv-az72-309 sudo[5801]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/install minikube-linux-amd64 /usr/local/bin/minikube | |
Jun 20 00:11:00 fv-az72-309 sudo[5801]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:11:00 fv-az72-309 sudo[5801]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:11:00 fv-az72-309 sudo[5855]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/echo -n | |
Jun 20 00:11:00 fv-az72-309 sudo[5855]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:11:00 fv-az72-309 sudo[5855]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:11:00 fv-az72-309 sudo[5857]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/podman version --format {{.Version}} | |
Jun 20 00:11:00 fv-az72-309 sudo[5857]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:11:05 fv-az72-309 podman[5858]: 2022-06-20 00:11:05.639907895 +0000 UTC m=+4.331075114 system refresh | |
Jun 20 00:11:05 fv-az72-309 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. | |
Jun 20 00:11:05 fv-az72-309 sudo[5857]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:11:16 fv-az72-309 systemd[1]: var-lib-docker-overlay2-6c321a59084cff0062aa517d0b7c15b48b7ae5954a32f720afef9c3d9fac2981-merged.mount: Succeeded. | |
Jun 20 00:11:18 fv-az72-309 systemd[1]: var-lib-docker-overlay2-e97c88604001e5d750978a5f1491da128224cfbc0159d279014fd7a0c4fef3aa-merged.mount: Succeeded. | |
Jun 20 00:11:18 fv-az72-309 systemd[1]: var-lib-docker-overlay2-939c81a56812989ae0c022f9b9e4178df39ad3eba5204a23d01089957cf3d9df-merged.mount: Succeeded. | |
Jun 20 00:11:18 fv-az72-309 systemd[1]: var-lib-docker-overlay2-701f55828eb21355bdea257a5a7834ee61286cb8b6682b45cae571479c161431-merged.mount: Succeeded. | |
Jun 20 00:11:18 fv-az72-309 systemd[1]: var-lib-docker-overlay2-a6d415c4f816fe6aebb48dadcda8cf5c74a58cc1d79d327bb9476a50bf8f22ad-merged.mount: Succeeded. | |
Jun 20 00:11:18 fv-az72-309 systemd[1]: var-lib-docker-overlay2-eadead2972bf101e5c1a2f5c42210bce6d776d0fdb849bc18d7d7da54255a41b-merged.mount: Succeeded. | |
Jun 20 00:11:18 fv-az72-309 systemd[1]: var-lib-docker-overlay2-231e41ff55fdeb4fdff88135e4a347d309275fdaede65728ad2516ae746b4a27-merged.mount: Succeeded. | |
Jun 20 00:11:19 fv-az72-309 systemd[1]: var-lib-docker-overlay2-da081718bdf45f5d15924e71da9a20ff0196d55ed0d3a1cbe6f1c59a4ad34bdc-merged.mount: Succeeded. | |
Jun 20 00:11:19 fv-az72-309 systemd[1]: var-lib-docker-overlay2-d9ed3e8393decd85b3a4c6a80650266cf6dc79425a38460117d7beef209c7033-merged.mount: Succeeded. | |
Jun 20 00:11:19 fv-az72-309 systemd[1]: var-lib-docker-overlay2-bf06a58cbabae6bd25d1718d74800b847146023670afc5a357369262de6563e0-merged.mount: Succeeded. | |
Jun 20 00:11:20 fv-az72-309 systemd[1]: var-lib-docker-overlay2-c78d379f8f717c9f54d82c2aa6aed83318867e72fdab84871d96bcd0a45e9390-merged.mount: Succeeded. | |
Jun 20 00:11:20 fv-az72-309 systemd[1]: var-lib-docker-overlay2-e596a31f802c317d50325fc3310b26e28d296e9453ebf05ea002eff611b70fac-merged.mount: Succeeded. | |
Jun 20 00:11:21 fv-az72-309 systemd[1]: var-lib-docker-overlay2-b90c0d51a1b98be92f1287c573dec35947ecad93feeea17df0cce10e710d9611-merged.mount: Succeeded. | |
Jun 20 00:11:24 fv-az72-309 systemd[1]: var-lib-docker-overlay2-9095418bb28b6b95b432d5399a4f225f5dd45ba43665337140607c45b952c71c-merged.mount: Succeeded. | |
Jun 20 00:11:26 fv-az72-309 systemd[1]: var-lib-docker-overlay2-a7d717a8db54f8dfdb56022e705ae2c1e6f05a680ae2429ec72371bfa0947da2-merged.mount: Succeeded. | |
Jun 20 00:11:26 fv-az72-309 systemd[1]: var-lib-docker-overlay2-69bd8e93815b6fe6189e90f5faafe1b2da213d92a8094703c4dd3e7c4fafcfd0-merged.mount: Succeeded. | |
Jun 20 00:11:26 fv-az72-309 systemd[1]: var-lib-docker-overlay2-9e91112175779bd56d8187989420c6e42097e2f0112ccb444814ede5c11f9c02-merged.mount: Succeeded. | |
Jun 20 00:11:27 fv-az72-309 systemd[1]: var-lib-docker-overlay2-5e73ccc9c742e427f132a279a66ce342e1aa1b74a06448970fd4addd7231bbbb-merged.mount: Succeeded. | |
Jun 20 00:11:29 fv-az72-309 systemd[1]: var-lib-docker-overlay2-c1b9946c54237f328340c6103a5a14b22e30a4476905c0f1101dd14bd34cf2bf-merged.mount: Succeeded. | |
Jun 20 00:11:30 fv-az72-309 systemd[1]: var-lib-docker-overlay2-e01d78150b1ef4cd2658a853c74c50b538b2aa58f5f7a2c8dbcb7cf1fabba111-merged.mount: Succeeded. | |
Jun 20 00:11:31 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27323b6802f90d3cb3e3787d540d1e2523e61fa9734fc84c0a8b4ddca2b4b24f-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-cd56f4e681eaca678eaed0d0264377b89c8e18b170563f9ace857ddfc149fcf1-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-55eb779a7ffb4780331b4a685ee90b85366c0b40d440e5eb5140e62419957dad-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-cb3c3a07917ad57f1ae7ed4818c6f62ce6d05d41a93d27f6d61f3df338dd3fd4-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-e07b79b8c2d6eb35285c01badba514e2b9e6e9a713f8c1a6aa16ae4e092676b7-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-f7f35747b342c73deefc5b36265ac1b816be16990ba9a573025627e734fd04a2-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-29aa876d126e1a3b2dcc87cabc9f2ec9139946d6dd1c21c37b0ab7c1b8d09bd1-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-d41d1a5a7740083f5b8013b2512160282a4c4dbb550d8ac9d8a9770684095f46-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-8bd0ccec22895d82675b97cb4ef1579f985e88edd3a8bbe45bc6c741c61535cd-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-a98382247c8d1a0e8732ccc7f4166fec1ce6f32ffb1cdc9c084e8e83b452f5b2-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-64dbe618f70ad0b5fe1b3f4ac82c405c50e20966745d4d47c8ee08f4017e8ad9-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-ee7d050153ac0527212321510ca30bb925b626d30cdb8a7a1023f3671c4de2e2-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-9f9281029c0d0203621df207e6e7c833e8d51d56bc622dcb98b7468e76a3634b-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-316a1b8d8c82b4aecfb6dabafe3f9c1b9843fc8b9638ccfe320962bbdcb6175b-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-f36743a9ce57dd9940ca50b10bd2ee08a0a7fb38ef9b73508df7b2fc8e84d1dc-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-3ed3613e2ba316942d24faeca9b157a20297652ed61f3d0bf86cc0eb8fb64da8-merged.mount: Succeeded. | |
Jun 20 00:11:32 fv-az72-309 systemd[1]: var-lib-docker-overlay2-730175aadb1969078d811fcb5011a8b7390b6a6c30b6d127148694f4368bdf10-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-3b2bee075d790f9351547c0180f73dfc6b0111299b26abe53b09dda6dd4d7277-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-afcc22a6f583683733aaf876a399a3f29506768725cad8c3efea2ce479349e8f-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-4fa2c6f702477b3992762a5b58303381268ba5b93cd8778df3f48f6c8de91eb6-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-0f8a0319a399c0b3677db977f737db88127a034ca626f536a48f6e3482a45960-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-d4194db259036492eb1b65366c0214ee4cf3f3531c528d0c8207ed6b63435ae0-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-6b7009d4226b6c3f16c2b6c871514f82fe4cadc1442a53b24399db6300fe548c-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd[1]: var-lib-docker-overlay2-e9cc4f7e92378ab897810d8312cfe661382226bd5e03aa4b82fef3cc32c92422-merged.mount: Succeeded. | |
Jun 20 00:11:33 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '4' we don't know about, ignoring. | |
Jun 20 00:11:33 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '4' we don't know about, ignoring. | |
Jun 20 00:11:33 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '4' we don't know about, ignoring. | |
Jun 20 00:11:33 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '4' we don't know about, ignoring. | |
Jun 20 00:11:33 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 4 seen, reloading interface list | |
Jun 20 00:11:33 fv-az72-309 systemd-udevd[6451]: Using default interface naming scheme 'v245'. | |
Jun 20 00:11:33 fv-az72-309 systemd-udevd[6451]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:33 fv-az72-309 systemd-networkd[532]: br-d1e9d479f443: Link UP | |
Jun 20 00:11:33 fv-az72-309 dockerd[4625]: time="2022-06-20T00:11:33.990813528Z" level=warning msg="reference for unknown type: " digest="sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2" remote="gcr.io/k8s-minikube/kicbase@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2" | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: var-lib-docker-overlay2-3ca3a993ed9e2d45f8bfa6b6ea429467eb15347912f720aee92e4256fbf6e461\x2dinit-merged.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: var-lib-docker-overlay2-3ca3a993ed9e2d45f8bfa6b6ea429467eb15347912f720aee92e4256fbf6e461-merged.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6464]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered blocking state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 kernel: device vethf1b0b28 entered promiscuous mode | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6451]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6451]: vethf1b0b28: Could not generate persistent MAC: No data available | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6464]: Using default interface naming scheme 'v245'. | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6464]: veth63d8682: Could not generate persistent MAC: No data available | |
Jun 20 00:11:34 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 5 seen, reloading interface list | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: vethf1b0b28: Link UP | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered blocking state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered forwarding state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.492887573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.492956079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.492969780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.493149594Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/1a2717258ff0923e177c915368c689d0457b73667aed09174aa9b155d7804bb5 pid=6505 runtime=io.containerd.runc.v2 | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: run-docker-runtime\x2drunc-moby-1a2717258ff0923e177c915368c689d0457b73667aed09174aa9b155d7804bb5-runc.074VXr.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 kernel: eth0: renamed from veth63d8682 | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: vethf1b0b28: Gained carrier | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: docker0: Gained carrier | |
Jun 20 00:11:34 fv-az72-309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf1b0b28: link becomes ready | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered blocking state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered forwarding state | |
Jun 20 00:11:34 fv-az72-309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready | |
Jun 20 00:11:34 fv-az72-309 dockerd[4625]: time="2022-06-20T00:11:34.722254430Z" level=info msg="ignoring event" container=1a2717258ff0923e177c915368c689d0457b73667aed09174aa9b155d7804bb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.722628660Z" level=info msg="shim disconnected" id=1a2717258ff0923e177c915368c689d0457b73667aed09174aa9b155d7804bb5 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.722683364Z" level=warning msg="cleaning up after shim disconnected" id=1a2717258ff0923e177c915368c689d0457b73667aed09174aa9b155d7804bb5 namespace=moby | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.722697665Z" level=info msg="cleaning up dead shim" | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.733387312Z" level=warning msg="cleanup warnings time=\"2022-06-20T00:11:34Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6566 runtime=io.containerd.runc.v2\n" | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: vethf1b0b28: Lost carrier | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 kernel: veth63d8682: renamed from eth0 | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6496]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:34 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 5 seen, reloading interface list | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6496]: Using default interface naming scheme 'v245'. | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: vethf1b0b28: Link DOWN | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 kernel: device vethf1b0b28 left promiscuous mode | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethf1b0b28) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '6' we don't know about, ignoring. | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '6' we don't know about, ignoring. | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: veth63d8682: Failed to wait for the interface to be initialized: No such device | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6496]: veth63d8682: Failed to get link config: No such device | |
Jun 20 00:11:34 fv-az72-309 networkd-dispatcher[6584]: Interface "vethf1b0b28" not found. | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: networkd-dispatcher.service: Got notification message from PID 6584, but reception only permitted for main PID 674 | |
Jun 20 00:11:34 fv-az72-309 networkd-dispatcher[674]: ERROR:Failed to get interface "vethf1b0b28" status: Command '['/usr/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'vethf1b0b28']' returned non-zero exit status 1. | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: docker0: Lost carrier | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: run-docker-netns-a6977305fb52.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: var-lib-docker-overlay2-3ca3a993ed9e2d45f8bfa6b6ea429467eb15347912f720aee92e4256fbf6e461-merged.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: var-lib-docker-overlay2-c5545e751edee2d4c5f1da7b235db9235e2b92dbdc11130852f9deee263df41b\x2dinit-merged.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 systemd[1]: var-lib-docker-overlay2-c5545e751edee2d4c5f1da7b235db9235e2b92dbdc11130852f9deee263df41b-merged.mount: Succeeded. | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered blocking state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 kernel: device vethfb268c1 entered promiscuous mode | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered blocking state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered forwarding state | |
Jun 20 00:11:34 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered disabled state | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6496]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6496]: vethecca8aa: Could not generate persistent MAC: No data available | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6451]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:34 fv-az72-309 systemd-udevd[6451]: vethfb268c1: Could not generate persistent MAC: No data available | |
Jun 20 00:11:34 fv-az72-309 systemd-networkd[532]: vethfb268c1: Link UP | |
Jun 20 00:11:34 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 7 seen, reloading interface list | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.955389186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.955465192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.955478493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 | |
Jun 20 00:11:34 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:34.955850122Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/f665726575a2f573f76c241a1713ecbad3279d6357c95f1a74ce0ad837542379 pid=6615 runtime=io.containerd.runc.v2 | |
Jun 20 00:11:35 fv-az72-309 kernel: eth0: renamed from vethecca8aa | |
Jun 20 00:11:35 fv-az72-309 systemd-networkd[532]: vethfb268c1: Gained carrier | |
Jun 20 00:11:35 fv-az72-309 systemd-networkd[532]: docker0: Gained carrier | |
Jun 20 00:11:35 fv-az72-309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfb268c1: link becomes ready | |
Jun 20 00:11:35 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered blocking state | |
Jun 20 00:11:35 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered forwarding state | |
Jun 20 00:11:36 fv-az72-309 systemd-networkd[532]: docker0: Gained IPv6LL | |
Jun 20 00:11:36 fv-az72-309 systemd-networkd[532]: vethfb268c1: Gained IPv6LL | |
Jun 20 00:11:38 fv-az72-309 dockerd[4625]: time="2022-06-20T00:11:38.494853075Z" level=info msg="ignoring event" container=f665726575a2f573f76c241a1713ecbad3279d6357c95f1a74ce0ad837542379 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" | |
Jun 20 00:11:38 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:38.495523028Z" level=info msg="shim disconnected" id=f665726575a2f573f76c241a1713ecbad3279d6357c95f1a74ce0ad837542379 | |
Jun 20 00:11:38 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:38.495579832Z" level=warning msg="cleaning up after shim disconnected" id=f665726575a2f573f76c241a1713ecbad3279d6357c95f1a74ce0ad837542379 namespace=moby | |
Jun 20 00:11:38 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:38.495589433Z" level=info msg="cleaning up dead shim" | |
Jun 20 00:11:38 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:38.505608326Z" level=warning msg="cleanup warnings time=\"2022-06-20T00:11:38Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6674 runtime=io.containerd.runc.v2\n" | |
Jun 20 00:11:38 fv-az72-309 systemd-networkd[532]: vethfb268c1: Lost carrier | |
Jun 20 00:11:38 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered disabled state | |
Jun 20 00:11:38 fv-az72-309 kernel: vethecca8aa: renamed from eth0 | |
Jun 20 00:11:38 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 7 seen, reloading interface list | |
Jun 20 00:11:38 fv-az72-309 systemd-udevd[6686]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:38 fv-az72-309 systemd-udevd[6686]: Using default interface naming scheme 'v245'. | |
Jun 20 00:11:38 fv-az72-309 systemd-udevd[6686]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:38 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered disabled state | |
Jun 20 00:11:38 fv-az72-309 kernel: device vethfb268c1 left promiscuous mode | |
Jun 20 00:11:38 fv-az72-309 kernel: docker0: port 1(vethfb268c1) entered disabled state | |
Jun 20 00:11:38 fv-az72-309 systemd-networkd[532]: vethfb268c1: Link DOWN | |
Jun 20 00:11:38 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '8' we don't know about, ignoring. | |
Jun 20 00:11:38 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '8' we don't know about, ignoring. | |
Jun 20 00:11:38 fv-az72-309 networkd-dispatcher[6691]: Interface "vethfb268c1" not found. | |
Jun 20 00:11:38 fv-az72-309 systemd[1]: networkd-dispatcher.service: Got notification message from PID 6691, but reception only permitted for main PID 674 | |
Jun 20 00:11:38 fv-az72-309 networkd-dispatcher[674]: ERROR:Failed to get interface "vethfb268c1" status: Command '['/usr/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'vethfb268c1']' returned non-zero exit status 1. | |
Jun 20 00:11:39 fv-az72-309 systemd[1]: run-docker-netns-05eaf9a0ca49.mount: Succeeded. | |
Jun 20 00:11:39 fv-az72-309 systemd[1]: var-lib-docker-overlay2-c5545e751edee2d4c5f1da7b235db9235e2b92dbdc11130852f9deee263df41b-merged.mount: Succeeded. | |
Jun 20 00:11:39 fv-az72-309 systemd-networkd[532]: docker0: Lost carrier | |
Jun 20 00:11:42 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16\x2dinit-merged.mount: Succeeded. | |
Jun 20 00:11:42 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16-merged.mount: Succeeded. | |
Jun 20 00:11:42 fv-az72-309 systemd-udevd[6734]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:42 fv-az72-309 systemd-udevd[6734]: Using default interface naming scheme 'v245'. | |
Jun 20 00:11:42 fv-az72-309 systemd-udevd[6734]: veth37c7cf5: Could not generate persistent MAC: No data available | |
Jun 20 00:11:42 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 9 seen, reloading interface list | |
Jun 20 00:11:42 fv-az72-309 systemd-networkd[532]: vethc0cce7e: Link UP | |
Jun 20 00:11:42 fv-az72-309 kernel: br-d1e9d479f443: port 1(vethc0cce7e) entered blocking state | |
Jun 20 00:11:42 fv-az72-309 kernel: br-d1e9d479f443: port 1(vethc0cce7e) entered disabled state | |
Jun 20 00:11:42 fv-az72-309 kernel: device vethc0cce7e entered promiscuous mode | |
Jun 20 00:11:42 fv-az72-309 systemd-udevd[6735]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:11:42 fv-az72-309 systemd-udevd[6735]: Using default interface naming scheme 'v245'. | |
Jun 20 00:11:42 fv-az72-309 systemd-udevd[6735]: vethc0cce7e: Could not generate persistent MAC: No data available | |
Jun 20 00:11:42 fv-az72-309 dockerd[4625]: time="2022-06-20T00:11:42.578170727Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" | |
Jun 20 00:11:42 fv-az72-309 dockerd[4625]: time="2022-06-20T00:11:42.578198129Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]" | |
Jun 20 00:11:42 fv-az72-309 dockerd[4625]: time="2022-06-20T00:11:42.669328106Z" level=warning msg="path in container /dev/fuse already exists in privileged mode" container=a20c82aba6259bc7ba2452946286e99f59b49b1905483f4512c4f453a3631e0b | |
Jun 20 00:11:42 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:42.686498758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 | |
Jun 20 00:11:42 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:42.686550262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 | |
Jun 20 00:11:42 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:42.686564864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 | |
Jun 20 00:11:42 fv-az72-309 containerd[4465]: time="2022-06-20T00:11:42.686740377Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/a20c82aba6259bc7ba2452946286e99f59b49b1905483f4512c4f453a3631e0b pid=6821 runtime=io.containerd.runc.v2 | |
Jun 20 00:11:42 fv-az72-309 kernel: eth0: renamed from veth37c7cf5 | |
Jun 20 00:11:42 fv-az72-309 systemd-networkd[532]: vethc0cce7e: Gained carrier | |
Jun 20 00:11:42 fv-az72-309 systemd-networkd[532]: br-d1e9d479f443: Gained carrier | |
Jun 20 00:11:42 fv-az72-309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc0cce7e: link becomes ready | |
Jun 20 00:11:42 fv-az72-309 kernel: br-d1e9d479f443: port 1(vethc0cce7e) entered blocking state | |
Jun 20 00:11:42 fv-az72-309 kernel: br-d1e9d479f443: port 1(vethc0cce7e) entered forwarding state | |
Jun 20 00:11:42 fv-az72-309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): br-d1e9d479f443: link becomes ready | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: run-docker-runtime\x2drunc-moby-a20c82aba6259bc7ba2452946286e99f59b49b1905483f4512c4f453a3631e0b-runc.r3Nxcj.mount: Succeeded. | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16-merged-usr-lib-modules.mount: Succeeded. | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16-merged-etc-hosts.mount: Succeeded. | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16-merged-etc-hostname.mount: Succeeded. | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16-merged-etc-resolv.conf.mount: Succeeded. | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: var-lib-docker-overlay2-27b21e55169967d0d88fe85abc53300b79e020a26c271308cc67ef3a17972f16-merged-var.mount: Succeeded. | |
Jun 20 00:11:43 fv-az72-309 systemd-journald[180]: Received client request to flush runtime journal. | |
Jun 20 00:11:43 fv-az72-309 systemd[1]: run-docker-runtime\x2drunc-moby-a20c82aba6259bc7ba2452946286e99f59b49b1905483f4512c4f453a3631e0b-runc.W6g6fL.mount: Succeeded. | |
Jun 20 00:11:44 fv-az72-309 systemd-networkd[532]: br-d1e9d479f443: Gained IPv6LL | |
Jun 20 00:11:44 fv-az72-309 systemd-networkd[532]: vethc0cce7e: Gained IPv6LL | |
Jun 20 00:12:02 fv-az72-309 sudo[9090]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/apt install -y qemu qemu-kvm libvirt-daemon libvirt-clients bridge-utils virt-manager libvirt-daemon-system | |
Jun 20 00:12:02 fv-az72-309 sudo[9090]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:12:14 fv-az72-309 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) | |
Jun 20 00:12:14 fv-az72-309 kernel: IPVS: Connection hash table configured (size=4096, memory=64Kbytes) | |
Jun 20 00:12:14 fv-az72-309 kernel: IPVS: ipvs loaded. | |
Jun 20 00:12:14 fv-az72-309 kernel: IPVS: [rr] scheduler registered. | |
Jun 20 00:12:14 fv-az72-309 kernel: IPVS: [wrr] scheduler registered. | |
Jun 20 00:12:14 fv-az72-309 kernel: IPVS: [sh] scheduler registered. | |
Jun 20 00:12:15 fv-az72-309 kernel: docker0: port 1(vethf3c0380) entered blocking state | |
Jun 20 00:12:15 fv-az72-309 kernel: docker0: port 1(vethf3c0380) entered disabled state | |
Jun 20 00:12:15 fv-az72-309 kernel: device vethf3c0380 entered promiscuous mode | |
Jun 20 00:12:15 fv-az72-309 kernel: eth0: renamed from veth76d993e | |
Jun 20 00:12:15 fv-az72-309 kernel: docker0: port 1(vethf3c0380) entered blocking state | |
Jun 20 00:12:15 fv-az72-309 kernel: docker0: port 1(vethf3c0380) entered forwarding state | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:15 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:20 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:21 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:21 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:21 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:21 fv-az72-309 groupadd[10938]: group added to /etc/group: name=rdma, GID=127 | |
Jun 20 00:12:21 fv-az72-309 groupadd[10938]: group added to /etc/gshadow: name=rdma | |
Jun 20 00:12:21 fv-az72-309 groupadd[10938]: new group: name=rdma, GID=127 | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: Starting QEMU KVM preparation - module, ksm, hugepages... | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: Finished QEMU KVM preparation - module, ksm, hugepages. | |
Jun 20 00:12:22 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:23 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:23 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:23 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:23 fv-az72-309 groupadd[11140]: group added to /etc/group: name=libvirt, GID=128 | |
Jun 20 00:12:23 fv-az72-309 groupadd[11140]: group added to /etc/gshadow: name=libvirt | |
Jun 20 00:12:23 fv-az72-309 groupadd[11140]: new group: name=libvirt, GID=128 | |
Jun 20 00:12:23 fv-az72-309 useradd[11150]: new user: name=libvirt-qemu, UID=64055, GID=108, home=/var/lib/libvirt, shell=/usr/sbin/nologin, from=none | |
Jun 20 00:12:23 fv-az72-309 chage[11158]: changed password expiry for libvirt-qemu | |
Jun 20 00:12:23 fv-az72-309 chfn[11162]: changed user 'libvirt-qemu' information | |
Jun 20 00:12:24 fv-az72-309 groupadd[11171]: group added to /etc/group: name=libvirt-qemu, GID=64055 | |
Jun 20 00:12:24 fv-az72-309 groupadd[11171]: group added to /etc/gshadow: name=libvirt-qemu | |
Jun 20 00:12:24 fv-az72-309 groupadd[11171]: new group: name=libvirt-qemu, GID=64055 | |
Jun 20 00:12:24 fv-az72-309 gpasswd[11178]: user libvirt-qemu added by root to group libvirt-qemu | |
Jun 20 00:12:24 fv-az72-309 gpasswd[11188]: user runneradmin added by root to group libvirt | |
Jun 20 00:12:24 fv-az72-309 groupadd[11196]: group added to /etc/group: name=libvirt-dnsmasq, GID=129 | |
Jun 20 00:12:24 fv-az72-309 groupadd[11196]: group added to /etc/gshadow: name=libvirt-dnsmasq | |
Jun 20 00:12:24 fv-az72-309 groupadd[11196]: new group: name=libvirt-dnsmasq, GID=129 | |
Jun 20 00:12:24 fv-az72-309 useradd[11204]: new user: name=libvirt-dnsmasq, UID=118, GID=129, home=/var/lib/libvirt/dnsmasq, shell=/usr/sbin/nologin, from=none | |
Jun 20 00:12:24 fv-az72-309 chage[11212]: changed password expiry for libvirt-dnsmasq | |
Jun 20 00:12:24 fv-az72-309 chfn[11216]: changed user 'libvirt-dnsmasq' information | |
Jun 20 00:12:24 fv-az72-309 audit[11337]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="virt-aa-helper" pid=11337 comm="apparmor_parser" | |
Jun 20 00:12:24 fv-az72-309 kernel: audit: type=1400 audit(1655683944.647:34): apparmor="STATUS" operation="profile_load" profile="unconfined" name="virt-aa-helper" pid=11337 comm="apparmor_parser" | |
Jun 20 00:12:24 fv-az72-309 audit[11343]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirtd" pid=11343 comm="apparmor_parser" | |
Jun 20 00:12:24 fv-az72-309 kernel: audit: type=1400 audit(1655683944.719:35): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirtd" pid=11343 comm="apparmor_parser" | |
Jun 20 00:12:24 fv-az72-309 audit[11343]: AVC apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirtd//qemu_bridge_helper" pid=11343 comm="apparmor_parser" | |
Jun 20 00:12:24 fv-az72-309 kernel: audit: type=1400 audit(1655683944.723:36): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirtd//qemu_bridge_helper" pid=11343 comm="apparmor_parser" | |
Jun 20 00:12:24 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:25 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:26 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Starting Libvirt local socket. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Listening on Libvirt local socket. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Listening on Libvirt local read-only socket. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Created slice Virtual Machine and Container Slice. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Listening on Libvirt admin socket. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Listening on Virtual machine lock manager socket. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Listening on Virtual machine log manager socket. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Condition check resulted in Virtual Machine and Container Storage (Compatibility) being skipped. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Starting Virtual Machine and Container Registration Service... | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Started Virtual Machine and Container Registration Service. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Starting Virtualization daemon... | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Started Virtualization daemon. | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '11' we don't know about, ignoring. | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '11' we don't know about, ignoring. | |
Jun 20 00:12:27 fv-az72-309 networkd-dispatcher[674]: WARNING:Unknown index 11 seen, reloading interface list | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '11' we don't know about, ignoring. | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: rtnl: received neighbor for link '11' we don't know about, ignoring. | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: virbr0-nic: Link UP | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: virbr0-nic: Gained carrier | |
Jun 20 00:12:27 fv-az72-309 kernel: virbr0: port 1(virbr0-nic) entered blocking state | |
Jun 20 00:12:27 fv-az72-309 kernel: virbr0: port 1(virbr0-nic) entered disabled state | |
Jun 20 00:12:27 fv-az72-309 kernel: device virbr0-nic entered promiscuous mode | |
Jun 20 00:12:27 fv-az72-309 systemd-udevd[11620]: Using default interface naming scheme 'v245'. | |
Jun 20 00:12:27 fv-az72-309 systemd-udevd[11620]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:12:27 fv-az72-309 systemd-udevd[11619]: Using default interface naming scheme 'v245'. | |
Jun 20 00:12:27 fv-az72-309 systemd-udevd[11619]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:27 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: virbr0: Link UP | |
Jun 20 00:12:27 fv-az72-309 kernel: virbr0: port 1(virbr0-nic) entered blocking state | |
Jun 20 00:12:27 fv-az72-309 kernel: virbr0: port 1(virbr0-nic) entered listening state | |
Jun 20 00:12:27 fv-az72-309 dnsmasq[11724]: started, version 2.80 cachesize 150 | |
Jun 20 00:12:27 fv-az72-309 dnsmasq[11724]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth nettlehash DNSSEC loop-detect inotify dumpfile | |
Jun 20 00:12:27 fv-az72-309 dnsmasq-dhcp[11724]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h | |
Jun 20 00:12:27 fv-az72-309 dnsmasq-dhcp[11724]: DHCP, sockets bound exclusively to interface virbr0 | |
Jun 20 00:12:27 fv-az72-309 dnsmasq[11724]: reading /etc/resolv.conf | |
Jun 20 00:12:27 fv-az72-309 dnsmasq[11724]: using nameserver 127.0.0.53#53 | |
Jun 20 00:12:27 fv-az72-309 kernel: virbr0: port 1(virbr0-nic) entered disabled state | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: virbr0-nic: Link DOWN | |
Jun 20 00:12:27 fv-az72-309 systemd-networkd[532]: virbr0-nic: Lost carrier | |
Jun 20 00:12:27 fv-az72-309 dnsmasq[11724]: read /etc/hosts - 8 addresses | |
Jun 20 00:12:27 fv-az72-309 dnsmasq[11724]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses | |
Jun 20 00:12:27 fv-az72-309 dnsmasq-dhcp[11724]: read /var/lib/libvirt/dnsmasq/default.hostsfile | |
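[editor's note] At this point libvirt's default NAT network is up: dnsmasq is serving DHCP on virbr0 for 192.168.122.2-254, as logged just above. A minimal sketch of how that state could be checked on the runner (virsh and the network name "default" are standard libvirt defaults; these commands are not shown being run in this log):
# Confirm the default libvirt network is active and inspect its DHCP range
virsh net-list --all
virsh net-dumpxml default
# Show the bridge and the placeholder NIC created for it
ip -br link show virbr0 virbr0-nic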
Jun 20 00:12:28 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:28 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:28 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:28 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:28 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:28 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:29 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: Reloading. | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network-online.target ignored (device units cannot be delayed). | |
Jun 20 00:12:30 fv-az72-309 systemd[1]: dev-disk-cloud-azure_resource\x2dpart1.device: Requested dependency After=network.target ignored (device units cannot be delayed). | |
Jun 20 00:12:31 fv-az72-309 systemd[1]: Reached target Libvirt guests shutdown. | |
Jun 20 00:12:31 fv-az72-309 systemd[1]: Starting Suspend/Resume Running libvirt Guests... | |
Jun 20 00:12:31 fv-az72-309 systemd[1]: Listening on Virtual machine lock manager admin socket. | |
Jun 20 00:12:31 fv-az72-309 systemd[1]: Listening on Virtual machine log manager socket. | |
Jun 20 00:12:31 fv-az72-309 systemd[1]: Finished Suspend/Resume Running libvirt Guests. | |
Jun 20 00:12:36 fv-az72-309 dbus-daemon[658]: [system] Reloaded configuration | |
Jun 20 00:12:39 fv-az72-309 sudo[9090]: pam_unix(sudo:session): session closed for user root | |
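[editor's note] The sudo session closed above (PID 9090) is the workflow step that installed the virtualization stack; the COMMAND field at 00:12:02 records the exact invocation. A minimal sketch of that step on the Ubuntu runner, with the package list copied verbatim from the log (the libvirt users, groups, virbr0 and dnsmasq activity that follows in the journal is produced by the packages' maintainer scripts):
# Install QEMU/KVM and libvirt, as logged at 00:12:02
sudo apt install -y qemu qemu-kvm libvirt-daemon libvirt-clients \
    bridge-utils virt-manager libvirt-daemon-system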
Jun 20 00:12:39 fv-az72-309 sudo[12639]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/systemctl restart libvirtd | |
Jun 20 00:12:39 fv-az72-309 sudo[12639]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: Stopping Virtualization daemon... | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: libvirtd.service: Succeeded. | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: Stopped Virtualization daemon. | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: libvirtd.service: Found left-over process 11724 (dnsmasq) in control group while starting unit. Ignoring. | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies. | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: libvirtd.service: Found left-over process 11725 (dnsmasq) in control group while starting unit. Ignoring. | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies. | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: Starting Virtualization daemon... | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: Started Virtualization daemon. | |
Jun 20 00:12:39 fv-az72-309 sudo[12639]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:12:39 fv-az72-309 sudo[12668]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/dd of=/etc/apparmor.d/usr.sbin.libvirtd | |
Jun 20 00:12:39 fv-az72-309 sudo[12668]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:12:39 fv-az72-309 sudo[12668]: pam_unix(sudo:session): session closed for user root | |
Jun 20 00:12:39 fv-az72-309 sudo[12672]: runner : TTY=unknown ; PWD=/home/runner/work/molecule-kubevirt/molecule-kubevirt ; USER=root ; COMMAND=/usr/bin/systemctl reload apparmor.service | |
Jun 20 00:12:39 fv-az72-309 sudo[12672]: pam_unix(sudo:session): session opened for user root by (uid=0) | |
Jun 20 00:12:39 fv-az72-309 systemd[1]: Reloading Load AppArmor profiles. | |
Jun 20 00:12:39 fv-az72-309 apparmor.systemd[12676]: Restarting AppArmor | |
Jun 20 00:12:39 fv-az72-309 apparmor.systemd[12676]: Reloading AppArmor profiles | |
Jun 20 00:12:39 fv-az72-309 audit[12686]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=12686 comm="apparmor_parser" | |
Jun 20 00:12:39 fv-az72-309 audit[12686]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=12686 comm="apparmor_parser" | |
Jun 20 00:12:39 fv-az72-309 audit[12686]: AVC apparmor="S |
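[editor's note] The tail of the log (cut off mid audit line) shows the workflow restarting libvirtd, overwriting its AppArmor profile via dd, and reloading apparmor.service so the replacement is parsed. A minimal sketch of those three commands as logged; "libvirtd.apparmor" is a hypothetical local file standing in for whatever content the workflow actually piped into dd, which is not captured here:
# Restart the freshly installed virtualization daemon (COMMAND at 00:12:39)
sudo systemctl restart libvirtd
# Replace the libvirtd AppArmor profile; the profile body is NOT in this log,
# "libvirtd.apparmor" is a hypothetical stand-in for it
sudo dd of=/etc/apparmor.d/usr.sbin.libvirtd < libvirtd.apparmor
# Reload AppArmor so the replaced profile takes effect (profile_replace audits above)
sudo systemctl reload apparmor.service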