The following options that aren't in the kvm brand should work:
- com1, com2 - Can be set to tty-like devices or socket,/some/path. If both are unset, com1 defaults to /dev/zconsole and com2 defaults to /tmp/vm.ttyb.
- bootrom - Should be set to /usr/share/bhyve/BHYVE_UEFI.fd or /usr/share/bhyve/BHYVE_UEFI_CSM.fd. Defaults to /usr/share/bhyve/BHYVE_UEFI_CSM.fd.
- bhyve-opts - Any extra options that should be passed to bhyve (see the zonecfg sketch just after this list).
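It is an assumption here that these options end up as string attrs on the zone, the way bootrom does in the troubleshooting output further down this page; if that holds, setting them by hand would look roughly like the sketch below. The values themselves (the socket path and the extra -s slot) are only illustrative.

# Hypothetical sketch; the socket path and the -s option value are examples only.
# zonecfg -z $u1
zonecfg:$u1> add attr
zonecfg:$u1:attr> set name=com1
zonecfg:$u1:attr> set type=string
zonecfg:$u1:attr> set value=socket,/tmp/vm.com1
zonecfg:$u1:attr> end
zonecfg:$u1> add attr
zonecfg:$u1:attr> set name=bhyve-opts
zonecfg:$u1:attr> set type=string
zonecfg:$u1:attr> set value="-s 8,virtio-rnd"
zonecfg:$u1:attr> end
zonecfg:$u1> commit
zonecfg:$u1> exit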
Other things that are different:
- bhyve only supports 16 vcpus.
- There is no automatic configuration of networking unless you are using an image with a couple of fixes to cloud-init. Details on one such image are found below. If not using the fixed cloud-init, either configure networking statically in the guest or use an external DHCP server.
- VNC is not yet supported. You must use a text console. zlogin -C is your friend (see the example after this list). vmadm console works too, but ^] seems not to work.
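For reference, attaching to and detaching from the console looks like this. The alias is hypothetical, and it is assumed that zlogin's standard ~. escape applies to bhyve zones the same way it does elsewhere.

# Attach to the guest's serial console; the zone must be running.
uuid=$(vmadm lookup alias=bhyve0)
zlogin -C $uuid
# Detach with zlogin's default escape sequence: ~. at the start of a line.

# vmadm console also works, but ^] seems not to work (see above).
vmadm console $uuid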
Code is in the dev-bhyve branches of illumos-joyent and smartos-live.
Pre-built platform images can be found at /mgerdts/public/bhyve-20180208.
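Assuming that path is a public Manta directory on us-east.manta.joyent.com (the same endpoint the image URLs below use), you can see what is in it with something like:

# List the pre-built platform bits; a GET on a public Manta directory
# returns one JSON object per entry, one per line.
curl -s https://us-east.manta.joyent.com/mgerdts/public/bhyve-20180208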
You can install a CentOS 7 image that should work well with bhyve by running:
img=462d1d03-8457-e134-a408-cf9ea2b9be96
url=https://us-east.manta.joyent.com/mgerdts/public/bhyve/images/$img
for file in manifest.json disk0.zfs.gz; do
curl -o $file $url/$file
done
imgadm install -m manifest.json -f disk0.zfs.gz
rm manifest.json disk0.zfs.gz
Image creation process below
You can install an Ubuntu 17.10 image that should work well with bhyve by running:
img=38396fc7-2472-416b-e61b-d833b32bd088
url=https://us-east.manta.joyent.com/mgerdts/public/bhyve/images/$img
for file in manifest.json disk1.zfs.gz; do
curl -o $file $url/$file
done
imgadm install -m manifest.json -f disk1.zfs.gz
rm manifest.json disk1.zfs.gz
Image creation process below
See b6.json below for an example of a file to use with vmadm install
img=209ec332-16e1-e47b-8e94-b2c57ec497e7
url=https://us-east.manta.joyent.com/mgerdts/public/bhyve/images/$img
for file in manifest.json disk0.zfs.gz; do
curl -o $file $url/$file
done
imgadm install -m manifest.json -f disk0.zfs.gz
rm manifest.json disk0.zfs.gz
Image creation process below
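The real b6.json appears further down this page. Purely as a hypothetical sketch of the general shape such a payload might take, assuming the bhyve brand reuses the familiar kvm payload properties (none of the property names below are confirmed against the dev-bhyve code):

# Hypothetical payload sketch -- not the author's b6.json.
# Property names follow kvm payload conventions; adjust to taste.
cat > b6.json <<'EOF'
{
  "alias": "bhyve0",
  "brand": "bhyve",
  "ram": 1024,
  "vcpus": 2,
  "disks": [
    {
      "image_uuid": "209ec332-16e1-e47b-8e94-b2c57ec497e7",
      "boot": true,
      "model": "virtio"
    }
  ],
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "dhcp",
      "model": "virtio"
    }
  ]
}
EOF
vmadm create -f b6.json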
- Most images won't configure networking automatically. See above.
- VNC not yet supported
- Larger memory allocations don't seem to set resource caps high enough to account for overhead. You may need to manually increase (or remove) values in the capped-memory resource with zonecfg.
See /zones/$uuid/root/tmp/zhyve.log.
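For example (the alias lookup is just one convenient way to get the zone UUID; the alias itself is hypothetical):

# Follow the in-zone bhyve log for a guest looked up by alias.
uuid=$(vmadm lookup alias=bhyve0)
tail -f /zones/$uuid/root/tmp/zhyve.log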
Have you run a kvm instance since your last reboot? To verify that you haven't, be sure you don't see this:
echo hvm_excl_holder::print | mdb -k
0xfffffffff83b6185 "SmartOS KVM"
Currently vmadm does not calculate the memory overhead required for bhyve properly. Work around with:
# zonecfg -z $u1
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5> select capped-memory
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5:capped-memory> info
capped-memory:
	[physical: 33G]
	[swap: 33G]
	[locked: 33G]
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5:capped-memory> set physical=64g
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5:capped-memory> set swap=64g
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5:capped-memory> set locked=64g
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5:capped-memory> end
zonecfg:1d5b8e7c-c004-4f69-bf0c-c98918e35bd5> exit
bhyve is hard-coded to support at most 16 vcpus. Change your configuration.
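Something like the following should bring an over-provisioned guest back under the limit; it is an assumption that vmadm update handles vcpus for the bhyve brand the same way it does for kvm.

# Assumed workaround: cap vcpus at 16 or fewer, then restart the guest
# so the new value takes effect.
vmadm update $uuid vcpus=16
vmadm reboot $uuid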
This has been seen when no bootrom has been specified. Perhaps you converted a kvm zone to a bhyve zone and forgot to add this:
# zonecfg -z $u1 info attr name=bootrom
attr:
	name: bootrom
	type: string
	value: /usr/share/bhyve/BHYVE_UEFI_CSM.fd
The example above is for BIOS support. For UEFI, drop the _CSM.
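If the attr is indeed missing, adding it by hand should look roughly like this sketch, based on the info output above (use BHYVE_UEFI.fd instead if you want pure UEFI):

# zonecfg -z $u1
zonecfg:$u1> add attr
zonecfg:$u1:attr> set name=bootrom
zonecfg:$u1:attr> set type=string
zonecfg:$u1:attr> set value=/usr/share/bhyve/BHYVE_UEFI_CSM.fd
zonecfg:$u1:attr> end
zonecfg:$u1> commit
zonecfg:$u1> exit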
Inside the zone, there's one process (zhyve) started by the kernel. The easiest way to debug the process is to intercept the first system call with dtrace, then attach truss or mdb to the process.
Verbosely, with truss:
#! /usr/sbin/dtrace -ws
/*
* Watch for the first system call performed by the zhyve program and
* then trace that program with truss.
*/
syscall:::entry
/execname == "zhyve"/
{
/* Stop the program */
stop();
/* Use truss to start tracing the program. This causes the program to continue. */
system("truss -t all -v all -w all -p %d", pid);
/* Tell dtrace to exit once truss starts */
exit(0);
}
Or a one-liner with mdb:
# dtrace -wn 'syscall:::entry / execname == "zhyve" / {stop(); system("mdb -p %d", pid); exit(0);}'
dtrace: description 'syscall:::entry ' matched 235 probes
dtrace: allowing destructive actions
CPU ID FUNCTION:NAME
0 795 systeminfo:entry Loading modules: [ ld.so.1 ]
> $C
ffffbf7fffdff970 ld.so.1`sysinfo+0xa()
ffffbf7fffdffcb0 ld.so.1`setup+0xebd(ffffbf7fffdffe78, ffffbf7fffdffe80, 0, ffffbf7fffdfffd0, 1000, ffffbf7fef3a99c8)
ffffbf7fffdffdd0 ld.so.1`_setup+0x282(ffffbf7fffdffde0, 190)
ffffbf7fffdffe60 ld.so.1`_rt_boot+0x6c()
0000000000000001 0xffffbf7fffdfffc0()
Notice that this catches a system call that happens before main starts.
Hello!
I would like to ask how to set up cloud-init on SmartOS under the bhyve brand with a custom ISO installation, i.e. I'm not using the official Joyent image. What I want to do is configure the network with cloud-init in a bhyve VM. I have installed cloud-init, all related services are enabled, and I have also set up the serial console in grub, etc. It seems to be working, because the VM UUID is created under '/var/lib/cloud/instances' and I can see the IP settings I specified in the metadata on the SmartOS host in the 'obj.pkl' file, but for some reason it doesn't apply them. The datasource_list [ SmartOS ] is, of course, set. I checked the official Joyent image and applied the same settings that are in the '/etc/cloud' folder, and the kernel parameters in grub are the same, yet it doesn't work for me.
Any ideas?
Thanks!