When setting up your network, you may find yourself continuously asking "Why?".
"Why?", you will ask, "am I not taking advantage of the modern CPU virtualization technology of Intel and AMD processors, known as Intel VT and AMD-V?".
It is a question many of us face at some point in our lives, as commonplace as "Did I add that 2nd dryer sheet to the laundry?" or, "Where does the evolutionary progression of mankind stand in relation to the macrocosm?"
The good news is that setting up a virtual machine is much simpler than you think! That's because this "modern" technology has actually been around for over a decade, there are just many out there who have not yet de-hermitted from their physical realm. And that's ok too.
The Kernel-based Virtual Machine (hereafter KVM) is the infrastructure used to turn the Linux kernel into a hypervisor, also called a VMM (Virtual Machine Manager), which manages the virtual machines.
Think of the main (physical) computer as a host machine, and each virtual instance running on it as a guest machine. KVM itself does not perform any emulation. Instead, it exposes the /dev/kvm interface, which a userspace program uses to bootstrap guests, map the guest display back to the host, and feed simulated I/O to the guest.
There are other reasons why KVM is not for everyone. It's dirtier to work with than the intuitive VMware or VirtualBox products, so it's not necessarily the best fit for a casual nerd looking to make their video games run faster. In terms of GUI management, KVM is not very cuddly; for the bulk of your configuration, KVM functions through the colorless void that is command-line scripting. And while not as pretty, this allows for better raw efficiency and more control.
There are also pseudo, container-style VMs like OpenVZ, which are more an extension of the host network than true virtualization. In terms of CPU performance, OpenVZ is typically faster: while it is reliant on the performance of the host node's kernel, it has less overhead to deal with.
So on that note, let's begin in earnest.
To get started, you first want to make sure your host computer has a virtualization-compatible CPU. This can be checked thusly:
$ egrep -c '(vmx|svm)' --color=always /proc/cpuinfo
If you get back results with vmx, you have an Intel processor; results with svm mean an AMD processor. If the count comes back 0, then your processor is not built for hardware-assisted full virtualization. The Xen approach, used in the CentOS 5 series, supports paravirtualization.
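The egrep above just counts matching lines; if you want a friendlier readout, a small helper along these lines works (a sketch; check_virt is a name invented for this example):

```shell
# check_virt: report which hardware virtualization extension appears in a
# CPU flags string ("vmx" means Intel VT-x, "svm" means AMD-V).
check_virt() {
  case "$1" in
    *vmx*) echo "Intel VT-x" ;;
    *svm*) echo "AMD-V" ;;
    *)     echo "none" ;;
  esac
}

# Run it against the first flags line of the live CPU:
check_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
```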
If you are running a modern CPU, chances are the virtualization features are enabled by default, but on some Intel processors, Intel VT-x may be disabled via the BIOS or UEFI firmware settings, which perform hardware initialization during the boot process.
If that is the case, follow this guide to fix it. Be sure to cold power-cycle the machine after enabling it.
CentOS 6 has native KVM support in the base distro. You can check the relevant meta packages with:
$ yum grouplist | grep -i virt
Install any packages you might need, in this case:
$ yum -y install @virt* dejavu-lgc-* xorg-x11-xauth tigervnc \
libguestfs-tools policycoreutils-python bridge-utils
Now allow packet forwarding between interfaces.
$ sed -i 's/^\(net.ipv4.ip_forward =\).*/\1 1/' /etc/sysctl.conf; sysctl -p
This should return:
#] net.ipv4.ip_forward = 1
#] net.ipv4.tcp_syncookies = 1
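To see what that sed one-liner is doing, here it is applied to a sample line instead of the live file (the capture group \1 re-emits the "net.ipv4.ip_forward =" key, and everything after it is replaced with 1):

```shell
# Demonstrate the substitution on a sample line:
echo 'net.ipv4.ip_forward = 0' | sed 's/^\(net.ipv4.ip_forward =\).*/\1 1/'
# → net.ipv4.ip_forward = 1
```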
You now have the option to either configure NAT-based connectivity (routing) or set up bridging:
Routing:
libvirtd is the daemon that manages the hypervisor.
Configure the libvirtd service to start automatically, then reboot:
$ chkconfig libvirtd on; shutdown -r now
Bridging:
Optionally, you can set up bridging, which gives guests a network adapter on the same physical LAN as the host. In this example, eth0 is the device supporting the bridge and br0 is the new bridge device.
$ chkconfig network on
$ service network restart
$ yum -y erase NetworkManager
$ cp -p /etc/sysconfig/network-scripts/ifcfg-{eth0,br0}
$ sed -i -e'/HWADDR/d' -e'/UUID/d' -e's/eth0/br0/' -e's/Ethernet/Bridge/' \
  /etc/sysconfig/network-scripts/ifcfg-br0
$ echo DELAY=0 >> /etc/sysconfig/network-scripts/ifcfg-br0
$ echo 'BOOTPROTO="none"' >> /etc/sysconfig/network-scripts/ifcfg-eth0
$ echo BRIDGE=br0 >> /etc/sysconfig/network-scripts/ifcfg-eth0
$ service network restart
$ brctl show
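After those edits, the two config files should end up looking roughly like this (a sketch; BOOTPROTO and the address settings depend on how your network is configured, and any lines not touched by the commands above are left as they were):

```
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
DELAY=0

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=...        # unchanged from the original file
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO="none"
BRIDGE=br0
```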
The host is now ready to start creating KVM guests!
Because a number of values are needed to create a guest, it's convenient to collect them in shell variables as we go along. The first step is reviewing the OS variants:
$ virt-install --os-variant=list | more
Select one of the OS options:
$ OS="--os-variant=freebsd8"
$ OS="--os-variant=win7"
$ OS="--os-variant=win7 --disk path=/var/lib/libvirt/iso/virtio-win.iso,device=cdrom"
$ OS="--os-variant=win2k8"
$ OS="--os-variant=win2k8 --disk path=/var/lib/libvirt/iso/virtio-win.iso,device=cdrom"
$ OS="--os-variant=rhel6"
Select a network option, replacing the MAC address if needed:
$ Net="--network bridge=br0"
$ Net="--network model=virtio,bridge=br0"
$ Net="--network model=virtio,mac=52:54:00:00:00:00"
$ Net="--network model=virtio,bridge=br0,mac=52:54:00:00:00:00"
52:54:00 is the default MAC address prefix for QEMU/KVM guests
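If you'd rather not hand-pick the last three octets, you can generate a random MAC under that prefix (a sketch; gen_mac is a helper name invented here):

```shell
# gen_mac: build a random MAC address under the QEMU/KVM 52:54:00 prefix.
# od pulls three random bytes from /dev/urandom as decimal values, and
# printf formats each one as a two-digit hex octet.
gen_mac() {
  printf '52:54:00:%02x:%02x:%02x\n' $(od -An -N3 -tu1 /dev/urandom)
}

Net="--network model=virtio,bridge=br0,mac=$(gen_mac)"
echo "$Net"
```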
Select a disk option, replacing the filename and size with desired values:
$ Disk="--disk /vm/Name.img,size=8"
$ Disk="--disk /var/lib/libvirt/images/Name.img,size=8"
$ Disk="--disk /var/lib/libvirt/images/Name.img,sparse=false,size=8"
$ Disk="--disk /var/lib/libvirt/images/Name.qcow2,sparse=false,bus=virtio,size=8"
$ Disk="--disk vol=pool/volume"
$ Disk="--livecd --nodisks"
$ Disk="--disk /dev/mapper/vg_..."
Select a source (live CD ISO, PXE, or URL):
$ Src="--cdrom=/var/lib/libvirt/iso/iso/..."
$ Src="--pxe"
$ Src="-l http://alt.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/"
$ Src="-l http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/"
$ Src="-l http://ftp.us.debian.org/debian/dists/stable/main/installer-amd64/"
$ Src="-l http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/"
$ Src="-l http://download.opensuse.org/distribution/openSUSE-stable/repo/oss/"
$ Src="--location=http://mirror.centos.org/centos/6/os/x86_64"
Select the number of cpus:
$ Cpu="--vcpus=1"
$ Cpu="--vcpus=2"
$ Cpu="--vcpus=4"
Select the amount of ram:
$ Ram="--ram=768"
$ Ram="--ram=1024"
$ Ram="--ram=2048"
Choose a name for the guest:
$ Name="myguest"
Optional arguments
Add a URL for a kickstart file:
$ KS=""
$ KS="-x ks=http://ks.example.com/kickstart/c6-64.ks"
Select a graphics option:
$ Gr=""
$ Gr="--graphics none"
$ Gr="--graphics vnc"
$ Gr="--graphics vnc,password=foo"
$ Gr="--graphics spice"
Now it's finally time to create the guest :D
$ virt-install $OS $Net $KS $Disk $Src $Gr $Cpu $Ram --name=$Name
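Before committing, it can help to echo the fully expanded command line and eyeball it. Here is a sketch with one sample selection per variable (values chosen arbitrarily from the menus above for illustration):

```shell
# Sample selections -- substitute your own from the menus above.
OS="--os-variant=rhel6"
Net="--network model=virtio,bridge=br0"
Disk="--disk /var/lib/libvirt/images/myguest.img,size=8"
Src="--location=http://mirror.centos.org/centos/6/os/x86_64"
Cpu="--vcpus=2"
Ram="--ram=1024"
KS=""
Gr="--graphics vnc"
Name="myguest"

# Echo instead of executing, to review the assembled command line:
echo virt-install $OS $Net $KS $Disk $Src $Gr $Cpu $Ram --name=$Name
```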
ERROR Error in network device parameters: Unknown network type None
-- You may get this error due to the default network being inactive. Check it by running
$ sudo virsh net-list --all
You should see something like:
Name State Autostart Persistent
--------------------------------------------------
default inactive yes yes
On the off chance this lists nothing, define it and set it to autostart:
$ virsh net-define /usr/share/libvirt/networks/default.xml
$ virsh net-autostart default
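For reference, the default.xml being defined there looks roughly like the following (paraphrased; exact attributes can vary between libvirt versions). It sets up the NAT-forwarded virbr0 bridge with a DHCP range for guests:

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```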
Then try starting it up
$ sudo virsh net-start default
ERROR Unable to create bridge virbr0: Package not installed
As root, check the debug log and see what might be missing.
$ LIBVIRT_DEBUG=1 libvirtd
In my case it was an unexpectedly terminated libvirtd instance that had left behind a stale pid file.
#] error : virPidFileAcquirePath:410 : Failed to acquire pid file '/var/run/libvirtd.pid': Resource temporarily unavailable
Another potential cause of this issue is having another instance of libvirtd running. You can check with a trusty pgrep:
$ pgrep libvirtd
If two processes are running, kill the one that is not linked to the pid file with $ kill -9 <pid>
and restart libvirtd
ERROR Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
This may be because libvirtd is not fully set up. Check:
$ service libvirtd status
Then restart (or start) libvirtd, using either the service command or the init script:
$ service libvirtd restart
$ /etc/init.d/libvirtd restart
Now connect to the console:
$ virt-viewer --connect qemu+ssh://myhost/system $Name
And finally, you can set this guest up to start automatically whenever the host is booted:
$ virsh autostart $Name