archerslaw / Glusterfs quick start in QEMU.
Last active August 29, 2015 14:00
1.GlusterFS is an open source, distributed file system capable of scaling to several petabytes (actually, 72 brontobytes!) and handling thousands of clients. GlusterFS clusters together storage building blocks over InfiniBand RDMA or TCP/IP interconnect, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. The goal of this translator is to use logical volumes to store VM images and expose them as files to QEMU/KVM.
2.Quick start.
1).Install the gluster server package (glusterfs-server):
https://brewweb.devel.redhat.com/buildinfo?buildID=350720
2).Create brick.
# mkdir -p /home/brick1
3).Start gluster service.
# /bin/systemctl start glusterd.service
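The preview stops after starting glusterd. A hedged sketch of the remaining steps (the volume name test-vol, the host placeholder $gluster_host and the image name are assumptions, not from the gist): create and start a volume on the brick, then point QEMU at the image through the gluster:// URI.
# gluster volume create test-vol $gluster_host:/home/brick1 force
# gluster volume start test-vol
# qemu-img create -f qcow2 gluster://$gluster_host/test-vol/rhel7.qcow2 10G
e.g:...-drive file=gluster://$gluster_host/test-vol/rhel7.qcow2,if=none,id=drive-gluster1 -device virtio-blk-pci,drive=drive-gluster1,id=gluster-disk1
(force is only needed when gluster warns that the brick is on the root filesystem.)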
archerslaw / virtio-nic multi-queue support - qemu-kvm
Created April 21, 2014 06:11
1.Boot the guest with a multi-queue (queues=4) virtio-net NIC.
e.g:...-device virtio-net-pci,netdev=dev1,mac=9a:e8:e9:ea:eb:ec,id=net1,vectors=9,mq=on
-netdev tap,id=dev1,vhost=on,script=/etc/qemu-ifup-switch,queues=4
2.Use ethtool -L to enable multi-queue in the guest:
[Guest] # ethtool -L eth0 combined 4
3.Use ethtool -l eth0 to see the channel parameters of the interface:
# ethtool -l eth0
Pre-set maximums:
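(the gist preview cuts the output off here; for a 4-queue virtio-net NIC the rest typically looks like the following, with illustrative values)
RX:             0
TX:             0
Other:          0
Combined:       4
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       4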
archerslaw / nbd storage backend in qemu.
Last active June 16, 2024 13:26
Network Block Device (NBD):
In Linux, a network block device is a device node whose content is provided by a remote machine. Typically, network block devices are used to access a storage device that does not physically reside in the local machine but on a remote one. As an example, the local machine can access a fixed disk that is attached to another computer.
1.Start nbd-server to export a qcow2 image (with its absolute path) on the NBD server host.
# nbd-server 12345 /home/my-data-disk.qcow2
2.Check the exported image from the client side, then launch a KVM guest with it as a data disk.
# qemu-img info nbd:10.66.83.171:12345
image:
file format: qcow2
virtual size: 10G (10737418240 bytes)
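A hedged example of then attaching the export to the guest as a data disk (the drive and device ids are made up for illustration; the format stays qcow2 because nbd-server exports the image file itself):
e.g:...-drive file=nbd:10.66.83.171:12345,if=none,format=qcow2,id=drive-nbd1 -device virtio-blk-pci,drive=drive-nbd1,id=nbd-disk1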
archerslaw / storage_vm_migration_nbd.
Created April 25, 2014 05:57
Note: Regardless of whether the image file on the destination host is qcow2 or raw, the format in the mirroring command should be raw; after the storage VM migration, check that the image file on the destination has not changed format.
1.Boot qemu-kvm on the destination host with an empty disk (do this twice, once with raw and once with qcow2), with "-incoming tcp:0:$port_cli,server,nowait" and a QMP connection.
2.On the destination host, start the NBD server and export the empty disk:
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "nbd-server-start", "arguments": { "addr": { "type": "inet", "data": { "host": "$dest_ip_addr", "port": "$port" } } } }
{"return": {}}
{ "execute": "nbd-server-add", "arguments": { "device": "drive-virtio-disk0", "writable": true } }
{"return": {}}
archerslaw / Min&max vCPUs and Min&max Memory supported by guests.
Created April 28, 2014 07:02
RHEL guests:
====================================
Guest           max mem     max vCPUs
------------------------------------
RHEL3.9-32      16G         16
RHEL4.9-32      16G         16
RHEL4.9-64      256G        16
------------------------------------
archerslaw / create disk image via qemu-img.
Created April 28, 2014 07:03
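# create 1023 1G qcow2 images under /home/disk/ (used as data disks by the hotplug test below)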
i=1
while [ $i -lt 1024 ]
do
qemu-img create -f qcow2 /home/disk/disk$i.qcow2 1G
i=$(($i+1))
done
archerslaw / hotplug 1024 scsi-hd disk to one virtio-scsi-pci controller.
Created April 28, 2014 07:04
#...-monitor unix:/tmp/monitor2,server,nowait -device virtio-scsi-pci,id=bus1
i=0
j=0
while [ $i -lt 1024 ]
do
j=$((i%255))
sleep $((1+i/1000))
echo "drive_add localhost file=/home/disk/disk$i.qcow2,format=qcow2,media=disk,id=scsi$i,if=none" | nc -U /tmp/monitor2
echo "device_add scsi-hd,bus=bus1.0,drive=scsi$i,scsi-id=$j,id=hd$i" |nc -U /tmp/monitor2
i=$(($i+1))
done
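A quick guest-side sanity check once the loop finishes (not part of the original gist):
[Guest] # lsblk -d | grep -c disk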
archerslaw / 1 LUN, 256 targets, 1 virtio-scsi controller.
Created April 28, 2014 09:15
# cat cli-target
cli="/usr/libexec/qemu-kvm -M pc -m 24G -smp 12 -cpu SandyBridge -vnc :1 -monitor stdio -boot menu=on -monitor unix:/tmp/monitor,server,nowait -drive file=/root/RHEL7.0.qcow2,if=none,id=blk1 -device virtio-blk-pci,scsi=off,drive=blk1,id=blk-disk1,bootindex=1 -netdev tap,id=netdev1,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=netdev1,mac=02:03:04:05:06:00,id=net-pci1"
cli="$cli -device virtio-scsi-pci,id=scsi0"
count=$((${1:-1}-1))
for i in $(seq 0 $count)
do
echo $i
cli="$cli -drive file=/home/disk/disk$i,if=none,id=disk$i"
cli="$cli -device scsi-hd,bus=scsi0.0,drive=disk$i,id=target$i,scsi-id=$i,lun=0"
done
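The preview ends before the assembled command line is executed; a minimal completion (assumed, not shown in the gist preview) and an example invocation with 256 targets:
echo $cli
eval $cli
# sh cli-target 256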
archerslaw / random test with 1024 disks assigned to random controllers, targets, LUNs.
Created April 28, 2014 09:23
i=1;while [ $i -lt 1300 ]; do qemu-img create -f qcow2 /home/disk/disk$i 1G;i=$(($i+1));done
ulimit -n 10240
cli="/usr/libexec/qemu-kvm -M pc -m 24G -smp 12 -cpu SandyBridge -vnc :1 -monitor stdio -boot menu=on -monitor unix:/tmp/monitor,server,nowait -drive file=/root/RHEL7.0.qcow2,if=none,id=blk1 -device virtio-blk-pci,scsi=off,drive=blk1,id=blk-disk1,bootindex=1 -netdev tap,id=netdev1,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=netdev1,mac=02:03:04:05:06:00,id=net-pci1"
cli="$cli -device virtio-scsi-pci,id=scsi0"
cli="$cli -device virtio-scsi-pci,id=scsi1"
count=$((${1:-1}-1))
for i in $(seq 0 $count)
do
j=$((2*$i))
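The preview truncates the loop body here. A hedged sketch of one way to finish it (an assumption, not the original script, which presumably randomizes placement, e.g. with $RANDOM): spread the 1024 disks across the two controllers, 256 targets and 2 LUNs so that no scsi-id/lun pair repeats, then launch the assembled command line.
bus=$((i%2))
target=$((i/2%256))
lun=$((i/512))
cli="$cli -drive file=/home/disk/disk$i,if=none,id=disk$i"
cli="$cli -device scsi-hd,bus=scsi$bus.0,drive=disk$i,id=hd$i,scsi-id=$target,lun=$lun"
done
eval $cli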
archerslaw / virtio-scsi multipath testing.
Created April 29, 2014 05:13
NOTE: only raw images and pass-through SCSI disks support this. The scsi_id command, formerly provided by udev, is now part of systemd and is installed by default (e.g. /usr/lib/udev/scsi_id ...).
# yum install device-mapper.x86_64 device-mapper-multipath.x86_64
# modprobe dm-multipath dm-round-robin
# service multipathd start
Starting multipathd daemon: [ OK ]
# chkconfig multipathd on //ensure that the multipath daemon starts on bootup
# /sbin/mpathconf --enable
# ls -l /etc/multipath.conf
-rw-------. 1 root root 2754 Jan 9 18:55 /etc/multipath.conf
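The preview stops at the host-side multipath setup. A hedged sketch of one way to present two paths to the same pass-through SCSI disk to the guest (the host device /dev/sdb and all controller/drive/device ids are assumptions; on newer QEMU, image locking additionally requires share-rw=on on the devices): both paths report the same scsi_id, so multipathd in the guest should collapse them into a single mpath device.
e.g:...-device virtio-scsi-pci,id=mp-scsi0 -device virtio-scsi-pci,id=mp-scsi1
-drive file=/dev/sdb,if=none,format=raw,id=path0 -device scsi-block,bus=mp-scsi0.0,drive=path0,id=mp-disk0
-drive file=/dev/sdb,if=none,format=raw,id=path1 -device scsi-block,bus=mp-scsi1.0,drive=path1,id=mp-disk1
[Guest] # multipath -ll    //both paths should show up under one mpath device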