Vagrant environment for Ceph cluster

How to prepare dev/test Ceph environment

When you run Ceph in production, it is important to have an environment where you can test upcoming upgrades, configuration changes, integration of new clusters, or any other significant change without touching the real production clusters. Such an environment can easily be built with Vagrant, a tool that can very quickly create a virtualized environment described in one relatively simple config file.

We are using Vagrant on Linux with the libvirt and hostmanager plugins. Libvirt is a toolkit for managing Linux KVM VMs. Vagrant can also create virtualized networks to interconnect those VMs, as well as storage devices, so you can have an almost identical copy of your production cluster if you need it.

Let's create a small Ceph cluster: three nodes that run the control daemons and also act as OSD nodes (2 x 10 GB disks on each node by default), plus one client node. The client node can be used for testing access to the cluster services: mapping RBD images, mounting CephFS filesystems, accessing RGW buckets, or whatever you like. The host machine running the virtualized environment can be any Linux machine (Ubuntu 22.04 in our case) with KVM virtualization enabled.

user@hostmachine:~/$ kvm-ok 
INFO: /dev/kvm exists
KVM acceleration can be used
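
If the kvm-ok tool is missing, it is provided by the cpu-checker package on Ubuntu:

sudo apt-get install cpu-checker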

Install required packages:

sudo apt-get install qemu libvirt-daemon-system libvirt-clients ebtables dnsmasq-base
sudo apt-get install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
sudo apt-get install libguestfs-tools
sudo apt-get install build-essential
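
Depending on the host setup, Vagrant may also need your user in the libvirt group to talk to the system libvirt daemon without root (group name as on Ubuntu; log out and back in afterwards):

sudo usermod -aG libvirt $USER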

Install Vagrant according to the steps on the official installation page: https://developer.hashicorp.com/vagrant/downloads
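
A quick check that the binary ended up on your PATH (the printed version will depend on the release you installed):

vagrant --version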

Then we need to install the Vagrant plugins:

vagrant plugin install vagrant-libvirt vagrant-hostmanager 
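
You can verify that both plugins were registered (the listed versions will differ):

vagrant plugin list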

If there is no SSH keypair in ~/.ssh, generate one. This keypair will be injected into the VMs, because cephadm, which we will use for the Ceph deployment, needs SSH connectivity between the VMs, and this keypair will be used for SSH authentication between the nodes.

ssh-keygen
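
The Vagrantfile below copies ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub into the VMs and passes them to cephadm bootstrap, so the key has to live at that default path. A non-interactive variant (RSA type and empty passphrase assumed here for convenience):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa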

Now we should be ready to start the virtual environment on the host machine.

mkdir ceph-vagrant; cd ceph-vagrant
wget https://gist.githubusercontent.com/kmadac/171a5b84a6b64700f163c716f5028f90/raw/1cd844197c3b765571e77c58c98759db77db7a75/Vagrantfile
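
The Vagrantfile reads an optional vagrant.config.json placed next to it (or the NODES, CLIENTS and DISKS environment variables) to override the defaults of 3 nodes, 1 client and 2 disks per node. A minimal example that just restates the defaults:

{
  "nodes": 3,
  "clients": 1,
  "disks": 2
}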

vagrant up
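
Once vagrant up returns, you can list the machines it created (the names are built from CLUSTER_ID, NODES and CLIENTS in the Vagrantfile):

vagrant status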

When vagrant up finishes without errors, Ceph will keep installing in the background for a couple more minutes. You can check the deployment progress by accessing the Ceph shell on node0:

vagrant ssh ceph1-node0
vagrant@ceph1-node0:~$ sudo cephadm shell
root@ceph1-node0:/# ceph -W cephadm --watch-debug

At the end you should get a healthy Ceph cluster with 3 MON daemons and 6 OSD daemons:

root@ceph1-node0:/# ceph -s
  cluster:
    id:     774c4454-7d1e-11ed-91a2-279e3b86d070
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1-node0,ceph1-node1,ceph1-node2 (age 13m)
    mgr: ceph1-node0.yxrsrj(active, since 21m), standbys: ceph1-node1.oqrkhf
    osd: 6 osds: 6 up (since 12m), 6 in (since 13m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   33 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean

Now your cluster is up and running and you can install additional services like CephFS or RGW, play with adding/removing nodes, or upgrade to the next release. By changing the CLUSTER_ID variable in the Vagrantfile and copying the Vagrantfile to another directory, you can deploy a second cluster and try to set up replication (rbd-mirror, cephfs-mirror, RGW multizone configuration) between the clusters. You are only constrained by the boundaries of your imagination.
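
For example, still from the cephadm shell on node0, a CephFS filesystem and an RGW service could be deployed through the orchestrator (the names testfs and testrgw are only placeholders):

root@ceph1-node0:/# ceph fs volume create testfs
root@ceph1-node0:/# ceph orch apply rgw testrgw --placement=1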

When you are done with your tests, you can simply destroy the environment with:

vagrant destroy -f

Vagrantfile:

# vi: set ft=ruby :
#
# In order to reduce the need to recreate all Vagrant boxes every time they
# get dirty, snapshot them and revert the snapshot instead.
# Two helpful scripts to do this easily can be found here:
# https://github.com/Devp00l/vagrant-helper-scripts
require 'json'

configFileName = 'vagrant.config.json'
CONFIG = File.file?(configFileName) && JSON.parse(File.read(File.join(File.dirname(__FILE__), configFileName)))

# Resolve a setting from vagrant.config.json (lowercase key), then from the
# environment (uppercase variable), then fall back to the given default.
def getConfig(name, default)
  down = name.downcase
  up = name.upcase
  CONFIG && CONFIG[down] ? CONFIG[down] : (ENV[up] ? ENV[up].to_i : default)
end

CLIENTS = getConfig('CLIENTS', 1)   # number of client VMs
NODES = getConfig('NODES', 3)       # number of Ceph node VMs
DISKS = getConfig('DISKS', 2)       # extra data disks per node
CLUSTER_ID = 1                      # used in hostnames, e.g. ceph1-node0

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.network "private_network", type: "dhcp"
  config.vm.box = "generic/ubuntu2004"

  config.hostmanager.manage_host = false
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.hostmanager.include_offline = true

  # Client VMs: small machines for testing access to the cluster services.
  (0..CLIENTS - 1).each do |i|
    config.vm.define "ceph#{CLUSTER_ID}-client#{i}" do |cl|
      cl.vm.provider :libvirt do |domain|
        domain.memory = 1024
        domain.cpus = 1
      end
      cl.vm.hostname = "ceph#{CLUSTER_ID}-client#{i}"
    end
  end

  # Ceph node VMs: each gets DISKS blank 10G disks that cephadm later turns into OSDs.
  (0..NODES - 1).each do |i|
    config.vm.define "ceph#{CLUSTER_ID}-node#{i}" do |node|
      node.vm.hostname = "ceph#{CLUSTER_ID}-node#{i}"
      node.vm.provider :libvirt do |domain|
        domain.memory = 4096
        domain.cpus = 2
        (0..DISKS - 1).each do |d|
          domain.storage :file, :size => '10G', :device => "vd#{(98+d).chr}#{i}"
        end
      end
      # Bootstrap the cluster from node0 with cephadm and add the remaining nodes.
      if i == 0
        node.vm.provision "shell", inline: <<-SHELL
          sudo wget https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm -P /root && chmod +x /root/cephadm
          sudo /root/cephadm add-repo --release pacific && /root/cephadm install
          sudo mkdir -p /etc/ceph
          sudo bash -x -c "export IPA=`ip addr show eth0 | grep 'inet\ ' | awk '{print \\$2}' | cut -d/ -f1` && /root/cephadm bootstrap --skip-firewalld --ssh-private-key /root/.ssh/id_rsa --ssh-public-key /root/.ssh/id_rsa.pub --mon-ip \\$IPA && cephadm shell ceph orch host add ceph#{CLUSTER_ID}-node1 && cephadm shell ceph orch host add ceph#{CLUSTER_ID}-node2 && cephadm shell ceph orch apply mon --placement=3 && cephadm shell ceph orch apply osd --all-available-devices"
        SHELL
      end
    end
  end

  config.vm.provision :hostmanager

  # Inject the host's SSH keypair so cephadm can reach every node over SSH.
  config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/id_rsa.pub"
  config.vm.provision "file", source: "~/.ssh/id_rsa", destination: "~/.ssh/id_rsa"
  config.vm.provision "shell", inline: <<-SHELL
    cat /home/vagrant/.ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys
    sudo cp -r /home/vagrant/.ssh /root/.ssh
  SHELL

  # Tighten log rotation and install podman (the container runtime used by cephadm) plus s3cmd.
  config.vm.provision "shell", inline: <<-SHELL
    sudo sed -i 's/weekly/daily/' /etc/logrotate.conf
    sudo sed -i 's/rotate\ 4/rotate\ 2/' /etc/logrotate.conf
    sudo sed -i 's/#compress/compress/' /etc/logrotate.conf
    sudo apt update; apt install -y python3 ca-certificates s3cmd
    source /etc/os-release
    echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
    curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
    sudo apt update; apt install -y podman
  SHELL
end