GlusterFS Cheat Sheet
# Gluster community download
https://download.gluster.org/pub/gluster/glusterfs/
### Add a new host to the volume ###
## Probe the new host ##
gluster peer probe <hostname or IP>
# E.G. gluster peer probe debian-master
## Add the new host's brick to the volume ##
gluster volume add-brick <volume name> replica <increment previous value by 1 or set to total number of hosts> <hostname>:/<brick location>
# E.G. gluster volume add-brick webdata replica 5 debian-master:/webdata
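After adding a brick to a replicated volume, it is usually worth triggering a heal so existing data gets copied to the new replica (volume name follows the example above):
gluster volume heal webdata full #-> copy existing data onto the newly added brick
gluster volume heal webdata info #-> check self-heal progress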
#Brick -> the basic unit of storage, a directory on a server in the trusted storage pool.
#Volume -> a logical collection of bricks.
#Cluster -> a group of linked computers working together as a single computer.
#Distributed File System -> a filesystem in which data is spread across multiple storage nodes and accessible to clients over a network.
#Client -> a machine which mounts the volume.
#Server -> a machine hosting the actual file system in which the data is stored.
#Replicate -> making multiple copies of data to achieve high redundancy.
#FUSE -> a loadable kernel module that lets non-privileged users create their own file systems without editing kernel code.
#glusterd -> the Gluster management daemon that runs on all servers in the trusted storage pool.
#RAID -> Redundant Array of Inexpensive Disks, a technology that provides increased storage reliability through redundancy.
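An illustrative command tying the terms above together (hostnames and paths are assumed placeholders):
gluster volume create demo-vol replica 2 server1:/data/brick1 server2:/data/brick1 #-> two bricks form one replicated volume across the cluster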
#TCP ports 111, 24007, 24008 on all Gluster servers
#TCP ports 24009-(24009 + number of bricks across all volumes) on all Gluster servers
#E.G. 5 bricks -> open TCP ports 24009 to 24014
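A minimal per-port sketch of the above, assuming iptables (a broader allow-by-source rule appears later in this sheet; size the brick range to your brick count):
sudo iptables -I INPUT -p tcp --dport 111 -j ACCEPT #-> portmapper
sudo iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT #-> glusterd management ports
sudo iptables -I INPUT -p tcp --dport 24009:24014 -j ACCEPT #-> brick ports, sized here for 5 bricks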
glusterfs -V #-> Check the version of installed glusterfs
gluster #-> Gluster Console Manager in interactive mode
sudo vi /etc/hosts #-> modify /etc/hosts file if DNS is not available
192.168.13.16 gluster1.storage.local gluster1
192.168.13.17 gluster2.storage.local gluster2
192.168.13.20 client.storage.local client
gluster peer status #-> Verify the status of the trusted storage pool
gluster peer probe gluster2-server #-> Add servers to the trusted storage pool
gluster peer detach gluster2-server #-> Remove a server in storage pool
gluster pool list #-> List the storage pool.
mkdir -p /data/gluster/gvol0 #-> Create a brick (directory) called "gvol0" in the mounted file system on both nodes
gluster volume create gvol0 replica 2 gluster1.storage.local:/data/gluster/gvol0 gluster2.storage.local:/data/gluster/gvol0 #-> Create the volume named "gvol0" with two replicas
gluster volume start gvol0 #-> Start volume
gluster volume info #-> Show the volume information
gluster volume info gvol0 #-> Show the volume information of volume gvol0
gluster volume start test-volume #-> Start volume
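To check that brick processes are up after starting a volume (using the gvol0 volume from the examples above):
gluster volume status gvol0 #-> show brick processes, ports, and online status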
mkfs.ext4 /dev/sdb1 #-> Format partition
mkdir -p /data/gluster #-> Create directory called /data/gluster
mount /dev/sdb1 /data/gluster #-> Mount the disk on a directory called /data/gluster
mount -t glusterfs gluster1-server:/test-volume /mnt/glusterfs #-> Mount a Gluster volume on all Gluster servers
cat /proc/mounts | grep glusterfs #-> Verify the volume is mounted
#/etc/fstab
storage.example.lan:/test-volume /mnt glusterfs defaults,_netdev 0 0
gluster1-server:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0 #-> Edit the /etc/fstab file on all Gluster servers
echo "/dev/sdb1 /data/gluster ext4 defaults 0 0" | sudo tee --append /etc/fstab #->Add an entry to /etc/fstab
sudo iptables -I INPUT -p all -s <ip-address> -j ACCEPT #-> Configure the firewall to allow all connections within a cluster
Red Hat Based Systems
chkconfig glusterd on #-> Start the glusterd daemon every time the system boots (pre-systemd releases)
Debian Based Systems
sudo systemctl enable glusterd #-> Enable the glusterd service on all gluster nodes
sudo systemctl start glusterd #-> Start the glusterd service on all gluster nodes
Clients
dmesg | grep -i fuse #-> Verify FUSE module is installed
mkdir -p /mnt/glusterfs #-> Create a directory to mount the GlusterFS filesystem
mount -t glusterfs gluster1.storage.local:/gvol0 /mnt/glusterfs #-> Mount the GlusterFS filesystem to /mnt/glusterfs
df -hP /mnt/glusterfs #-> Verify the mounted GlusterFS filesystem
gluster1.storage.local:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev 0 0 #-> Add to /etc/fstab for automatically mounting
Benchmarking & Testing
Servers
mount -t glusterfs gluster1.storage.local:/gvol0 /mnt #-> Mount GlusterFS volume on the same storage node
# Data inside the /mnt directory of both nodes will always be the same (replication)
ls -l /mnt/ #-> Verify the created files
poweroff #-> Shutdown gluster node to test HA on client
Clients
touch /mnt/glusterfs/file1 #-> Create some files on the mounted filesystem
ls -l /mnt/glusterfs/ #-> Verify the created files
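A quick HA check tying the client and server steps together (the file name is an assumed placeholder):
touch /mnt/glusterfs/ha-test #-> create a file while both nodes are up
# power off one gluster node (see the Servers section above), then:
ls -l /mnt/glusterfs/ #-> the mount should still respond, served by the surviving replica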
Tuning
gluster volume set gvol0 network.ping-timeout "5" #-> set network ping timeout to 5 seconds from default 42 on all gluster nodes
gluster volume get gvol0 network.ping-timeout #-> Verify network ping timeout
#network.ping-timeout (default 42 secs) -> the duration the client waits to check if the server is responsive.
#When a ping timeout happens, there is a network disconnect between client and server, and all resources held by the server on behalf of the client get cleaned up.
#When a reconnection happens, all resources must be re-acquired before the client can resume operations: locks are re-acquired and the lock tables updated.
#This reconnect is a very expensive operation and should be avoided.
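A couple of other commonly tuned options, shown as a sketch (the values are illustrative assumptions, not recommendations):
gluster volume set gvol0 performance.cache-size 256MB #-> size of the read (io-cache) cache
gluster volume set gvol0 auth.allow 192.168.13.* #-> restrict which client IPs may mount the volume
gluster volume get gvol0 all #-> list all volume options and their current values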
RDMA
The glusterd process listens on both TCP and RDMA if an RDMA device is found. The port used for RDMA is 24008.
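The transport is chosen when the volume is created; a sketch assuming hypothetical hosts and brick paths:
gluster volume create rdma-vol transport rdma gluster1:/data/brick1 gluster2:/data/brick1 #-> RDMA-only volume
gluster volume create mixed-vol transport tcp,rdma gluster1:/data/brick2 gluster2:/data/brick2 #-> accept both transports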
Troubleshooting
sudo glusterd --debug #-> run glusterd in the foreground with debug-level logging
sudo netstat -ntlp | grep gluster #-> list listening gluster ports and their processes
netstat -tlpn | grep 24007 #-> check that the glusterd management port is listening
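The GlusterFS logs are the first place to look; on most installs they live under /var/log/glusterfs:
ls /var/log/glusterfs/ #-> glusterd, brick, and client mount logs
tail -f /var/log/glusterfs/glusterd.log #-> follow the management daemon log (filename may vary by version)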