==== GLUSTERFS CENTOS 6 ====
======== repo ======
vim /etc/yum.repos.d/gluster.repo
[rhel6.8-gluster]
name=RHEL 6.8 gluster repository
baseurl=http://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.8/
gpgcheck=0
enabled=1
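- if you prefer to script this instead of editing with vim, the same repo file can be written with a heredoc (same content as above):
cat > /etc/yum.repos.d/gluster.repo <<'EOF'
[rhel6.8-gluster]
name=RHEL 6.8 gluster repository
baseurl=http://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.8/
gpgcheck=0
enabled=1
EOF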
yum repolist
yum update
yum install glusterfs-server -y
===========
# architecture overview #
http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/
# installation reference #
http://gluster.readthedocs.io/en/latest/Install-Guide/Configure/
# install the EPEL repo #
yum install wget -y
wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum update -y
# For CentOS, install the special repo for gluster #
yum install centos-release-gluster -y
# install GlusterFS #
yum install glusterfs-server -y
# install XFS tools #
yum install xfsprogs -y
# on an EC2 environment, disable iptables #
service iptables stop
chkconfig iptables off
# Change security groups to allow GlusterFS nodes to connect to each other #
- simply add <all traffic> to the security group, or
- add these ports to the SG (an iptables sketch for the same rules follows the list)
TCP
24007 – Gluster Daemon
24008 – Management
24009 and greater (GlusterFS versions less than 3.4) OR
49152 and greater (GlusterFS versions 3.4 and later) – Each brick for every volume on your host requires its own port. For every new brick, one new port will be used, starting at 24009 for GlusterFS versions below 3.4 and 49152 for version 3.4 and above. If you have one volume with two bricks, you will need to open 24009 – 24010 (or 49152 – 49153).
38465 – 38467 – required if you use the Gluster NFS service.
The following ports are both TCP and UDP:
111 – portmapper
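- if you would rather keep iptables running than disable it, a minimal rule set for the ports above might look like this (the 172.30.99.0/24 source range is an assumption based on the node IPs used below):
# gluster daemon + management
iptables -I INPUT -p tcp -s 172.30.99.0/24 --dport 24007:24008 -j ACCEPT
# brick ports (GlusterFS 3.4+); widen the range if you have many bricks
iptables -I INPUT -p tcp -s 172.30.99.0/24 --dport 49152:49160 -j ACCEPT
# Gluster NFS service (only if you use it)
iptables -I INPUT -p tcp -s 172.30.99.0/24 --dport 38465:38467 -j ACCEPT
# portmapper (TCP and UDP)
iptables -I INPUT -p tcp -s 172.30.99.0/24 --dport 111 -j ACCEPT
iptables -I INPUT -p udp -s 172.30.99.0/24 --dport 111 -j ACCEPT
# persist the rules across reboots
service iptables save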
# start the gluster service on all nodes #
service glusterd start
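- optionally make glusterd start at boot (chkconfig is the standard tool on CentOS 6):
chkconfig glusterd on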
# create trusted pool (all servers that will be part of the cluster) #
# gluster peer probe <hostname of the other server, or IP address if you don't have DNS or /etc/hosts entries>
# run only on server 1
gluster peer probe server1   # not needed: a node does not probe itself
gluster peer probe server2
gluster peer probe server3
- or by IP address:
gluster peer probe 172.30.99.211
gluster peer probe 172.30.99.95
gluster peer probe 172.30.99.153
# check the pool on all servers #
- list all
gluster pool list
- status
gluster peer status
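- to run the same checks on every node from one box (assumes root ssh access to the peers; adjust the IP list):
for h in 172.30.99.211 172.30.99.95 172.30.99.153; do
  echo "== $h =="
  ssh root@$h "gluster pool list; gluster peer status"
done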
# Initialize devices (all servers) #
- in this example the device is /dev/xvdb
- create a partition
fdisk /dev/xvdb
- format it as XFS (512-byte inodes are recommended for Gluster bricks)
mkfs.xfs -i size=512 /dev/xvdb1
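- fdisk is interactive; a scripted alternative that turns the whole disk into one partition (assumes /dev/xvdb is empty and parted is installed):
parted -s /dev/xvdb mklabel msdos
parted -s /dev/xvdb mkpart primary xfs 1MiB 100%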
# Mount the partition as a Gluster "brick" (all servers) #
mkdir -p /export/xvdb1
mount /dev/xvdb1 /export/xvdb1
- add to fstab
echo "/dev/xvdb1 /export/xvdb1 xfs defaults 0 0" >> /etc/fstab
# setup a gluster volume (run on one server only) #
- replica 3 means every node keeps a full copy of each file
gluster volume create gv0 replica 3 172.30.99.211:/export/xvdb1/brick1 172.30.99.95:/export/xvdb1/brick1 172.30.99.153:/export/xvdb1/brick1
- from the docs (describing a two-brick replica 2 example): "Breaking this down into pieces, the first part says to create a gluster volume named gv0 (the name is arbitrary, gv0 was chosen simply because it's less typing than gluster_volume_0). Next, we tell it to make the volume a replica volume, and to keep a copy of the data on at least 2 bricks at any given time. Since we only have two bricks total, this means each server will house a copy of the data. Lastly, we specify which nodes to use, and which bricks on those nodes. The order here is important when you have more bricks…it is possible (as of the most current release as of this writing, Gluster 3.3) to specify the bricks in such a way that you would make both copies of the data reside on a single node. This would make for an embarrassing explanation to your boss when your bulletproof, completely redundant, always on super cluster comes to a grinding halt when a single point of failure occurs."
- check status
gluster volume info
- start the volume
gluster volume start gv0
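- optionally confirm that every brick process came online ("gluster volume status" is available since Gluster 3.3):
gluster volume status gv0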
# mount the volume gv0 on /var/www #
mkdir -p /var/www   (all nodes)
- add to fstab on each node; a glusterfs mount only uses the named server to fetch the volume info, so any peer works (one line per node is enough, e.g. each node pointing at itself):
echo "172.30.99.211:gv0 /var/www glusterfs defaults,_netdev 0 0" >> /etc/fstab
echo "172.30.99.95:gv0 /var/www glusterfs defaults,_netdev 0 0" >> /etc/fstab
echo "172.30.99.153:gv0 /var/www glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -t glusterfs 172.30.99.211:gv0 /var/www
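- a simple replication test (the repl-test file names are just an example): write through one node's mount, then the files should show up under /var/www on the other nodes and inside each brick at /export/xvdb1/brick1
for i in $(seq 1 10); do echo "hello $i" > /var/www/repl-test-$i; done
ls -l /var/www
- on the other nodes:
ls -l /var/www /export/xvdb1/brick1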
# set performance parameters #
performance.write-behind-window-size – the size in bytes to use for the per-file write-behind buffer. Default: 1MB.
performance.cache-refresh-timeout – the time in seconds a cached data file will be kept until data revalidation occurs. Default: 1 second.
performance.cache-size – the size in bytes to use for the read cache. Default: 32MB.
cluster.stripe-block-size – the size in bytes of the unit that will be read from or written to on the GlusterFS volume. Smaller values are better for smaller files and larger sizes for larger files. Default: 128KB.
performance.io-thread-count – the maximum number of threads used for IO. Higher numbers improve concurrent IO operations, provided your disks can keep up. Default: 16.
gluster volume set gv0 performance.write-behind-window-size 4MB
gluster volume set gv0 performance.cache-refresh-timeout 4
gluster volume set gv0 performance.cache-size 512MB
gluster volume set gv0 performance.cache-max-file-size 2MB
gluster volume set gv0 performance.io-thread-count 32
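- to review what was actually applied, "gluster volume info" lists the options reconfigured on the volume; "gluster volume get" (available from Gluster 3.7 on, so it should work with the 3.8 packages used here) also shows default values:
gluster volume info gv0
gluster volume get gv0 all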
# delete volume gv0 #
gluster volume stop gv0
gluster volume delete gv0
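- note: deleting the volume does not remove the data in the bricks. To reuse a brick directory for a new volume, the Gluster extended attributes and the .glusterfs metadata directory have to be cleared first (run on every node; double-check the path before the rm):
setfattr -x trusted.glusterfs.volume-id /export/xvdb1/brick1
setfattr -x trusted.gfid /export/xvdb1/brick1
rm -rf /export/xvdb1/brick1/.glusterfs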