Notes on using GlusterFS 3.12 with CentOS 7

1. Installing GlusterFS 3.12

yum -y update
yum -y install centos-release-gluster312
yum -y install glusterfs glusterfs-cli glusterfs-libs glusterfs-server
yum clean all
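
To confirm the expected release was installed, the client binary can be queried (a quick check added here, not part of the original notes):

glusterfs --version    # should report glusterfs 3.12.x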

2. Starting the glusterd service

sudo systemctl enable glusterd.service
sudo systemctl start glusterd.service
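
The service state itself can also be checked directly (step 3 below checks the cluster view through the gluster CLI instead; this extra check was not in the original notes):

sudo systemctl status glusterd.service    # should show active (running)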

3. Checking that glusterd is up

sudo gluster pool list
UUID					Hostname 	State
28eb8024-0a5f-40dc-a176-09f0bbd66f68	localhost	Connected

4. Mounting a dedicated disk for GlusterFS and creating the brick directory

  • This must be done on every node.
sudo parted -s -a optimal /dev/sdb mklabel msdos -- mkpart primary xfs 1 -1
sudo mkfs.xfs -i size=512 /dev/sdb1
sudo mkdir -p /glfs/vols

sudo tee -a /etc/fstab <<EOS
/dev/sdb1 /glfs/vols xfs defaults 0 0
EOS

sudo mount /glfs/vols
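
To confirm the filesystem ended up where expected (a verification step added here, not in the original log):

df -hT /glfs/vols    # should show /dev/sdb1 mounted as xfs on /glfs/vols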

5. Building the cluster

  • This only needs to be done on one node.

At this point, no other machine has joined the cluster yet:

sudo gluster peer status
Number of Peers: 0

sudo gluster pool list
UUID					Hostname 	State
4e32bd24-c6bf-4687-9d75-2f61b8e9b8d9	localhost	Connected

Add g2 to the cluster:

sudo gluster peer probe g2
peer probe: success.

sudo gluster peer status
Number of Peers: 1

Hostname: g2
Uuid: 441c6214-bd07-4fde-b99e-b0e7bd4e75fb
State: Peer in Cluster (Connected)

sudo gluster pool list
UUID					Hostname 	State
441c6214-bd07-4fde-b99e-b0e7bd4e75fb	g2       	Connected
4e32bd24-c6bf-4687-9d75-2f61b8e9b8d9	localhost	Connected
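
Note that gluster peer probe requires the peer hostnames to be resolvable from every node. If DNS is not available, /etc/hosts entries along these lines are one option (the addresses below are placeholders, not taken from the original setup):

# /etc/hosts on every node (example addresses)
192.168.33.11 g1
192.168.33.12 g2
192.168.33.13 g3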

6. Creating a volume

  • This only needs to be done on one node.
  • A replica count of 2 triggers a warning, but the command still succeeds. For safety, use three or more replicas (see the replica 3 + arbiter sketch at the end of this step).
sudo gluster volume create data replica 2 g1:/glfs/vols/data g2:/glfs/vols/data
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: data: success: please start the volume to access data
  • The volume then has to be started with sudo gluster volume start data before it can be used; the Status: Started in the output below implies this was done.
sudo gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 367964de-bad3-4e50-996b-9c4ee734af47
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: g1:/glfs/vols/data
Brick2: g2:/glfs/vols/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
sudo gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g1:/glfs/vols/data                    49152     0          Y       4172
Brick g2:/glfs/vols/data                    49152     0          Y       4062
Self-heal Daemon on localhost               N/A       N/A        Y       4193
Self-heal Daemon on g2                      N/A       N/A        Y       4083

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
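
As the warning above notes, a replica 2 layout is prone to split-brain. If a third host is available from the start, an arbiter brick avoids this while storing only metadata on the third node. A hedged sketch, assuming a host g3 with the same brick path (these commands were not run in the original notes):

sudo gluster volume create data replica 3 arbiter 1 g1:/glfs/vols/data g2:/glfs/vols/data g3:/glfs/vols/data
sudo gluster volume start data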

7. Writing to the volume

  • Mount the GlusterFS volume (using g2 as the volfile server) onto a local directory on g1:
sudo mount -t glusterfs g2:/data /mnt
cd /mnt
echo 'hamutarou' > mattakunanoda.txt
  • Running ls on /glfs/vols/data on g2 shows that the file has been replicated there:
[vagrant@g2 data]$ ls -lsatr
total 4
0 drwxr-xr-x.  3 root    root     18 Jun 22 10:55 ..
0 drw-------. 12 root    root    210 Jun 22 11:22 .glusterfs
0 drwxrwxrwx.  3 root    root     49 Jun 22 11:22 .
4 -rw-rw-r--.  2 vagrant vagrant  10 Jun 22 11:22 mattakunanoda.txt
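
To make the client mount persistent across reboots, an fstab entry can be added on g1 (a sketch assuming the same mount point; _netdev delays the mount until the network is up):

g2:/data /mnt glusterfs defaults,_netdev 0 0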

8. Taking down the node used for the mount

  • What happens if g2 is killed?
vagrant halt g2 -f
  • Checking the volume status on g1 shows that g2 has disappeared from the list of participating nodes:
[vagrant@g1 mnt]$ sudo gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g1:/glfs/vols/data                    49152     0          Y       4172
Self-heal Daemon on localhost               N/A       N/A        Y       4193

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
  • Even in this state, g1 can still write to /mnt; the writes land on the remaining brick, g1:/glfs/vols/data/ (see the mount-option note at the end of this step for why the client keeps working).
  • Now bring g2 back up:
[vagrant@g1 mnt]$ sudo gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g1:/glfs/vols/data                    49152     0          Y       4172
Brick g2:/glfs/vols/data                    49152     0          Y       1140
Self-heal Daemon on localhost               N/A       N/A        Y       4193
Self-heal Daemon on g2                      N/A       N/A        Y       929

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
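
The server named in the mount command is only used to fetch the volume layout at mount time; after that the client talks to all bricks directly, which is why writes kept working while g2 was down. To also tolerate the named server being down at mount time, a backup volfile server can be passed as a mount option (a sketch, not part of the original log):

sudo mount -t glusterfs -o backup-volfile-servers=g1 g2:/data /mnt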

9. Adding a third replica

  • Starting glusterd and mounting the dedicated disk are the same as on the other nodes.

  • Add g3 to the cluster:

[vagrant@g1 ~]$ sudo gluster peer probe g3
peer probe: success.
  • Add g3's brick to the volume:
[vagrant@g1 ~]$ sudo gluster volume add-brick data replica 3 g3:/glfs/vols/data
volume add-brick: success
  • Verify:
[vagrant@g3 ~]$ sudo gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick g1:/glfs/vols/data                    49152     0          Y       1147
Brick g2:/glfs/vols/data                    49152     0          Y       1147
Brick g3:/glfs/vols/data                    49152     0          Y       4218
Self-heal Daemon on localhost               N/A       N/A        Y       4239
Self-heal Daemon on g2                      N/A       N/A        Y       3478
Self-heal Daemon on g1                      N/A       N/A        Y       3446

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
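
Existing files still have to be copied onto the new brick. Self-heal normally handles this in the background, but a full heal can also be kicked off and monitored explicitly (these commands were not run in the original notes):

sudo gluster volume heal data full
sudo gluster volume heal data info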

10. Adding files on g1 and g2 while g3 is down

  • g1 and g2 each report two entries that need healing:
[vagrant@g1 ~]$ sudo gluster vol heal data info
Brick g1:/glfs/vols/data
/mnist_test_seq.npy
/
Status: Connected
Number of entries: 2

Brick g2:/glfs/vols/data
/mnist_test_seq.npy
/
Status: Connected
Number of entries: 2

Brick g3:/glfs/vols/data
Status: Transport endpoint is not connected
Number of entries: -
  • Bring glusterd on g3 back up; running heal info again shows the pending entries draining as self-heal catches up:
[vagrant@g1 ~]$ sudo gluster vol heal data info
Brick g1:/glfs/vols/data
/mnist_test_seq.npy
Status: Connected
Number of entries: 1

Brick g2:/glfs/vols/data
/mnist_test_seq.npy
Status: Connected
Number of entries: 1

Brick g3:/glfs/vols/data
Status: Connected
Number of entries: 0
[vagrant@g1 ~]$ sudo gluster vol heal data info
Brick g1:/glfs/vols/data
Status: Connected
Number of entries: 0

Brick g2:/glfs/vols/data
Status: Connected
Number of entries: 0

Brick g3:/glfs/vols/data
Status: Connected
Number of entries: 0
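
If pending entries do not drain on their own, a heal can be triggered manually and any split-brain entries inspected (added here as a sketch, not from the original log):

sudo gluster volume heal data
sudo gluster volume heal data info split-brain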

11. Farewell, g3

$ sudo gluster volume remove-brick data replica 2 g3:/glfs/vols/data force

Re-adding the same brick afterwards fails with an error saying it is already part of a volume, so the procedure described at the link below was needed:

http://nuke.hateblo.jp/entry/20121128/1354085790
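
For reference, the usual cause of that error is leftover GlusterFS metadata on the brick directory. A rough sketch of the kind of cleanup involved, run on g3 only if the brick's contents are disposable (verify against the linked page before relying on this):

sudo setfattr -x trusted.glusterfs.volume-id /glfs/vols/data
sudo setfattr -x trusted.gfid /glfs/vols/data
sudo rm -rf /glfs/vols/data/.glusterfs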
