- Two machines, same model (glus01, glus02 | one HDD each), identical configuration
- Partitioned with LVM for GlusterFS; mounted under /data as data1-4
- Originally planned to use each dataN mountpoint directly as a brick, but judged that lost+found would get in the way, so dug one more dataN directory below each
- Volume created as stripe (2+2)
- This was the third attempt during testing, hence "vol3"
- Was testing geo-replication (touch remained broken even after stopping geo-replication)
- geo-replication itself appeared to behave as intended
- The touch command was run after cd'ing into the mountpoint
- mkdir and editing with vim: OK; touch and echo "hoge" > hoge.txt: NG (a timing issue?)
- Even when touch failed, persistently repeating the same command would sometimes succeed
- The volume has since been recreated, so the status/info below were salvaged from logs
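The brick layout described above (one extra dataN directory below each ext4 mountpoint, so that lost+found stays out of the brick) can be sketched as follows. This is only an illustration: BASE stands in for /data, and the real machines use LVM-backed ext4 logical volumes mounted at /data/data1-4.

```shell
# Sketch of the brick directory layout from the notes above.
# BASE is a stand-in for /data; on the real hosts each /data/dataN
# is an ext4 mountpoint created from an LVM logical volume.
BASE="${BASE:-/tmp/glus-demo}"
for n in 1 2 3 4; do
    # The brick lives one level below the mountpoint so that ext4's
    # lost+found directory does not end up inside the brick.
    mkdir -p "$BASE/data$n/data$n"
done
ls "$BASE"
```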
glusterfs-geo-replication-3.2.2-1.el6.x86_64
glusterfs-fuse-3.2.2-1.el6.x86_64
glusterfs-core-3.2.2-1.el6.x86_64
glusterfs-rdma-3.2.2-1.el6.x86_64
fuse-2.8.3-3.el6_1.x86_64
rsync-3.0.8-1.el6.x86_64 <-- backported from Fedora 15
# gluster volume create vol3 stripe 2 glus01:/data/data1/data1 glus01:/data/data2/data2 glus02:/data/data1/data1 glus02:/data/data2/data2
# gluster volume geo-replication vol3 ssh://root@glus01:file:///root/geo-r start
# mount.glusterfs glus01:vol3 /mnt/vol3
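For reference, the failing sequence from the notes can be replayed as below. DIR here is a hypothetical local directory (where everything trivially succeeds); on the actual glusterfs mount /mnt/vol3, mkdir and vim edits succeeded while touch and shell redirection failed intermittently, occasionally succeeding on retry.

```shell
# The command sequence reported against the stripe volume, replayed
# against a plain local directory (DIR). On the real mount /mnt/vol3
# the commented results were observed instead.
DIR="${DIR:-/tmp/vol3-demo}"
mkdir -p "$DIR" && cd "$DIR"
mkdir -p testdir          # mkdir: reported OK on the volume
touch hoge.txt            # touch: reported NG (retrying sometimes succeeded)
echo "hoge" > hoge.txt    # redirect: also reported NG
```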
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_glus01-lv_root
ext4 50G 1.3G 46G 3% /
tmpfs tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 ext4 485M 31M 430M 7% /boot
/dev/mapper/vg_glus01-lv_data1
ext4 20G 173M 19G 1% /data/data1
/dev/mapper/vg_glus01-lv_data2
ext4 20G 173M 19G 1% /data/data2
/dev/mapper/vg_glus01-lv_data3
ext4 20G 172M 19G 1% /data/data3
/dev/mapper/vg_glus01-lv_data4
ext4 20G 172M 19G 1% /data/data4
/dev/mapper/vg_glus01-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_glus01-lv_data1 on /data/data1 type ext4 (rw)
/dev/mapper/vg_glus01-lv_data2 on /data/data2 type ext4 (rw)
/dev/mapper/vg_glus01-lv_data3 on /data/data3 type ext4 (rw)
/dev/mapper/vg_glus01-lv_data4 on /data/data4 type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
Volume Name: vol3
Type: Distributed-Stripe
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: glus01:/data/data1/data1
Brick2: glus01:/data/data2/data2
Brick3: glus02:/data/data1/data1
Brick4: glus02:/data/data2/data2
Options Reconfigured:
geo-replication.indexing: on
Given volfile:
+------------------------------------------------------------------------------+
1: volume vol3-client-0
2: type protocol/client
3: option remote-host glus01
4: option remote-subvolume /data/data1/data1
5: option transport-type tcp
6: end-volume
7:
8: volume vol3-client-1
9: type protocol/client
10: option remote-host glus01
11: option remote-subvolume /data/data2/data2
12: option transport-type tcp
13: end-volume
14:
15: volume vol3-client-2
16: type protocol/client
17: option remote-host glus02
18: option remote-subvolume /data/data1/data1
19: option transport-type tcp
20: end-volume
21:
22: volume vol3-client-3
23: type protocol/client
24: option remote-host glus02
25: option remote-subvolume /data/data2/data2
26: option transport-type tcp
27: end-volume
28:
29: volume vol3-stripe-0
30: type cluster/stripe
31: subvolumes vol3-client-0 vol3-client-1
32: end-volume
33:
34: volume vol3-stripe-1
35: type cluster/stripe
36: subvolumes vol3-client-2 vol3-client-3
37: end-volume
38:
39: volume vol3-dht
40: type cluster/distribute
41: subvolumes vol3-stripe-0 vol3-stripe-1
42: end-volume
43:
44: volume vol3-write-behind
45: type performance/write-behind
46: subvolumes vol3-dht
47: end-volume
48:
49: volume vol3-read-ahead
50: type performance/read-ahead
51: subvolumes vol3-write-behind
52: end-volume
53:
54: volume vol3-io-cache
55: type performance/io-cache
56: subvolumes vol3-read-ahead
57: end-volume
58:
59: volume vol3-quick-read
60: type performance/quick-read
61: subvolumes vol3-io-cache
62: end-volume
63:
64: volume vol3-stat-prefetch
65: type performance/stat-prefetch
66: subvolumes vol3-quick-read
67: end-volume
68:
69: volume vol3
70: type debug/io-stats
71: option latency-measurement off
72: option count-fop-hits off
73: subvolumes vol3-stat-prefetch
74: end-volume
+------------------------------------------------------------------------------+