GFS2 Installation Guide
Environment: ProLinux 8.6
Verifying that /dev/sdc is the shared volume:
1. On DB1, write one random block: dd if=/dev/urandom of=/dev/sdc bs=4K count=1
2. On DB1 and DB2, run hexdump -C /dev/sdc and check that both nodes show the same bytes.
3. If they match, the shared volume is confirmed (a hash-based comparison is shown below).
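A quicker way to compare than reading the hex dump by eye (a minimal sketch, not part of the original check; the 4K read matches the block written by dd above):
dd if=/dev/sdc bs=4K count=1 2>/dev/null | sha256sum   # run on DB1 and DB2; the hashes must match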
On all nodes:
1. dnf install lvm2-lockd gfs2-utils dlm-lib -y
2. Add 10.0.2.22 GFS-DB1 and 10.0.2.21 GFS-DB2 to /etc/hosts
3. Add /etc/yum.repos.d/CentOS-Stream-HighAvailability.repo with mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=HighAvailability (a sample repo file is shown after step 10)
4. Add /etc/yum.repos.d/CentOS-Stream-ResilientStorage.repo with mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=ResilientStorage
5. Add /etc/yum.repos.d/CentOS-Stream-AppStream.repo with mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=AppStream
6. dnf install pcs pacemaker fence-agents-all lvm2-lockd gfs2-utils dlm -y --enablerepo=ha --enablerepo=resilientstorage --enablerepo=centos_appstream --nogpgcheck
7. passwd hacluster
8. pcs host auth GFS-DB1 GFS-DB2
9. pcs cluster setup gfs2_cluster --start GFS-DB1 GFS-DB2
10. pcs cluster start --all; pcs cluster enable --all
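A sample repo file for step 3 (the [ha] section id is an assumption chosen to match --enablerepo=ha in step 6; the gpgcheck/enabled values are likewise assumptions):
# /etc/yum.repos.d/CentOS-Stream-HighAvailability.repo
[ha]
name=CentOS Stream 8 - HighAvailability
mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=HighAvailability
gpgcheck=0
enabled=0
The files in steps 4 and 5 follow the same pattern, with section ids resilientstorage and centos_appstream to match step 6.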
On both nodes:
11-1. In /etc/lvm/lvm.conf, set use_lvmlockd = 1 (uncomment the line and change 0 → 1)
11-2. In /etc/lvm/lvm.conf, set use_devicesfile = 1 (uncomment the line and change 0 → 1)
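The same edit can be scripted instead of using vi (a minimal sketch; it assumes the two options appear as single commented-out lines in the stock lvm.conf):
sed -i 's/^\s*#\?\s*use_lvmlockd = .*/use_lvmlockd = 1/' /etc/lvm/lvm.conf
sed -i 's/^\s*#\?\s*use_devicesfile = .*/use_devicesfile = 1/' /etc/lvm/lvm.conf
lvmconfig global/use_lvmlockd devices/use_devicesfile   # verify the effective values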
On node 1:
12. pcs property set no-quorum-policy=freeze
12-1. pcs property set stonith-enabled=false # fencing disabled for this test setup; a production GFS2 cluster needs working fencing
13. pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence
14. pcs resource clone locking interleave=true
15. pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence
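Putting lvmlockd in the same locking group after dlm guarantees that dlm starts first on each node, which lvmlockd depends on. The resulting group can be reviewed with (command per pcs 0.10 on RHEL 8-based distributions):
pcs resource config locking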
On both nodes:
16. systemctl restart lvmlockd dlm
17. pcs resource refresh
18. pcs status --full
On node 1:
19. vgcreate --shared shared_vg1 /dev/sdc
Physical volume "/dev/sdc" successfully created.
Volume group "shared_vg1" successfully created
VG shared_vg1 starting dlm lockspace
Starting locking. Waiting until locks are ready...
20. lvmdevices --adddev /dev/sdc
21. vgchange --lockstart shared_vg1
VG shared_vg1 starting dlm lockspace
Starting locking. Waiting until locks are ready...
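As a quick sanity check (not in the original notes), vgs should now report the VG as shared; the last character of the Attr column is "s" for a shared VG:
vgs shared_vg1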
22. lvcreate --activate sy -L500G -n shared_lv1 shared_vg1
Logical volume "shared_lv1" created.
23. lvcreate --activate sy -L500G -n shared_lv2 shared_vg1
Logical volume "shared_lv2" created.
24. mkfs.gfs2 -j2 -p lock_dlm -t gfs2_cluster:gfs2-demo1 /dev/shared_vg1/shared_lv1
/dev/shared_vg1/shared_lv1 is a symbolic link to /dev/dm-2
This will destroy any data on /dev/dm-2
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done
Building resource groups: Done
Creating quota file: Done
Writing superblock and syncing: Done
Device: /dev/shared_vg1/shared_lv1
Block size: 4096
Device size: 500.00 GB (131072000 blocks)
Filesystem size: 500.00 GB (131071997 blocks)
Journals: 2
Journal size: 128MB
Resource groups: 2001
Locking protocol: "lock_dlm"
Lock table: "gfs2_cluster:gfs2-demo1"
UUID: eda73b20-6fb5-4b37-b3e5-6f98bc61d483
25. mkfs.gfs2 -j2 -p lock_dlm -t gfs2_cluster:gfs2-demo2 /dev/shared_vg1/shared_lv2
/dev/shared_vg1/shared_lv2 is a symbolic link to /dev/dm-3
This will destroy any data on /dev/dm-3
Are you sure you want to proceed? [y/n] y
Discarding device contents (may take a while on large devices): Done
Adding journals: Done
Building resource groups: Done
Creating quota file: Done
Writing superblock and syncing: Done
Device: /dev/shared_vg1/shared_lv2
Block size: 4096
Device size: 500.00 GB (131072000 blocks)
Filesystem size: 500.00 GB (131071997 blocks)
Journals: 2
Journal size: 128MB
Resource groups: 2001
Locking protocol: "lock_dlm"
Lock table: "gfs2_cluster:gfs2-demo2"
UUID: e1666325-a1e5-4088-843f-9d8f3974f07a
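To read the lock table back from the superblock later, for example to confirm that the cluster name given to -t matches the pcs cluster name, tunegfs2 can list it (a quick check, assuming gfs2-utils is installed as in step 6):
tunegfs2 -l /dev/shared_vg1/shared_lv1
tunegfs2 -l /dev/shared_vg1/shared_lv2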
26. pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd
27. pcs resource create sharedlv2 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv2 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd
28. pcs resource clone shared_vg1 interleave=true
29. pcs constraint order start locking-clone then shared_vg1-clone
Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start)
30. pcs constraint colocation add shared_vg1-clone with locking-clone
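Both the ordering and colocation constraints can be listed afterwards to confirm they were created:
pcs constraint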
On all nodes:
31. lvs # check that the shared LVs are visible on every node; if a node cannot see them, it may also need lvmdevices --adddev /dev/sdc run locally, since the LVM devices file is per host
On node 1:
32. pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device="/dev/shared_vg1/shared_lv1" directory="/mnt/gfs1" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence # never put these mounts in /etc/fstab (see the mount-point note after step 33)
33. pcs resource create sharedfs2 --group shared_vg1 ocf:heartbeat:Filesystem device="/dev/shared_vg1/shared_lv2" directory="/mnt/gfs2" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence
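Note that these commands do not create the mount points themselves; if /mnt/gfs1 and /mnt/gfs2 do not already exist, create them on both nodes (depending on the resource-agents version, the Filesystem agent may refuse to start without an existing directory):
mkdir -p /mnt/gfs1 /mnt/gfs2   # run on both nodes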
On all nodes:
34. pcs resource refresh
35. pcs resource status
* Clone Set: locking-clone [locking]:
* Started: [ GFS-DB1 GFS-DB2 ]
* Clone Set: shared_vg1-clone [shared_vg1]:
* Started: [ GFS-DB1 GFS-DB2 ]
36. mount | grep gfs2
/dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime)
/dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime)
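As a final cross-node check (not part of the original steps), write a file on one node and confirm the other node sees it:
touch /mnt/gfs1/hello_from_db1   # on GFS-DB1
ls -l /mnt/gfs1                  # on GFS-DB2, the file should appear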