Resizing a single-disk pool

1. Exec into the cstor-pool-mgmt container using kubectl exec and install parted

Get the pool pod name using the kubectl get pods -n openebs command and exec into the container. Then install the parted tool with apt-get install parted once inside the cstor-pool-mgmt container.

$ kubectl exec -it cstor-pool-1fth-7fbbdfc747-sh25t -n openebs -c cstor-pool-mgmt bash
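
Once inside the container, install parted. A minimal sketch, assuming the container image is Debian/Ubuntu based and has apt available:

$ apt-get update && apt-get install -y parted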

2. Run zpool list to get the pool name

$ zpool list
NAME                                         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf  9.94G   220K  9.94G         -     0%     0%  1.00x  ONLINE  -

3. Set the zpool autoexpand property to on (it defaults to off)

$ zpool set autoexpand=on cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf
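
To confirm the property was applied, it can be read back with zpool get:

$ zpool get autoexpand cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf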

4. Resize the disk used by the pool

If the underlying disk has already been resized, skip ahead; otherwise expand it from your cloud provider or storage backend first.
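
For the Google PersistentDisk used in this example, the resize can be done with gcloud. A sketch, assuming the disk is named pdisk2 and sits in zone us-central1-a (both names are assumptions; substitute your own):

$ gcloud compute disks resize pdisk2 --size=20GB --zone=us-central1-a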

5. Get the name of the expanded device in use by the pool with the fdisk -l command, then run parted /dev/<device-name> print to list the partition layout on the device. When prompted, type Fix to let the GPT use the newly available space.
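
For example, assuming the expanded disk shows up as /dev/sdb:

$ fdisk -l /dev/sdb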

$ parted /dev/sdb print

Warning: Not all of the space available to /dev/sdb appears to be used, you can
fix the GPT to use all of the space (an extra 20971520 blocks) or continue with
the current setting?
Fix/Ignore? Fix
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  10.7GB  10.7GB  zfs          zfs-d97901ec3aa0fb69
 9      10.7GB  10.7GB  8389kB

6. Remove the small reserved buffer partition (partition 9)

$ parted /dev/sdb rm 9

7. Expand the partition holding the zpool

$ parted /dev/sdb resizepart 1 100%

sh: 1: udevadm: not found
sh: 1: udevadm: not found
Information: You may need to update /etc/fstab.

8. Check the partition size again using parted /dev/<device-name> print

$ parted /dev/sdb print

Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  21.5GB  21.5GB  zfs          zfs-d97901ec3aa0fb69

9. The partition size has changed from 10GB to 20GB. Now tell the zpool to bring the specified physical device online and expand it using the following command.

Note: Replace the disk name below with the disk name obtained from the zpool status command.

$ zpool online -e cstor-5be1d388-60d3-11e9-8e67-42010aa00fcf /dev/disk/by-id/scsi-0Google_PersistentDisk_pdisk2
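
The pool should now report the additional capacity; re-check with zpool list:

$ zpool list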

10. Restart the NDM pod scheduled on the same node as the pool to reflect the updated size in the Disk custom resource

After the restart, make sure the NDM pod comes back to the Running state (see the sketch below).
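
A minimal sketch of the restart, assuming the NDM pod names contain "ndm" (verify the actual pod names and node placement in your cluster); the NDM DaemonSet recreates the deleted pod automatically:

$ kubectl get pods -n openebs -o wide | grep ndm
$ kubectl delete pod <ndm-pod-on-pool-node> -n openebs
$ kubectl get pods -n openebs -o wide | grep ndm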
