
Monitoring and Health

Status summary

# ceph -s
  cluster:
    id:     17055c93-db33-4af8-8dbf-6b64928220df
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-5,ceph-6,ceph-7
    mgr: ceph-5(active)
    mds: mycephfs-1/1/1 up  {0=ceph-6=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   2 pools, 44 pgs
    objects: 482  objects, 1.8 GiB
    usage:   5.7 GiB used, 66 GiB / 72 GiB avail
    pgs:     44 active+clean
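
If health is not HEALTH_OK, two stock commands help narrow it down (list each active warning, then follow changes live):

ceph health detail
ceph -w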

Disk usage overview, global and per pool

# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    72 GiB     66 GiB      5.7 GiB          7.96
POOLS:
    NAME            ID     USED        %USED     MAX AVAIL     OBJECTS
    cephfs_meta     4      232 KiB         0        21 GiB          23
    cephfs_data     5      1.8 GiB      4.11        42 GiB         459
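
A detailed variant adds extra per-pool columns (quota, raw usage):

ceph df detail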

Disk usage per OSD, mapped onto the CRUSH tree

# ceph osd df tree
ID CLASS WEIGHT  REWEIGHT SIZE   USE     DATA    OMAP   META     AVAIL  %USE VAR  PGS TYPE NAME
-1       0.07018        - 72 GiB 5.7 GiB 2.7 GiB 52 KiB  3.0 GiB 66 GiB 7.96 1.00   - root default
-3       0.02339        - 24 GiB 1.9 GiB 933 MiB 15 KiB 1024 MiB 22 GiB 7.96 1.00   -     host ceph-5
 0   hdd 0.02339  1.00000 24 GiB 1.9 GiB 933 MiB 15 KiB 1024 MiB 22 GiB 7.96 1.00  44         osd.0
-5       0.02339        - 24 GiB 1.9 GiB 933 MiB 18 KiB 1024 MiB 22 GiB 7.96 1.00   -     host ceph-6
 1   hdd 0.02339  1.00000 24 GiB 1.9 GiB 933 MiB 18 KiB 1024 MiB 22 GiB 7.96 1.00  44         osd.1
-7       0.02339        - 24 GiB 1.9 GiB 932 MiB 19 KiB 1024 MiB 22 GiB 7.96 1.00   -     host ceph-7
 2   hdd 0.02339  1.00000 24 GiB 1.9 GiB 932 MiB 19 KiB 1024 MiB 22 GiB 7.96 1.00  44         osd.2
                    TOTAL 72 GiB 5.7 GiB 2.7 GiB 52 KiB  3.0 GiB 66 GiB 7.96
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
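
For the CRUSH hierarchy alone, without the usage columns:

ceph osd tree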

Pool

Create Pool

ceph osd pool create {pool-name} {pg-num} [{pgp-num}]

# ceph osd pool create njajal 32 32
pool 'njajal' created
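
Since Luminous, a new pool should also be tagged with the application that will use it, otherwise the cluster reports a health warning. Assuming this test pool is meant for RBD:

ceph osd pool application enable njajal rbd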

List Pools

# ceph osd lspools
4 cephfs_meta
5 cephfs_data
6 njajal
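
A more detailed listing (replica size, pg_num and flags per pool):

ceph osd pool ls detail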

Rename Pools

ceph osd pool rename {current-pool-name} {new-pool-name}

# ceph osd pool rename njajal njajal_rename
pool 'njajal' renamed to 'njajal_rename'

Get & Set Pool Values

ceph osd pool get {pool-name} {key}
ceph osd pool set {pool-name} {key} {value}

size >> Sets the number of replicas for objects in the pool.
min_size >> Sets the minimum number of replicas required for I/O. 
pgp_num >> The effective number of placement groups to use when calculating data placement.
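
For example, to check and raise the replica count of the test pool created above:

ceph osd pool get njajal size
ceph osd pool set njajal size 3
ceph osd pool get njajal all (dump all settings at once)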

Delete Pools

ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

# ceph osd pool delete njajal_rename njajal_rename --yes-i-really-really-mean-it
pool 'njajal_rename' removed
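
Note: recent releases refuse pool deletion unless the monitors are explicitly told to allow it, e.g.:

ceph config set mon mon_allow_pool_delete true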

RBD Block Storage

Create RBD

rbd create --size {megabytes} {pool-name}/{image-name}

# rbd create --size 1024 nganu_pool/nganu_rbd

Listing Block Device Images

rbd ls {poolname}

# rbd ls nganu_pool
nganu_rbd
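
The long form also shows each image's size:

rbd ls -l nganu_pool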

Retrieving Image Information

rbd info {pool-name}/{image-name}

# rbd info nganu_pool/nganu_rbd
rbd image 'nganu_rbd':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        id: 1be456b8b4567
        block_name_prefix: rbd_data.1be456b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Thu Dec 26 12:05:41 2019
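
Not every kernel client supports all of the listed features; if a later rbd map fails because of them, the unsupported ones can be disabled per image, for example:

rbd feature disable nganu_pool/nganu_rbd object-map fast-diff deep-flatten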

RBD Resize

rbd resize --size {megabytes} {pool-name}/{image-name} (to increase)
rbd resize --size {megabytes} {pool-name}/{image-name} --allow-shrink (to decrease)

# rbd resize --size 2048 nganu_pool/nganu_rbd
Resizing image: 100% complete...done.

# rbd info nganu_pool/nganu_rbd
rbd image 'nganu_rbd':
        size 2 GiB in 512 objects

Delete RBD

rbd rm {pool-name}/{image-name}

# rbd rm nganu_pool/nganu_rbd
Removing image: 100% complete...done.

Mount RBD from ceph-client

rbd create -p nganu_pool rbd_mount --size 4096

> Mapping
rbd map {image-name} --pool {pool-name}

rbd map rbd_mount --pool nganu_pool
/dev/rbd0
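
> Optionally, confirm the mapping
rbd showmapped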

> Format
mkfs.ext4 /dev/rbd0

> Mount
mount /dev/rbd0 /mnt/cephrbd

df -h /mnt/cephrbd/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       3.9G   16M  3.6G   1% /mnt/cephrbd
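
> Clean up when finished (unmount, then unmap the device)
umount /mnt/cephrbd
rbd unmap /dev/rbd0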

Ceph Dashboard

> Enable module
ceph mgr module enable dashboard

> Set login user
ceph dashboard set-login-credentials mb00g <password>

> Set IP listen
ceph config set mgr mgr/dashboard/server_addr <ip_address>

> Set port listen
ceph config set mgr mgr/dashboard/server_port <port>
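
> Restart the module so the new address/port take effect, then check the dashboard URL
ceph mgr module disable dashboard
ceph mgr module enable dashboard
ceph mgr services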

Ceph File System

https://gist.github.com/mb00g/7b6d96b6f1108622326f2a8428f225aa
