
@oki
Created April 26, 2022 17:34
root@ceph-admin ~ # ceph -s
  cluster:
    id:     9545503e-bf20-11ec-bd53-51523cfe5c45
    health: HEALTH_WARN
            OSD count 2 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum ceph-osd3,ceph-osd2,ceph-mon (age 2h)
    mgr: ceph-admin.qyqfni(active, since 5d), standbys: ceph-osd3.yhbita
    osd: 2 osds: 2 up (since 5d), 2 in (since 5d)

  data:
    pools:   2 pools, 33 pgs
    objects: 23.43k objects, 91 GiB
    usage:   178 GiB used, 222 GiB / 400 GiB avail
    pgs:     33 active+clean
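The HEALTH_WARN above fires because the cluster has fewer OSDs (2) than the configured default pool replication factor (osd_pool_default_size = 3). As a rough sketch, the check reduces to a simple comparison; the helper below is illustrative, not actual Ceph code:

```python
# Hypothetical sketch of the health check behind the warning above:
# warn when the number of OSDs is below osd_pool_default_size.
def osd_count_warning(num_osds, osd_pool_default_size):
    if num_osds < osd_pool_default_size:
        return f"OSD count {num_osds} < osd_pool_default_size {osd_pool_default_size}"
    return None

# Values from the `ceph -s` output above: 2 OSDs, default size 3.
print(osd_count_warning(2, 3))  # → OSD count 2 < osd_pool_default_size 3
```

Clearing the warning typically means either adding a third OSD or lowering the default (e.g. `ceph config set global osd_pool_default_size 2`), keeping in mind that reducing replication also reduces redundancy.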
root@ceph-admin ~ # ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    400 GiB  222 GiB  178 GiB  178 GiB       44.38
TOTAL  400 GiB  222 GiB  178 GiB  178 GiB       44.38

--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  16 KiB        2  32 KiB       0    101 GiB
TestPool                2   32  88 GiB   23.43k  176 GiB  46.56    101 GiB
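The pool figures are self-consistent with 2-way replication (the only option with 2 OSDs): USED is STORED times the replica count, and %USED is USED over USED plus the raw space still available to the pool. A small sanity check with the numbers from the table (the variable names are illustrative, not a Ceph API):

```python
# Reproduce TestPool's USED and %USED columns from the `ceph df` output,
# assuming a replicated pool with size=2 (only 2 OSDs exist).
REPLICAS = 2

stored_gib = 88      # STORED column (logical data)
max_avail_gib = 101  # MAX AVAIL column (usable, already divided by replica count)

used_gib = stored_gib * REPLICAS  # raw space consumed by the replicas
pct_used = 100 * used_gib / (used_gib + max_avail_gib * REPLICAS)

print(used_gib)            # 176, matching the USED column
print(round(pct_used, 2))  # 46.56, matching the %USED column
```

MAX AVAIL itself is derived by Ceph from the fullest OSD and the configured full ratio, so it is somewhat below the naive 222 GiB / 2 = 111 GiB.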