# This will use osd.5 as an example
# Forked from: https://gist.github.com/cheethoe/49d9c1d0003e44423e54a060e0b3fbf1
# ceph commands are expected to be run in the rook-toolbox
1) disk fails
2) remove the disk from the node
3) mark the osd out. `ceph osd out osd.5`
4) remove it from the crush map. `ceph osd crush remove osd.5`
5) delete its caps. `ceph auth del osd.5`
6) remove the osd. `ceph osd rm osd.5` (steps 4-6 can be combined on newer Ceph releases; see the note below)
7) delete the deployment. `kubectl delete deployment -n rook-ceph rook-ceph-osd-id-5`
8) delete the osd data dir on the node. `rm -rf /var/lib/rook/osd5`
9) edit the osd configmap. `kubectl edit configmap -n rook-ceph rook-ceph-osd-nodename-config`
9a) comment out the config section pertaining to your osd id and underlying device.
10) add the new disk and verify the node sees it.
11) restart the operator by deleting its pod. `kubectl -n rook-ceph delete pod -l app=rook-ceph-operator`
12) the osd prepare pods run
13) a new rook-ceph-osd-id-5 deployment will be created
14) check the health of your cluster. `ceph -s; ceph osd tree`
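Note: on Ceph Luminous and later, steps 4-6 collapse into a single command. A minimal sketch, run from the rook-toolbox like the other ceph commands here, assuming the same osd.5:

```sh
# Removes osd.5 from the crush map, deletes its cephx keys, and removes
# the osd entry in one step (equivalent to steps 4-6 above)
ceph osd purge osd.5 --yes-i-really-mean-it
```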
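For repeatability, the scriptable parts of the flow can also be wrapped in a small shell script. This is a minimal sketch under stated assumptions, not a drop-in tool: the osd id (5) is a placeholder, the `app=rook-ceph-tools` label assumes the standard toolbox deployment, and steps 8-10 (wiping the data dir, editing the per-node configmap, swapping the disk) stay manual:

```sh
#!/bin/sh
set -eu

OSD_ID=5   # assumption: the failed OSD's id

# Run ceph commands through the rook-toolbox pod (assumes the standard
# toolbox deployment labelled app=rook-ceph-tools)
TOOLBOX=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')
ceph() { kubectl -n rook-ceph exec "$TOOLBOX" -- ceph "$@"; }

# Steps 3-6: take the osd out and remove it from the cluster
ceph osd out "osd.${OSD_ID}"
ceph osd crush remove "osd.${OSD_ID}"
ceph auth del "osd.${OSD_ID}"
ceph osd rm "osd.${OSD_ID}"

# Step 7: delete the osd's deployment
kubectl delete deployment -n rook-ceph "rook-ceph-osd-id-${OSD_ID}"

# Steps 8-10 are manual: wipe /var/lib/rook/osd${OSD_ID} on the node,
# edit the per-node configmap to drop the old osd's entry, and swap the disk.

# Step 11: restart the operator so the osd prepare pods re-run
kubectl -n rook-ceph delete pod -l app=rook-ceph-operator

# Step 14: verify cluster health
ceph -s
ceph osd tree
```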