[root@dev-master ceph-cluster]# ceph osd tree
ID WEIGHT  TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.01740 root default
-4 0.00580     host osd2
 0 0.00580         osd.0    down        0          1.00000
-5 0.00580     host osd3
 1 0.00580         osd.1    down        0          1.00000
-6 0.00580     host osd1
 2 0.00580         osd.2    down        0          1.00000
 5       0 osd.5              up        0          1.00000
[root@dev-master ceph-cluster]# ceph osd out 5
osd.5 is already out.
[root@dev-master ceph-cluster]# ceph osd crush remove osd.5
device 'osd.5' does not appear in the crush map
[root@dev-master ceph-cluster]# ceph auth del osd.5
entity osd.5 does not exist
[root@dev-master ceph-cluster]# ceph osd rm 5
Error EBUSY: osd.5 is still up; must be down before removal.

But I could not find osd.5 on any host.
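One hint is already in the `ceph osd tree` output above: osd.5 is listed outside any host bucket and with a CRUSH weight of 0, i.e. it exists in the OSD map but not in the CRUSH map (which is why `crush remove` and `auth del` report nothing to do). A small sketch that picks out such zero-weight entries from a pasted sample of that output (the `tree` variable here just holds two lines copied from the question):

```shell
# Two sample lines from the `ceph osd tree` output in the question.
tree=' 2 0.00580         osd.2    down        0          1.00000
 5       0 osd.5              up        0          1.00000'
# Column 2 is the CRUSH weight; a stray OSD that is absent from the
# CRUSH map shows up with weight 0 and outside any host bucket.
stray=$(echo "$tree" | awk '$3 ~ /^osd\./ && $2 == 0 {print $3}')
echo "$stray"
```

Against the full output this prints only `osd.5`, confirming the entry is a leftover OSD-map record rather than a daemon running on some host.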

derobert
Inuyasha

3 Answers


You could try manually marking the OSD down; if an OSD process really is running somewhere, it will mark itself back up after a few seconds, which tells you where to look.

ceph osd down osd.5; ceph osd rm "$_"
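A note on the one-liner: in bash, `$_` expands to the last argument of the previous command, so the `rm` receives `osd.5` again without retyping it (`ceph osd rm` accepts either the numeric id or the `osd.N` form). A minimal demonstration of that expansion, with `echo` standing in for the ceph command:

```shell
# bash only: "$_" holds the last argument of the previous simple command.
echo osd.5 > /dev/null        # stand-in for: ceph osd down osd.5
target="$_"                   # captures "osd.5"
echo "would run: ceph osd rm $target"
```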

llua

Sometimes ceph osd purge works; it combines crush remove, auth del, and osd rm into a single step:

ceph osd purge osd.5 --yes-i-really-mean-it


After all, it is a service on the given node, so I would suggest stopping (and disabling) that service first:

systemctl stop ceph-osd@5
systemctl disable ceph-osd@5

Then go with

ceph osd out osd.5
ceph osd safe-to-destroy osd.5
ceph osd destroy osd.5 --yes-i-really-mean-it
ceph osd crush remove osd.5
ceph osd rm osd.5
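Put together, the sequence can be scripted. A sketch, assuming the stray OSD's id is 5; the hypothetical `run` wrapper only records and echoes each command here so the ordering is easy to audit, and you would replace its body with `"$@"` to actually execute against a cluster:

```shell
ID=5
plan=""
# Dry-run wrapper: record and print each step instead of executing it.
# To run for real, change the body to:  "$@"
run() {
  plan="${plan}+ $*"$'\n'
  echo "+ $*"
}
run systemctl stop "ceph-osd@${ID}"
run ceph osd out "osd.${ID}"
run ceph osd safe-to-destroy "osd.${ID}"
run ceph osd destroy "osd.${ID}" --yes-i-really-mean-it
run ceph osd crush remove "osd.${ID}"
run ceph osd rm "osd.${ID}"
```

Checking `safe-to-destroy` before `destroy` matters: it verifies no placement groups still depend on the OSD before its data and key are discarded.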
PoX