
I've installed Ceph into a single VM for testing purposes (I'm not testing ceph, per se, but I need a ceph endpoint for the tools I'll be running). The install was easy enough, but the cephfs filesystem I've tried to create (ceph fs volume create tank) is taking forever (where "forever" means "at least 20 minutes so far") to become available. Running ceph mds stat shows:

tank:1 {0=tank.ceph.znlhma=up:creating} 1 up:standby

Running ceph status shows:

  cluster:
    id:     0350c95c-e59a-11eb-be4b-52540085de8c
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 64 pgs inactive
            Degraded data redundancy: 64 pgs undersized
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph.storage (age 50m)
    mgr: ceph.storage.pealqx(active, since 50m)
    mds: 1/1 daemons up, 1 standby
    osd: 1 osds: 1 up (since 50m), 1 in (since 58m)

  data:
    volumes: 1/1 healthy
    pools:   2 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   6.1 MiB used, 100 GiB / 100 GiB avail
    pgs:     100.000% pgs not active
             64 undersized+peered

That's mostly expected, except for the "1 MDSs report slow metadata IOs", which I guess is symptomatic. There is a single OSD, on a 100GB virtual disk (/dev/vdb), which is backed by a btrfs filesystem on an NVME disk.

Is there anything I can do to make this faster to create? I don't need Ceph to be particularly performant, but I would like to be able to bring this test environment up in less time.

larsks

1 Answer


The answer is that ceph, by default, won't mark placement groups active when fewer replicas are available than the pool's min_size — and with a single OSD, the default replication settings can never be satisfied, so the PGs sit in undersized+peered and the MDS can't finish creating the filesystem. The solution is to set min_size to 1 for the corresponding pools, like this:

ceph osd pool set cephfs.tank.meta min_size 1
ceph osd pool set cephfs.tank.data min_size 1

(Replacing cephfs.tank.meta and cephfs.tank.data with whatever pool names are appropriate in your environment.)
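In case it helps, you can discover the pool names for a volume and then watch the PGs come active; a quick sketch, assuming the volume is named tank as in the question:

```shell
# Show which metadata/data pools back each CephFS volume
ceph fs ls

# After lowering min_size, the PGs should move from
# "undersized+peered" to "active+undersized" and the MDS
# should finish creating; watch progress with:
ceph -s
ceph pg stat
```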
