
Since btrfs doesn't track bad blocks, this btrfs mailing list post suggested, as a workaround, using an underlying mdadm RAID0 configuration for badblocks support.

Could LVM be used instead of mdadm for this purpose?

Tom Hale
  • If your bad blocks are not too spread over the whole disk you could simply create some bad1, bad2, ... logical volumes to avoid using them for the good LVs. But I would rather recommend to get a new HD. – rudimeier Apr 30 '17 at 11:19

1 Answer


In general, as has been mentioned in a comment here and in the mailing list thread you linked to, modern hard drives which are so far gone they’ve got unreplaceable bad blocks should just be discarded. (You’ve explained why you’re interested in this, but it’s worth noting for other readers.)

I don’t think there’s anything in LVM to avoid bad blocks as such; typically you’d address that below LVM, at the device layer. One way of dealing with the problem is to use device mapper: create a table giving the sector mapping required to skip all bad blocks, and build a device using that. Such a table would look something like

0 98 linear /dev/sda 0
98 98 linear /dev/sda 99

etc. (this creates a 196-sector device, using /dev/sda but skipping sector 98). Save the table to a file, mytable say, and give it to dmsetup (note that --table takes a literal table string, so a multi-line table has to be read from a file):

dmsetup create nobbsda mytable

and then create a PV on the resulting /dev/nobbsda device (instead of /dev/sda).

Using this method, with a little forward planning, you can even handle sectors that fail in the future, in the same way a drive’s firmware does: leave some sectors at the end of the drive free (or even dotted around the drive, if you want to spread the risk), and use them to fill the holes left by failing sectors. Continuing the example above, if we treat sectors from 200 onwards as spares, and sector 57 goes bad:

0 57 linear /dev/sda 0
57 1 linear /dev/sda 200
58 40 linear /dev/sda 58
98 98 linear /dev/sda 99
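
Before loading a table like this, it’s worth sanity-checking that the logical (first-column) offsets are contiguous — each segment must start exactly where the previous one ended. This is my own sketch, not part of the original answer:

```shell
# Sketch: verify a dm-linear table has no gaps or overlaps in its
# logical offsets; prints the total mapped size if the table is sound.
check_table() {
  awk '
    $1 != expect { printf "gap/overlap at line %d\n", NR; bad = 1; exit 1 }
    { expect = $1 + $2 }          # the next segment must start here
    END { if (!bad) printf "OK: %d logical sectors\n", expect }'
}

# The remapped example table from above:
check_table <<'EOF'
0 57 linear /dev/sda 0
57 1 linear /dev/sda 200
58 40 linear /dev/sda 58
98 98 linear /dev/sda 99
EOF
# → OK: 196 logical sectors
```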

Creating a device-mapper table using a list of bad sectors as given by badblocks is left as an exercise for the reader.
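
As a sketch of that exercise (not from the original answer — the device name, the total sector count, and the assumption that badblocks was run with -b set to the device’s logical sector size, so it prints one sorted bad sector number per line, are all mine):

```shell
# Sketch: convert sorted bad-sector numbers (one per line, as printed
# by e.g. `badblocks -b 512` on a 512-byte-sector device) into a
# dm-linear table that skips each bad sector.
#   $1 = underlying device, $2 = total sector count of that device
badblocks_to_table() {
  awk -v disk="$1" -v total="$2" '
    $1 > start {                  # emit the good run before this bad sector
      printf "%d %d linear %s %d\n", out, $1 - start, disk, start
    }
    {
      out += $1 - start           # logical offset grows by the run length
      start = $1 + 1              # resume just past the bad sector
    }
    END {                         # trailing good run after the last bad sector
      if (total > start)
        printf "%d %d linear %s %d\n", out, total - start, disk, start
    }'
}

# Reproduces the 196-sector example above (197 physical sectors,
# sector 98 bad):
printf '98\n' | badblocks_to_table /dev/sda 197
# → 0 98 linear /dev/sda 0
# → 98 98 linear /dev/sda 99
```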

Another solution, which would work with an existing LVM setup, would be to use pvmove’s ability to move physical extents in order to move LVs out of bad areas. But that wouldn’t prevent those areas from being re-used whenever a new LV is created or an existing LV is resized or moved.
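
As an illustration of what that could look like (the PV names and extent numbers here are hypothetical, and these commands need root on a real LVM setup, so treat this as a sketch rather than a recipe):

```shell
# Hypothetical PV names and extent numbers; requires root.
# First see which physical extents each LV occupies on the PV:
pvs --segments -o +lv_name,seg_start_pe,seg_size_pe /dev/sda1

# Then move the extents assumed to cover the bad area (here PEs
# 1000-1999 of /dev/sda1) onto another PV in the same volume group:
pvmove /dev/sda1:1000-1999 /dev/sdb1
```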

Stephen Kitt
  • Doesn't the SMART-capable HDD firmware automatically reallocate bad sectors so that the OS can't use them? – Mike Waters Apr 30 '17 at 15:55
  • Take a look at this information, from a person who used to code HD firmware: https://serverfault.com/a/431051/232815 – Mike Waters Apr 30 '17 at 16:10
  • 1
    @Mike yes, modern (and even quite old) drives will re-allocate (at least, if you write over a sector which has been flagged as bad by the firmware during a previous read), but eventually drives run out of spare sectors and you then need to skip bad blocks. I generally replace drives when they start re-allocating, so I never get that far. This answer just gives a way of handling those bad blocks, it doesn’t mean I recommend it! – Stephen Kitt Apr 30 '17 at 17:17
  • +1 Thanks. I've always left a few meg unallocated at the ends of partition for that, especially on external drives I just use for redundant backups. – Mike Waters Apr 30 '17 at 19:49
  • @Mike I’m not sure that helps actually, drives have a pool of spare sectors which isn’t accessible to the operating system — sectors left unallocated in the partition table aren’t used as replacement sectors for bad sectors (AFAIK). – Stephen Kitt Apr 30 '17 at 20:19
  • Cheers. The problem I see with using `dmsetup` is that any new failures may not be able to be "removed" without screwing up the offsets in the underlying device. I'll look into mdadm raid0 which supports badblocks... – Tom Hale May 02 '17 at 03:49
  • 2
    @Tom you could also keep a bunch of unallocated sectors, and use them to remap sectors manually using `dmsetup` (which wouldn’t change the offsets of other sectors). – Stephen Kitt May 02 '17 at 07:54