
I have two mdadm arrays. A long time ago, when there was only one RAID, I wrote mdadm.conf and it looked like this:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=55dc183e:d7199ced:929f5f4a:123c24a3

Since the second RAID was missing from the file, I thought appending it would be a good idea, so I ran mdadm --detail --scan >> /etc/mdadm.conf

But now there are two entries for the first RAID:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=55dc183e:d7199ced:929f5f4a:123c24a3
ARRAY /dev/md/pv00 metadata=1.2 name=server.local:pv00 UUID=55dc183e:d7199ced:929f5f4a:123c24a3
ARRAY /dev/md/25 metadata=1.2 spares=2 name=server.local:25 UUID=a883dfb5:1a8f32ce:fd20e5d8:156a01ff
  • First: Should I remove the old entry and keep only the new one? Which one is better?
  • Second: Why do the old and new entries differ? The old one has level=raid1 num-devices=2, but the new one has only metadata=1.2 instead.
    Edit: Partially found the answer.
  • Third: I found information that the RAID will not start without this file. However, the fstab entry mounting this array seems to start it automatically. So is it needed or not?

I also found some people suggesting to update mdadm.conf with mdadm --verbose --detail --scan > /etc/mdadm.conf. Is that proper? It also outputs drive locations like:

ARRAY /dev/md/pv00 level=raid1 num-devices=2 metadata=1.2 name=server.local:pv00 UUID=55dc183e:d7199ced:929f5f4a:123c24a3
   devices=/dev/sdi2,/dev/sdj1
ARRAY /dev/md/25 level=raid6 num-devices=6 metadata=1.2 spares=2 name=server.local:25 UUID=a883dfb5:1a8f32ce:fd20e5d8:156a01ff
   devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sdf1,/dev/sdg1

Is that proper syntax? As far as I know, the /dev/sd* drive names may change, so is it safe to add devices= lines? I recently had to replace SATA cables on the system, and the letters changed because I did not pay attention to connecting the drives to the same ports.

Gacek

2 Answers


The mdadm man page puts it this way (emphasis mine):

echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
mdadm --detail --scan >> mdadm.conf

This will create a prototype config file that describes currently active arrays that are known to be made from partitions of IDE or SCSI drives. This file should be reviewed before being used as it may contain unwanted detail.

Now, to your question,

As far, as I know. Drive /dev/sd* may change. So is it safe to add devices to it?

It's not just that device names may change. Everything else may change, too!

mdadm supports growing arrays, so you can add more devices (which changes num-devices=2) or change the RAID level (level=raid1). Drives might fail, causing spares to take over automatically, which changes spares=2 since fewer spares remain available to the array. Even name= is not protected, as there are various issues around the way mdadm treats host and array names. The metadata version has changed in the past: if you used metadata=0.90, mdadm could have updated it to metadata=1.0, and if a new metadata format appears in the future, it might be updated again.

The one thing that is constant for an array throughout its lifetime is the UUID=a883dfb5:1a8f32ce:fd20e5d8:156a01ff, hence my recommendation in the question you linked:

For each array, use just the UUID, nothing else.

The only purpose for all these variables is to identify the correct array, and the UUID does that perfectly fine by itself, nothing else needed. So just remove the other stuff.
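Following that recommendation, a trimmed-down mdadm.conf for the two arrays in the question (keeping only the device name and the UUID, with the UUIDs taken from the scan output above) could look like this:

```
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 UUID=55dc183e:d7199ced:929f5f4a:123c24a3
ARRAY /dev/md/25 UUID=a883dfb5:1a8f32ce:fd20e5d8:156a01ff
```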

Of course, if you really want, you can also change the UUID. But it's a much more deliberate action than the other changes that occur during normal operation of the array.

mdadm --detail --scan is just a starting point but should not be used literally as mdadm.conf. Like the manpage states, it's just too detailed, and too much detail can cause the assembly to fail.
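One way to cut the scan output down to UUID-only lines is to strip the extra key=value fields with sed. This is just a sketch (the field list is an assumption based on the output shown above), and the result should still be reviewed before it goes into /etc/mdadm.conf:

```shell
# Strip everything except the device path and the UUID from a scan line.
# On a live system the pipeline would be:
#   mdadm --detail --scan | sed -E 's/ (level|num-devices|metadata|spares|name)=[^ ]+//g'
echo 'ARRAY /dev/md/pv00 level=raid1 num-devices=2 metadata=1.2 name=server.local:pv00 UUID=55dc183e:d7199ced:929f5f4a:123c24a3' \
  | sed -E 's/ (level|num-devices|metadata|spares|name)=[^ ]+//g'
# -> ARRAY /dev/md/pv00 UUID=55dc183e:d7199ced:929f5f4a:123c24a3
```

Note that the indented devices= continuation lines produced by --verbose would need removing separately (for example with grep -v 'devices=').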

frostschutz

You may want to update your boot settings after editing the mdadm.conf file, and then reboot to verify:

$ sudo vi /etc/mdadm/mdadm.conf

$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.19.0-35-generic
I: The initramfs will attempt to resume from /dev/sdg5
I: (UUID=9fdd3772-4599-40b0-89e3-ec79fd4787be)
I: Set the RESUME variable to override this.

$ sudo reboot
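After the reboot, you can confirm that both arrays were assembled by inspecting /proc/mdstat. As a sketch, here is how to pull the assembled device names out of that file with awk; the sample content below is illustrative, loosely based on the arrays in the question:

```shell
# Sample /proc/mdstat content (illustrative only); on a live system run:
#   awk -F' : ' '/^md/ {print $1}' /proc/mdstat
mdstat='Personalities : [raid1] [raid6]
md25 : active raid6 sdg1[5] sdf1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
md127 : active raid1 sdj1[1] sdi2[0]'
printf '%s\n' "$mdstat" | awk -F' : ' '/^md/ {print $1}'
# -> md25
#    md127
```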

Please read: https://raid.wiki.kernel.org/index.php/Tweaking,_tuning_and_troubleshooting

wryan