
PROBLEM

I create a RAID 1 configuration, I name it /dev/md1, but when I reboot, the name always changes to /dev/md127

Adrián Jaramillo

4 Answers


SOLUTION

I couldn't find a fix for an already created RAID 1 configuration, so back up your data: for this solution you'll need to delete your RAID 1 first. Actually, I just deleted the virtual machine I was working with and created a new one.
This procedure is for Debian 10, starting from a clean machine.

Create a new clean raid1 configuration

In my case I have 3 virtual disks, so I run the command like this (remember that you first need to create partitions of the same size and of type Linux raid autodetect):

sudo mdadm --create /dev/md1 --level=mirror --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
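The partitioning prerequisite can be scripted with sfdisk. This is only a minimal sketch under the assumptions of this example (three empty virtual disks, DOS labels; the file name raid-part.sfdisk is my own choice): apply it to each disk with sudo sfdisk /dev/sdb < raid-part.sfdisk, and likewise for sdc and sdd.

```shell
# Hypothetical sfdisk script: a DOS label with a single full-size partition
# of type fd ("Linux raid autodetect"), matching the partitions used above.
cat > raid-part.sfdisk <<'EOF'
label: dos
type=fd
EOF
cat raid-part.sfdisk
```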

Edit mdadm.conf

Go to the file /etc/mdadm/mdadm.conf, delete all content, and replace it with this instead:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

Add a reference to your array inside the previous file

Log in as root and run the following (a root shell is needed because sudo would cover only the mdadm command, not the >> redirection):

sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Now the contents of this file are

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md1 metadata=1.2 name=buster:1 UUID=1279dbd2:d0acbb4f:0b34e3e1:3de1b3af

ARRAY /dev/md1 metadata=1.2 name=buster:1 UUID=1279dbd2:d0acbb4f:0b34e3e1:3de1b3af (this is the new line that was added, referencing the array)

If the command has added something before the ARRAY line, delete it.

Just in case

Run sudo update-initramfs -u

Permanently mount a partition of your raid

Mounting it is optional, but you'll probably want to use the storage of your RAID 1.

  1. Get the UUID of your partition with sudo blkid
  2. Edit /etc/fstab and add this new line:

     UUID=d367f4ed-2b37-4967-971a-13d9129fff4f /home/vagrant/raid1 ext3 defaults 0 2

     Replace the UUID with the one you got for your partition, and the filesystem type (ext3 here) with the one your partition uses.

The contents of my /etc/fstab now are

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/vda1 during installation
UUID=b9ffc3d1-86b2-4a2c-a8be-f2b2f4aa4cb5 /               ext4    errors=remount-ro 0       1
# swap was on /dev/vda5 during installation
UUID=f8f6d279-1b63-4310-a668-cb468c9091d8 none            swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
UUID=d367f4ed-2b37-4967-971a-13d9129fff4f /home/vagrant/raid1 ext3 defaults  0      2

UUID=d367f4ed-2b37-4967-971a-13d9129fff4f /home/vagrant/raid1 ext3 defaults 0 2 (here you can clearly see the line I added)
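As an aside, the line can also be appended without opening an editor. This is a sketch against a scratch file, not the real /etc/fstab (on the real system you would target /etc/fstab through sudo tee -a); the UUID and mount point are the example values from above.

```shell
# Scratch file standing in for /etc/fstab in this sketch.
fstab=./fstab.example
printf '%s\n' 'UUID=d367f4ed-2b37-4967-971a-13d9129fff4f /home/vagrant/raid1 ext3 defaults 0 2' >> "$fstab"
tail -n 1 "$fstab"   # show the entry that was just appended
```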

NOW YOU CAN REBOOT

The name is not going to change any more.
If I run sudo fdisk -l I get this (showing just the relevant information):

Disk /dev/md1: 1022 MiB, 1071644672 bytes, 2093056 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x37b2765e

Device     Boot Start     End Sectors  Size Id Type
/dev/md1p1       2048 2093055 2091008 1021M 83 Linux

If I run df -Th I get

Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  227M     0  227M   0% /dev
tmpfs          tmpfs      49M  3.4M   46M   7% /run
/dev/sda1      ext4       19G  4.1G   14G  24% /
tmpfs          tmpfs     242M     0  242M   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     242M     0  242M   0% /sys/fs/cgroup
/dev/md1p1     ext3      989M  1.3M  937M   1% /home/vagrant/raid1
tmpfs          tmpfs      49M     0   49M   0% /run/user/1000

You can see that it is also mounted. And finally, if I run cat /proc/mdstat, I get

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [3/3] [UUU]

unused devices: <none>

The RAID 1 is working, with sdb1, sdc1 and sdd1.
Now the setup is COMPLETE! You can reboot and your array name will always remain /dev/md1.

All the sources I used to find the solution that worked for me:

https://superuser.com/questions/287462/how-can-i-make-mdadm-auto-assemble-raid-after-each-boot
https://ubuntuforums.org/showthread.php?t=2265120
https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array
https://serverfault.com/questions/267480/how-do-i-rename-an-mdadm-raid-array
https://bugzilla.redhat.com/show_bug.cgi?id=606481

Some are more relevant for this solution than others, but ALL OF THEM helped me reach it.
That was a lot of reading, wasn't it? Now you can relax if your problem was solved. I hope this helped you out! See you!

Adrián Jaramillo

For Debian 11 systems, all that should be required is this:

  1. mdadm --detail --scan /dev/md127 >> /etc/mdadm/mdadm.conf

  2. vim /etc/mdadm/mdadm.conf, edit the appended line to look like this:

    ARRAY /dev/md0 metadata=1.2 UUID=XXXXXXXX:XXXXXXXX:XXXXXXXX:XXXXXXXX

In other words, remove the name part, and set the device to /dev/md0.

  3. update-initramfs -u

  4. Reboot
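The edit in step 2 can also be done non-interactively. This is a sketch of the transformation only, using the sample ARRAY line from the Debian 10 answer above as input (the UUID is that answer's placeholder, not yours):

```shell
# Line as mdadm --detail --scan might have appended it (illustrative values).
line='ARRAY /dev/md127 metadata=1.2 name=buster:1 UUID=1279dbd2:d0acbb4f:0b34e3e1:3de1b3af'
# Drop the name= field and rename the device node, as step 2 describes.
echo "$line" | sed -e 's/ name=[^ ]*//' -e 's|/dev/md127|/dev/md0|'
```

This prints the line with only the device, metadata and UUID fields kept: ARRAY /dev/md0 metadata=1.2 UUID=1279dbd2:d0acbb4f:0b34e3e1:3de1b3af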

user3728501

This can happen if mdadm detects the array is a "foreign" array, rather than a "local" one. Foreign arrays will be assigned device nodes starting from md127 (and working down). Local arrays are assigned device nodes starting from md0 (and working up).

One way mdadm determines whether an array is local or foreign is by comparing the "homehost" name recorded on the array with the current hostname. I've found that it's not uncommon, during boot, for the hostname to not yet be properly configured by the time mdadm runs (because the init system hasn't yet gotten to the init script that sets the hostname based on the contents of /etc/hostname). So when mdadm queries for the hostname, it gets "localhost" or "(none)" or whatever default hostname was compiled into the kernel. That default name doesn't match the "homehost" recorded on the array, so mdadm considers it a foreign array.

This can be fixed by ensuring that the machine's hostname gets set before mdadm is run to assemble the array.
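If rearranging the boot order isn't practical, mdadm.conf(5) also documents special HOMEHOST values. As a workaround sketch (not part of the fix described above), telling mdadm to skip the homehost comparison entirely avoids the foreign-array classification:

```
# /etc/mdadm/mdadm.conf -- disable the homehost check during auto-assembly
HOMEHOST <ignore>
```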

Dan Moulding
  • How may I ensure that the hostname is set before mdadm assembles the array? – jay.lee Jul 21 '22 at 19:12
  • @jay.lee Unfortunately, there is no single way to do that, which will work on all systems. It depends on the distro and/or init system (and their versions), as well as whether an initramfs is in use or not, and if so, whether mdadm does the assembly from within the initramfs. A new "hostname" kernel parameter has been introduced in Linux to make it possible to do this in a consistent way, but it probably won't be available until Linux kernel version 5.20 (which won't be released until later this year). – Dan Moulding Jul 22 '22 at 23:16
  • Rats. Thanks for the info. Guess I'm stuck using a custom HOMEHOST for the time being – jay.lee Jul 24 '22 at 00:17

I removed the ARRAY lines in /etc/mdadm/mdadm.conf and ran the following command:

sudo update-initramfs -u

then rebooted the system. Then I added the ARRAY lines back with

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

opened /etc/mdadm/mdadm.conf, and changed the entries to

ARRAY /dev/md/0 metadata=1.2 name=raspberrypi-nas:0 UUID=86275e90:a19b3601:fc78b0d8:57f9c56a
ARRAY /dev/md/1 metadata=1.2 name=raspberrypi:1 UUID=e8f0c48c:448321f6:1db0f830:ea39bc42

just as I wanted, then ran

sudo update-initramfs -u

and rebooted the system. Now everything works as expected.

H.J. Vos