
I have a Linux NAS that I've been using for a few years. I originally created the mdadm array on a Debian OS, using the command-line interface.

At the time, the OS was installed on a 16 GB SanDisk USB 3.0 stick, and the array was set up as RAID 6 with six identical, brand-new WD Red 3 TB disks. Later, I upgraded the OS drive to a Transcend 120 GB NVMe SSD and replaced the OS with OpenMediaVault (OMV). It ran fine for over a year. Recently, however, the NVMe SSD failed and is no longer writable.

So I replaced it with a Samsung NVMe SSD and installed OMV from scratch. OMV detected the array as clean but degraded (5 disks; 1 was not added to the array). While trying to fix this, I mistakenly DELETED the mdadm array from the OpenMediaVault UI, and now it seems the superblocks of all 5 disks have been wiped clean. :-(

I had taken multiple backups of the mdadm.conf file, but I am no longer sure where I stored them. The attached screenshots show the superblock of the 6th disk, which was not wiped because OMV's mdadm did not detect it; it appears to contain the details of the original array.
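For reference, the surviving superblock can be dumped with `mdadm --examine`; its output supplies the parameters any later re-create attempt must match. The device name below is an assumption (substitute the disk OMV did not touch):

```shell
# Read the intact superblock on the surviving member (device name is a
# guess -- use the disk that OMV's mdadm never touched):
mdadm --examine /dev/sdf1

# Fields worth writing down for any later --create attempt:
#   Raid Level, Raid Devices, Chunk Size, Data Offset,
#   Device Role, Array UUID, Update Time
```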

While the array mostly contains Linux ISOs, music, and movies, there is some invaluable data on those disks, such as family photos. Is there any way I can reassemble and mount the array without losing the data?

I can confirm that the disks were all in healthy condition, and I have not formatted them since creating the array. However, while dust-cleaning the system, I may have changed the order of the disks on the SATA connections. I also do not remember whether I created the array with /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 or with the whole disks /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/se /dev/sdf.
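One thing that survives a SATA cable reshuffle is each drive's serial number, so the current /dev/sdX names can at least be mapped to physical disks before experimenting. A sketch (`smartctl` is from the smartmontools package):

```shell
# Map current kernel device names to persistent drive identities;
# entries without "part" refer to whole disks:
ls -l /dev/disk/by-id/ | grep -v part

# Or query each disk's serial number directly:
for d in /dev/sd[a-f]; do
    echo -n "$d: "
    smartctl -i "$d" | grep 'Serial Number'
done
```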

I read somewhere that, as long as I have not manually formatted those disks, I can re-create the array (with --assume-clean) and then mount and use it as before. Is that true? Should I try these commands?

mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

mdadm -v --create /dev/md0 --assume-clean --level=raid6 --chunk 256 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

Is this safe and non-destructive?
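For comparison, here is a sketch of what a more cautious re-create attempt might look like, based on the parameters the surviving superblock reports (chunk size 256K) and leaving the stale 6th member out as `missing`. The device order here is a guess, not a known-good value:

```shell
# Do NOT zero any superblocks first -- the 6th disk holds the only
# surviving record of the array's parameters.
#
# Re-create in place with --assume-clean so no resync is started, and
# substitute "missing" for the stale member so mdadm cannot rebuild
# over good data. If the guessed device order is wrong, the data will
# simply be unreadable and the attempt can be repeated in another order.
mdadm -v --create /dev/md0 --assume-clean \
      --level=6 --chunk=256 --raid-devices=6 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 missing

# Match the Data Offset from the surviving superblock if your mdadm
# version defaults to a different one (see --data-offset in man mdadm).

# Verify read-only before trusting anything:
mdadm --detail /dev/md0
fsck.ext4 -n /dev/md0    # assuming an ext4 filesystem; -n makes no changes
```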

[Screenshots: mdadm --examine output showing the 6th disk's intact superblock]

raj47i
  • Usually after a problem like this, a raw data backup should be made to avoid worsening the situation. I understand that this would probably require a new NAS array, so it depends on how important the data are to not risk losing them. – A.B Aug 26 '21 at 10:31
  • Yes, I am already setting up a different NAS (Synology) for the raw disk backups, and I will take a dd image of all disks before trying anything more. While I have read the mdadm documentation, I do not deal with mdadm on a daily basis, which is why I want an expert opinion before I try anything. – raj47i Aug 26 '21 at 18:14
  • Some hints on mdadm --create https://unix.stackexchange.com/a/131927/30851 -- in particular, your chunk size is not standard, so you'll have to specify it. That's assuming those values are correct at all: according to `Update Time`, that drive has not been part of your array since February 2021. So unless this machine wasn't powered on for a while, set this drive as `missing` when re-creating the RAID. Good luck with the recovery. – frostschutz Aug 28 '21 at 22:26
  • Those values are correct, without a doubt; the machine was offline for that long. The link you shared is very helpful. My main remaining doubt is whether the order of the disks is important when using --create for recovery. Is it? – raj47i Aug 30 '21 at 17:33

0 Answers