
I moved two drives that were mounted as a RAID 1 to a new system, to be mounted there. One drive was detected instantly; the other had issues with its superblock. After reading Kevin Deldycke's blog, I decided to run mdadm --zero-superblock /dev/sdb and .../sde on both drives. I could then reassemble the two drives with mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 /dev/sde /dev/sdb, as shown by cat /proc/mdstat:

$ cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sde[0] sdb[1]
      3906886464 blocks super 1.2 [2/2] [UU]
      [============>........]  resync = 62.7% (2453058112/3906886464) finish=198.9min speed=121761K/sec
      bitmap: 12/30 pages [48KB], 65536KB chunk

unused devices: <none>
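As a sanity check on that mdstat line (all numbers taken from the output above): the remaining 1K blocks divided by the reported resync speed should reproduce the finish estimate. A small arithmetic sketch:

```shell
# Figures copied from the /proc/mdstat output above.
total=3906886464        # total blocks (1K each)
synced=2453058112       # blocks already resynced (62.7%)
speed=121761            # reported speed in K/sec
remaining=$((total - synced))
echo $((remaining / speed / 60))   # minutes left; prints 199, matching finish=198.9min
```

The estimate checks out, which also confirms the resync really was rewriting roughly 1.4 TB of one drive from the other.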

But now I cannot mount /dev/md0:

$ sudo mount /dev/md0 /mnt/nas/
mount: /mnt/nas: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.

I have tried:

sudo fsck -CV /dev/md0
sudo e2fsck -b 8193 /dev/md0

the latter with different superblock locations, to no avail.
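A note on those e2fsck attempts: -b 8193 is the backup superblock location for a 1K block size, but a filesystem of roughly 4 TB almost certainly used 4K blocks. A hedged sketch (assuming ext2/3/4 with the sparse_super feature, which is the default) of where the backups would then sit:

```shell
# With sparse_super, backup superblocks live in block groups 1 and powers of
# 3, 5 and 7. At a 4K block size each group holds 4096 * 8 = 32768 blocks,
# so the candidate backup locations are:
blocks_per_group=32768
for group in 1 3 5 7 9 25 27 49; do
  echo $((group * blocks_per_group))
done
# e.g. first candidates: 32768, 98304, 163840, ...
# then try each one with the matching block size:
#   sudo e2fsck -b 32768 -B 4096 /dev/md0
# (mke2fs -n /dev/md0 would also list the locations without writing anything)
```

That said, per the comments below, running anything that writes to the array at this point is risky; work on a copy or an overlay first.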

Is there a way to remount the drives or is my data lost? Thanks for your help.

My system: Fedora 36, kernel 5.19.12-200.fc36.x86_64

Friedi
  • It's likely your data was overwritten by the resync. It was also entirely the wrong solution for your problem. Now you may try scanning md0 for filesystem remnants: https://www.cgsecurity.org/wiki/TestDisk_Step_By_Step – gapsf Oct 05 '22 at 17:04
  • 1
    The blog you referenced also said zeroing the superblocks "was the stupidest idea of the week". – doneal24 Oct 05 '22 at 17:19
  • For the future: (re)creating something on top of an old one destroys the old one in most cases (100% of the time for the metadata). – gapsf Oct 05 '22 at 17:31
  • In your case you should have just removed the "failed" drive from the RAID array and re-added it (or a new one), so it would have resynced from the healthy drive. – gapsf Oct 05 '22 at 17:36
  • Also read https://raid.wiki.kernel.org/index.php/Initial_Array_Creation – gapsf Oct 05 '22 at 17:44
  • I summarized generic mdadm create advice over here: https://unix.stackexchange.com/a/131927/30851 - maybe if you're lucky and it's just a wrong data offset, it can be recoverable... if it was not corrupt on the side you synced over with that create. You definitely shouldn't run fsck either (don't do anything that writes at this point; use a copy, or an overlay). In the case of RAID 1 you can also try your luck with regular testdisk & photorec. – frostschutz Oct 05 '22 at 17:50
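For reference, the safer procedure gapsf describes in the comments (fail and remove only the bad member, then re-add it so mdadm resyncs it from the healthy drive) would look roughly like this. This is a dry-run sketch, not what was actually run; the device names are assumptions:

```shell
# Dry-run wrapper: prints each command instead of executing it.
# To run for real, change the body to:  sudo "$@"
run() { echo "$@"; }

run mdadm --manage /dev/md0 --fail   /dev/sdb   # mark the problem member failed
run mdadm --manage /dev/md0 --remove /dev/sdb   # drop it from the array
run mdadm --manage /dev/md0 --add    /dev/sdb   # re-add; resyncs from /dev/sde
```

Unlike --zero-superblock followed by --create, this never touches the metadata or data on the healthy member.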

0 Answers