
At first there was a RAID0 array, to which I added a new disk by issuing

mdadm --add 

but as soon as I did, the new drive started to fail because of power issues, and the array has not come up since, right after mdadm had started converting the RAID0 into a RAID4-type array for the reshape.

I've tried to re-create the array, but failed, because it now sees partitions that were never created before. There was only one 2TB partition on the /dev/md/0 array, built from two 1TB drives. Here's the current lsblk output:

sdb         8:16   0 931,5G  0 disk  
└─md0       9:0    0 931,4G  0 raid4 
  ├─md0p1 259:0    0  27,3G  0 md    
  └─md0p2 259:1    0   421G  0 md    
sdc         8:32   0 931,5G  0 disk  
└─md0       9:0    0 931,4G  0 raid4 
  ├─md0p1 259:0    0  27,3G  0 md    
  └─md0p2 259:1    0   421G  0 md  

Here's also the --examine output (taken just after the array failed, but before I tried to re-create it):

muszy@nas:~$ sudo mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x44
     Array UUID : 05b3504e:d335720b:a0d9c0ee:dd2e7a8f
           Name : nas:0  (local to host nas)
  Creation Time : Wed Oct  2 12:49:47 2019
     Raid Level : raid4
   Raid Devices : 4

 Avail Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
     Array Size : 2929890816 (2794.16 GiB 3000.21 GB)
    Data Offset : 264192 sectors
     New Offset : 261120 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : f8ce2e05:671f6377:767d7d05:4a0f7727

  Reshape pos'n : 19646976 (18.74 GiB 20.12 GB)
  Delta Devices : 1 (3->4)

    Update Time : Fri Oct 11 12:28:20 2019
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : 509e6f70 - correct
         Events : 43

     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
muszy@nas:~$ sudo mdadm --examine /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x46
     Array UUID : 05b3504e:d335720b:a0d9c0ee:dd2e7a8f
           Name : nas:0  (local to host nas)
  Creation Time : Wed Oct  2 12:49:47 2019
     Raid Level : raid4
   Raid Devices : 4

 Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
     Array Size : 2929890816 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
    Data Offset : 264192 sectors
     New Offset : 261120 sectors
   Super Offset : 8 sectors
Recovery Offset : 13097984 sectors
          State : clean
    Device UUID : 3fc0cbbf:068b4f7e:7359304e:b26ca865

  Reshape pos'n : 19646976 (18.74 GiB 20.12 GB)
  Delta Devices : 1 (3->4)

    Update Time : Fri Oct 11 12:28:09 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : 99d7e462 - correct
         Events : 41

     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
muszy@nas:~$ sudo mdadm --examine /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x44
     Array UUID : 05b3504e:d335720b:a0d9c0ee:dd2e7a8f
           Name : nas:0  (local to host nas)
  Creation Time : Wed Oct  2 12:49:47 2019
     Raid Level : raid4
   Raid Devices : 4

 Avail Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
     Array Size : 2929890816 (2794.16 GiB 3000.21 GB)
    Data Offset : 264192 sectors
     New Offset : 261120 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 9e985730:eddba18f:6f636c8a:79ecdfc2

  Reshape pos'n : 19646976 (18.74 GiB 20.12 GB)
  Delta Devices : 1 (3->4)

    Update Time : Fri Oct 11 12:28:20 2019
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : b45db8c4 - correct
         Events : 43

     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
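
As a consistency check on the --examine output above (a sketch, not part of the original recovery): mdadm's v1.2 superblock reports Array Size in KiB and device sizes in 512-byte sectors, so the numbers do line up with a 4-device RAID4 holding data on 3 devices:

```shell
# Units assumed from mdadm's v1.2 superblock output above.
DEV_SECTORS=1953260544          # Used Dev Size, in 512-byte sectors
DEV_KIB=$((DEV_SECTORS / 2))    # per-device data capacity in KiB
echo $((DEV_KIB * 3))           # 3 data devices in a 4-disk RAID4
# prints 2929890816, matching "Array Size : 2929890816"
```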

Is there any chance I could recover the array with the information I've provided?

Thanks to frostschutz's help, I've created the overlays and managed to start two arrays:

Array 1 (The old one "RAID0"):

parallel 'test -e /dev/loop{#} || mknod -m 660 /dev/looparray1{#} b 7 {#}' ::: /dev/sda /dev/sdb
parallel truncate -s1000G array1-overlay-{/} ::: /dev/sda /dev/sdb
parallel 'size=$(blockdev --getsize {}); loop=$(losetup -f --show -- array1-overlay-{/}); echo 0 $size snapshot {} $loop P 8 | dmsetup create array1{/}' ::: /dev/sda /dev/sdb
mdadm --create /dev/md/array1 --assume-clean --level=0 --chunk=512K --data-offset=264192s --raid-devices=2 /dev/mapper/array1sda /dev/mapper/array1sdb

Linux sees a "2TB unknown" partition

Array 2 (The new one "RAID4"):

parallel 'test -e /dev/loop{#} || mknod -m 660 /dev/looparray2{#} b 7 {#}' ::: /dev/sda /dev/sdb /dev/sdd
parallel truncate -s1000G array2-overlay-{/} ::: /dev/sda /dev/sdb /dev/sdd
parallel 'size=$(blockdev --getsize {}); loop=$(losetup -f --show -- array2-overlay-{/}); echo 0 $size snapshot {} $loop P 8 | dmsetup create array2{/}' ::: /dev/sda /dev/sdb /dev/sdd
mdadm --create /dev/md/array2 --assume-clean --level=0 --chunk=512K --data-offset=261120s --raid-devices=3 /dev/mapper/array2sda /dev/mapper/array2sdb /dev/mapper/array2sdd

Linux sees an EXT4 partition with a size of 3TB, but it's not mountable; TestDisk sees the directory structure (marked in red).

When I try to join them together:

blockdev --getsize /dev/md/array1
3906521088

blockdev --getsize /dev/md/array2
5859790848

echo -e '0 3906521088 linear /dev/md/array1 0\n3906521088 1953269760 linear /dev/md/array2 3906521088' | dmsetup create rec-array

it creates a 3TB drive with nothing useful on it.
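
For reference, the stitch frostschutz describes in the comments is anchored at the reshape position, not at an array boundary: the re-created new-layout array supplies sectors [0, reshape pos'n) and the re-created old-layout array supplies everything after it. A hedged sketch, assuming the Reshape pos'n of 19646976 is in KiB of array address space (so 39293952 sectors) and reusing the array1/array2 overlay devices from above; verify the offset against the actual data before trusting it:

```shell
# Hypothetical sketch -- /dev/md/array1 (old layout) and /dev/md/array2
# (new layout) are the overlay-backed re-creates from above. The numbers
# are taken from this question's output; check them on the real devices.
RESHAPE_KIB=19646976                  # "Reshape pos'n" from --examine
RESHAPE_SECTORS=$((RESHAPE_KIB * 2))  # assuming KiB -> 512-byte sectors
OLD_SIZE=3906521088                   # blockdev --getsize /dev/md/array1
# New-layout array covers the already-reshaped start, old-layout array
# covers everything after the reshape position:
printf '0 %s linear /dev/md/array2 0\n' "$RESHAPE_SECTORS"
printf '%s %s linear /dev/md/array1 %s\n' \
    "$RESHAPE_SECTORS" "$((OLD_SIZE - RESHAPE_SECTORS))" "$RESHAPE_SECTORS"
# Pipe those two table lines into: dmsetup create rec-array
```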

  • It's in mid-reshape which complicates things. You have to [re-create](https://unix.stackexchange.com/a/131927/30851) both states using [two sets of overlays](https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file), then use dm-linear mapping at the reshape pos'n. – frostschutz Oct 12 '19 at 09:59
  • I've re-created the RAID4 array based on the overlays. The disk space is still partitioned, probably because I haven't used 'dmsetup create linear', which I have absolutely no idea how to start with in this case. Could you elaborate more? Reshape pos'n : 19646976 (18.74 GiB 20.12 GB) – Muszyn8 Oct 12 '19 at 12:26
  • The linear dmsetup is to stitch two block device segments together. You have two segments because in that reshape, the data offset changed so you have a data gap around the reshape position in your recreated array. That's assuming the recreate itself was done correctly (correct layout, offset, and drive order). It's difficult to pull this trick off without direct access to the data, you have to verify the offset you found is actually correct by looking at the data at the point of interest – frostschutz Oct 12 '19 at 14:09
  • Your re-create is wrong, a 3 disk raid0 can't have a missing, and you need two re-created instances (one for the old and one for the new layout). Also verify the offset with another --examine, mdadm sometimes uses weird units and it doesn't come out as intended. – frostschutz Oct 12 '19 at 14:14
  • So I should: 1. Create 4 overlays (2 for each drive) 2. Create a RAID0 array between 2 overlays (drive A, B) and data-offset 264192 3. Create a RAID4 array between 2 overlays (drive A, B) and data-offset 261120 4. Map it somehow with dmsetup to create another device which should just mount correctly as EXT4 partition? I've tried to simply recreate RAID0 before and some files were readable by using Photorec, but Testdisk could not recover the EXT4 partition. I've even tried to use R-Recovery for Linux, but it is just finding RAW files without directory structures. – Muszyn8 Oct 14 '19 at 15:50
  • Maybe I missed something, but ... you have 3 drives, not 2, yes? You were growing 3 drive raid0 to 4 drive raid4. You need the 3 raid0 drives to recover. If you lost a raid0 drive you can only recover files smaller than chunk size. – frostschutz Oct 14 '19 at 15:59
  • I had RAID0 with 2 x 1TB HDD, then I added a third one by issuing "mdadm --add /dev/sdX" (without any other arguments). Then mdadm started to "raise" the RAID0 (2 x 1TB HDD) to RAID4 (3 x 1TB HDD). The process was progressing slowly and got stuck because of the power failure of the third drive. I couldn't stop the array, because the "sdX" was in use. After the reboot, I could not start the array, because mdadm stated "not enough to start the array". Then I tried to re-create the array, but I did not use the "--assume-clean" switch, which is really bad, I know that now. – Muszyn8 Oct 14 '19 at 16:16
  • Ahhh, I see. The raid4 is done by mdadm internally. So a 2->3 raid0 grow technically is a 3->4 raid4 grow. Hence the confusion. But that still means your data is on 3 disks now, up to reshape pos'n and you need all three for the recovery. – frostschutz Oct 14 '19 at 16:43
  • Fortunately, the third disk is still operational after that crash, but I can't manage to create the same array as before. When I try to create the array like this: mdadm --create /dev/md/0 --assume-clean --level=4 --chunk=512K --data-offset=264192 --raid-devices=3 /dev/sda /dev/sdb /dev/sde All I can see is md/0, 2TB in size, partitioned in a weird way like: 1.2TB / 0.5TB / 500 MB (I have to try it again to see the correct partition sizes if it matters) – Muszyn8 Oct 14 '19 at 18:32
  • These parameters are still all wrong. New array, level 0, offset 261120**s** (without s mdadm does something stupid), 3 drives sdd sdb sdc (examine output Active device X). Old array, level 0, offset 264192**s**, 2 drives sdb sdc. If you use level 4, it's +1 missing parity disk for each. This is just me guessing since I have no way to verify these and I can't replicate your situation right now. Assuming these are correct then you stitch new array (0-pos) and old array (pos-end) together in a dm-linear. Good luck...? – frostschutz Oct 14 '19 at 20:08
  • You mean, I should join the old array (which is smaller) with the new array (which is bigger) to create one 3TB volume consisting with: 2TB of data of the first created array and last 1TB of data of the second array (3TB)? – Muszyn8 Oct 16 '19 at 19:09
  • As I see, it doesn't matter. I've tried to do it each way possible, without any further results. It looks like I should join the arrays in a way that they would "complete each other" which is just impossible due to the nature of the problem. Thank you for your help, only because of your experience it was possible to recreate array, from which I can try to recover files using TestDisk. Thanks again frostschutz! – Muszyn8 Oct 16 '19 at 19:48
