I have an Ubuntu 16.04 host with 3 Ubuntu 17.10 guests running in KVM (Virtual Machine Manager 1.3.2).
I export several block devices from 2 of the guests to the third (let's call it the frontend) via an iSCSI portal created with the targetcli utility. Having imported them, I make heavy use of multipath to identify the same "physical" disks and of md to create RAID arrays (e.g. mdadm --create --quiet --metadata=1.2 /dev/md1 --level=1 --raid-devices=2 /dev/dm-10 /dev/dm-1). Then I need to wipe this information out.
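For reference, the export/import flow looks roughly like this (the IQN, portal hostname and device paths are made-up placeholders, not my actual values):

```shell
# On a backend guest: export a block device via targetcli
# (name, IQN and dev path are hypothetical)
targetcli /backstores/block create name=disk1 dev=/dev/vdb
targetcli /iscsi create iqn.2017-01.local.backend:disk1

# On the frontend guest: discover the portal, log in,
# then assemble the array from the multipath devices
iscsiadm -m discovery -t sendtargets -p backend1
iscsiadm -m node --login
mdadm --create --quiet --metadata=1.2 /dev/md1 --level=1 \
      --raid-devices=2 /dev/dm-10 /dev/dm-1
```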
Here comes the problem: the superblocks do not actually get wiped. I go through the usual steps (say, to clean up md1):
1) mdadm -S /dev/md1
2) mdadm --zero-superblock /dev/md1
3) mdadm --zero-superblock /dev/mapper/md1
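As I understand man mdadm, --zero-superblock operates on the component devices rather than on the assembled array, so a sketch of what I would expect the cleanup to look like (using the component names from my create example above):

```shell
# Stop the array, then zero the superblock on each *component* device;
# the md superblock lives on the members, not on /dev/md1 itself
mdadm -S /dev/md1
mdadm --examine /dev/dm-10 /dev/dm-1        # confirm where metadata resides
mdadm --zero-superblock /dev/dm-10 /dev/dm-1
```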
Everything seems fine until I remove the imported disks and re-import them some time later: they randomly reappear grouped into RAID arrays. Sometimes the array names are far from those I originally created (e.g. md126 and md127, while I only created md1, md2, ..., md12). These zombie arrays can be buried with mdadm -S, but they rise again the next time the block devices are imported.
Why does --zero-superblock fail to do its work?
UPD: As @roaima mentioned, commands 2 and 3 (and similar variations) do in fact return errors:
Couldn't open /dev/md1 for write - not zeroing
Couldn't open /dev/mapper for write - not zeroing
Couldn't open /dev/mapper/ for write - not zeroing
That is essentially the same response as for a nonexistent device: any rubbish passed as an argument returns the same error.
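To illustrate what I mean (the device name here is deliberately bogus):

```shell
# A completely nonexistent path produces the same message
# as my real devices did above
mdadm --zero-superblock /dev/no-such-device
# mdadm: Couldn't open /dev/no-such-device for write - not zeroing
```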
UPD2: Running # cat /proc/mdstat told me more about the arrays:
md124 : inactive vdg[0](S)
5238784 blocks super 1.2
md127 : inactive vdb[1](S)
5238784 blocks super 1.2
However, I still cannot wipe either /dev/vdg (Couldn't open /dev/vdg for write - not zeroing) or /dev/md124 (Unrecognised md component device - /dev/md124).
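My current guess is that the inactive arrays still hold their component devices open, which would explain the open-for-write failure. A sketch of what I would try next, assuming the member mapping shown by /proc/mdstat above (md124 holding vdg):

```shell
# Stop the stale inactive array first so it releases its member device
mdadm -S /dev/md124

# The member should now be writable; zero its md superblock
mdadm --zero-superblock /dev/vdg

# Optionally also erase any remaining metadata signatures
# (wipefs removes filesystem/RAID signatures it recognises)
wipefs -a /dev/vdg
```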