
Prior to running the dd command, the command lsblk returned the output below:

NAME              MAJ:MIN  RM   SIZE    RO TYPE  MOUNTPOINT
sda               8:0       0    931.5G  0  disk  

The command dd if=/dev/urandom of=/dev/sda conv=fsync status=progress is then run. However, the device loses power and shuts down before the command completes. When power is restored, the command lsblk returns the following output:

NAME              MAJ:MIN  RM   SIZE    RO TYPE  MOUNTPOINT
sda               8:0       0   931.5G   0 disk
└─sda2            8:2       0   487.5G   0 disk
  • @RuiFRibeiro - Thanks for the analogy, however it isn't clear why `dd` would result in partitions, especially if the command is intended to wipe disks? – Motivated Jan 06 '19 at 15:36
  • Coincidence: it is very unlikely to be related to the power cut. You write random data to the device. Some of this random data went to the first few blocks, which is where the partition tables live. You probably ended up defining a partition. – ctrl-alt-delor Jan 06 '19 at 16:28
  • can you post the result of `file /dev/sda*` and `sudo fdisk -l /dev/sda*`? – phuclv Jan 07 '19 at 01:48
  • @phuclv - As i have started the process, will the output still be valuable? – Motivated Jan 07 '19 at 16:02
  • it'll give you detailed information about the partitions. The output from `lsblk` is useless – phuclv Jan 07 '19 at 16:05
  • @phuclv - Since the process has started, it doesn't give any meaningful information about the partitions at the moment. If there is another outage, i'll certainly post the output. Out of curiosity, why do you say that lsblk is useless? – Motivated Jan 07 '19 at 16:07
  • `lsblk` shows what block devices are there. `fdisk` gives the information about the partitions from what it reads from the MBR and VBR, which is why those block devices were created and `lsblk` displayed them. I don't get what "process" you mean, since you only have a `dd` command, and the above commands will explain what was written to the MBR by `dd`. If you want even more detail, just dump the MBR out – phuclv Jan 07 '19 at 16:11
  • @phuclv - The process i'm referring to is `dd if=/dev/urandom of=/dev/sda conv=fsync status=progress`. In running the `fdisk -l` command, the output that is currently displayed is limited to `Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes` – Motivated Jan 07 '19 at 16:19
  • @Motivated Note that `dd`'s purpose is not per se to wipe disks. Writing random data to a disk can produce random results. – jjmontes Jan 07 '19 at 18:38
  • @jjmontes - That's an interesting perspective since resources such as those from Arch Linux (https://wiki.archlinux.org/index.php/Securely_wipe_disk) reference it. Is there an authoritative source that describes this in further detail? – Motivated Jan 08 '19 at 05:12

4 Answers


Several possibilities:

  • Linux supports a lot of different partition table types, some of which use very few magic bytes, so it's easy to mis-identify random data (*) — and thus possible to randomly generate a somewhat "valid" partition table.

  • Some partition table types have backups at the end of the disk as well (most notably GPT) and that could be picked up on if the start of the drive was replaced with random garbage.

  • The device doesn't work properly: it was disconnected before it finished writing the data, or it keeps returning old data, so the old partition table survives. This sometimes happens with USB sticks.

  • ...

(*) Make 1000 files with random data in them and see what comes out:

$ truncate -s 8K {0001..1000}
$ shred -n 1 {0001..1000}
$ file -s {0001..1000} | grep -v data
0099: COM executable for DOS
0300: DOS executable (COM)
0302: TTComp archive, binary, 4K dictionary
0389: Dyalog APL component file 64-bit level 1 journaled checksummed version 192.192
0407: COM executable for DOS
0475: PGP\011Secret Sub-key -
....

The goal of random-shredding a drive is to make old data vanish for good. There is no promise the drive will appear empty, unused, in pristine condition afterwards.

It's common to follow up with a zero wipe to achieve that. If you are using LVM, it's normal for LVM to zero out the first few sectors of any LV you create so old data won't interfere.
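As a sketch of that follow-up zero pass, run against a scratch disk image rather than a real device (the filename and sizes here are illustrative; substitute your actual device only with great care):

```shell
img=scratch.img
truncate -s 16M "$img"    # stand-in for a real disk
shred -n 1 "$img"         # one random pass, like dd from /dev/urandom
# Zero the first MiB, where the MBR and primary GPT live:
dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc,fsync
# GPT keeps a backup at the end of the disk, so zero the last MiB too:
dd if=/dev/zero of="$img" bs=1M count=1 seek=15 conv=notrunc,fsync
```

Afterwards no stale partition-table signatures can remain at either end of the image, while the middle still holds the random pass.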

There's also a dedicated utility, wipefs, for removing old magic-byte signatures; you can use it to get rid of filesystem and partition-table metadata.

frostschutz
  • The devices had been previously erased using the ATA Secure Erase command. I assume that this would remove data such that 1. it is irrecoverable 2. no partition information survives. If this is true, do you mean to say that when running the `dd` command, the generation of random data when interrupted can result in data that looks like partition tables? Also these are SATA hard disks (non-SSD). – Motivated Jan 06 '19 at 17:43
  • Random data can look like anything. That's what it means to be random. Are you familiar with the Infinite Monkey Theorem? It states that if a large enough number of monkeys randomly type on typewriters for a long enough time, one of them will at some point or another produce the complete works of Shakespeare. An MBR partition table is *really* small (only 64 bytes), it has no checksums or verification, and a very dense format. It is highly likely that a random string of 64 bytes will produce a valid partition table. Other partition table formats are similarly simple. – Jörg W Mittag Jan 06 '19 at 20:30
  • Yes, the partition table is only 64 bytes, (at the end) the partition type is only 1 byte, and the entries need to be lawful or sequential. So zeroing the first cluster/sector/512 bytes on MBR is sensible. You also do not want unpredictable boot behaviour — less likely, but still a risk. – mckenzm Jan 07 '19 at 01:57

As seen at https://en.wikipedia.org/wiki/Master_boot_record, the MBR (Master Boot Record) is relatively simple.

When you use /dev/urandom, you can end up creating something that looks like a partition table. The solution is to fill the partition-table regions with zeros and use /dev/urandom for the rest.

Linux also supports other disk formats whose signatures can also be triggered by chance, causing "invalid" partitions to show up when the disk is filled with random data.

Adam Waldenberg

The thing that defines a collection of 512 bytes as being a Master Boot Record is the presence of the values 0x55 0xAA at the end. There's a 1-in-65,536 chance of /dev/urandom producing such a value: not too likely, but similarly improbable things happen all the time.

(Some other partition tables, such as the Apple Partition Map, have similarly short signatures. It's possible you've generated one of them instead.)
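A quick illustration on a scratch file (the filename is hypothetical): a sector of 510 zero bytes plus that two-byte signature is already enough for file(1) to call it a boot sector:

```shell
truncate -s 512 mbr.bin    # an all-zero 512-byte "sector"
# Put 0x55 0xAA in the last two bytes (offset 510):
printf '\x55\xaa' | dd of=mbr.bin bs=1 seek=510 conv=notrunc
file mbr.bin               # should be reported as a DOS/MBR boot sector
```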

Mark

Was such a partition present on that disk at some point before? If the disk uses GPT, the secondary (backup) GPT header at the end of the disk may have been used to restore the old partition table.

https://en.wikipedia.org/wiki/GUID_Partition_Table

Jakub Fojtik