
I'm able to fill 1MB file with specific character like this:

> tr '\0' '#' </dev/zero | dd of=1MB.bin bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139671 s, 75.1 MB/s

However, if I use a larger block size, it ends up as a 64 KiB file:

> tr '\0' '#' </dev/zero | dd of=1MB.bin bs=1M count=1
0+1 records in
0+1 records out
65536 bytes (66 kB, 64 KiB) copied, 0.000240126 s, 273 MB/s

Can anyone explain this behavior? Is it a buffering problem? It does not seem connected to the `/dev/zero` special file; I get the same result with a regular file.

ardabro
    I've marked this as a duplicate because [this answer](https://unix.stackexchange.com/a/121888/4989) directly addresses the problem. The tl;dr here is add `iflag=fullblock` to your `dd` invocation. – larsks Jul 05 '22 at 21:28
  • Also the other one linked from there [When is dd suitable for copying data? (or, when are read() and write() partial)](https://unix.stackexchange.com/q/17295/170373) – ilkkachu Jul 05 '22 at 21:46
  • instead of meddling with weird options to `dd`, the other alternative is to run `... | head -c1M > 1MB.bin` instead. (Or `head -c$((1024*1024))` if yours doesn't take size suffixes.) – ilkkachu Jul 05 '22 at 21:48
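For reference, the two fixes suggested in the comments can be sketched like this (a sketch assuming GNU `dd` and GNU coreutils; the filename `1MB-head.bin` is just an illustrative choice). `bs=1M` asks for 1 MiB per `read()`, but a pipe hands back at most one pipe-buffer's worth (64 KiB on Linux), and plain `dd` counts that partial read as a whole record:

```shell
# iflag=fullblock makes dd keep re-reading until each 1 MiB block is
# actually full, instead of treating one short read() as a record:
tr '\0' '#' </dev/zero | dd of=1MB.bin bs=1M count=1 iflag=fullblock

# Alternative without dd options: let head cut the stream at exactly
# 1 MiB (GNU head accepts binary size suffixes like M = 1024*1024):
tr '\0' '#' </dev/zero | head -c1M > 1MB-head.bin
```

Both commands should produce a file of exactly 1048576 bytes.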

0 Answers