
The difference with and without `-h` should only be the human-readable units, right?

Well, apparently not...

$ du -s .
74216696    .
$ du -hs .
 35G    .

Or maybe I'm mistaken and the result of `du -s .` isn't in KB?

Creak
  • Try using `du --block-size=1024 -s .`. Maybe your `BLOCK_SIZE` is set to `512` – Echoes_86 Jan 04 '17 at 18:04
  • From the (OSX) manual page: "If BLOCKSIZE is not set, and the -k option is not specified, the block counts will be displayed in 512-byte blocks" – user4556274 Jan 04 '17 at 18:06
  • Which is not super-helpful if the filesystem is actually in 4096-byte blocks. – DopeGhoti Jan 04 '17 at 18:06
  • So there is no way to have the size in bytes? I thought `-h` was just dividing by 1024 and adding some units – Creak Jan 04 '17 at 21:13
  • `echo "74216696*512" | bc` outputs 37998948352. And yes, `-h` converts to human-readable form by dividing over and over by 1024. What I got was 35.3887, which is awfully close to what `du` reports. As for size in bytes, just use `--block-size=1`. On Linux, there's a `-b` option for that, but I'm not familiar with OS X `du` – Sergiy Kolodyazhnyy Jan 05 '17 at 03:01

2 Answers


`du` without an output format specifier gives disk usage in blocks of 512 bytes, not kilobytes. You can use the `-k` option to display kilobytes instead. On OS X (or macOS, or MacOS, or Macos; whichever you like), you can customize the default unit by setting the environment variable `BLOCKSIZE` (this affects other commands as well).
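The arithmetic checks out against the numbers in the question. A quick sketch of the conversion, using the question's figure of 74216696 512-byte blocks:

```shell
# Convert du's default 512-byte block count (from the question) by hand.
blocks=74216696
bytes=$(( blocks * 512 ))                 # 37998948352 bytes
kib=$(( blocks / 2 ))                     # what `du -ks` would print: 37108348
gib=$(( bytes / 1024 / 1024 / 1024 ))     # 35 -> matches the "35G" from `du -hs`
echo "${bytes} bytes = ${kib} KiB ~ ${gib}G"
```

So the two outputs in the question describe the same size; only the units differ.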

Gilles 'SO- stop being evil'
DopeGhoti
  • Didn't know that... What's the point of having the number of blocks? I've never seen anyone use _blocks_ when talking about sizes on disk... – Creak Jan 04 '17 at 21:15
  • Blocks are the atomic unit of filesystems. Any file consumes a whole number of blocks on the disk. A block may be only partially filled with actual data, but the entire block is allocated to the file. Most folks' day-to-day usage doesn't care about blocks other than in a percentage-used-versus-free sense. But low-level utilities (e.g. `fdisk`, `df`, and `du`) work in blocks unless directed otherwise, because that is the unit by which they count internally. – DopeGhoti Jan 04 '17 at 23:17
  • @Creak Actually, that part of the answer was wrong. The unit is not the block size of the filesystem. For the purposes of classic Unix commands such as `du`, “block” means 512 bytes. See [file block size - difference between stat and ls](http://unix.stackexchange.com/questions/28780/file-block-size-difference-between-stat-and-ls/28815#28815), [Difference between block size and cluster size](http://unix.stackexchange.com/questions/14409/difference-between-block-size-and-cluster-size/14411#14411) – Gilles 'SO- stop being evil' Jan 04 '17 at 23:44
  • *"Any file will consume a whole number of blocks on the disk."* Well, generally yes—unless your filesystem uses [**tail packing**](https://en.wikipedia.org/wiki/Block_suballocation). But yes, blocks are the basic unit of the filesystem (although the actual block size doesn't necessarily align to the block size used by `du`.) :) – Wildcard Jan 04 '17 at 23:58

The problem is that `du` returns the size as a number of 512-byte blocks.

To get the size in KB, you can use the `-k` option, which uses 1024-byte blocks instead:

$ du -ks .                            
43351596    .
$ du -khs .
 41G    .
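If you want bytes rather than KiB, GNU `du` also accepts explicit block sizes, as mentioned in the comments above. A sketch assuming GNU coreutils; the `--block-size` and `-b` options are GNU-specific and may not exist in the macOS `du`, while `-k` is POSIX and portable:

```shell
du -s --block-size=1 .   # disk usage in bytes (GNU)
du -sb .                 # apparent size in bytes (GNU shorthand for
                         # --apparent-size --block-size=1)
du -sk .                 # KiB; -k is POSIX, so it works on macOS too
```

Note that `-b` reports the apparent size (what the file claims to contain), which can be smaller than the disk usage reported by `--block-size=1` because of allocation rounding, or larger for sparse files.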
Gilles 'SO- stop being evil'
Creak