
What happens if the limit of 4 billion files is exceeded on an ext4 partition, for example by transferring 5 billion files?

peterh
Bensuperpc

2 Answers


Presumably, you'll be seeing some flavor of "No space left on device" error:

# truncate -s 100M foobar.img
# mkfs.ext4 foobar.img
Creating filesystem with 102400 1k blocks and 25688 inodes
---> number of inodes determined at mkfs time ^^^^^
# mount -o loop foobar.img loop/
# touch loop/{1..25688}
touch: cannot touch 'loop/25678': No space left on device
touch: cannot touch 'loop/25679': No space left on device
touch: cannot touch 'loop/25680': No space left on device

In practice, you may hit this limit long before reaching 4 billion files. Check your filesystems with both `df -h` and `df -i` to find out how much space is left.

# df -h loop/
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       93M  2.1M   84M   3% /dev/shm/loop
# df -i loop/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop0      25688 25688     0  100% /dev/shm/loop
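
The same numbers `df -i` reports can also be read programmatically; a minimal Python sketch (not part of the original answer) using `os.statvfs`, whose `f_files` and `f_ffree` fields hold the total and free inode counts:

```python
# Sketch: inode accounting via statvfs, the same data df -i displays.
import os

def inode_usage(path):
    """Return (total, used, free) inode counts for the filesystem at path."""
    st = os.statvfs(path)
    total = st.f_files   # total inodes on the filesystem
    free = st.f_ffree    # inodes still available
    return total, total - free, free

total, used, free = inode_usage("/")
print(f"inodes: {total} total, {used} used, {free} free")
```

A transfer tool could consult these counts up front to warn before the filesystem runs out of inodes.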

In this example, if your files are smaller than 4 KiB on average, you run out of inode space much sooner than storage space. It's possible to specify a different ratio at filesystem creation time (`mke2fs -N number-of-inodes`, `-i bytes-per-inode`, or `-T usage-type` as defined in /etc/mke2fs.conf).
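
As a rough illustration of the `-i bytes-per-inode` ratio: mke2fs creates approximately one inode per that many bytes of filesystem size. The exact defaults come from /etc/mke2fs.conf and vary with filesystem size, so the concrete ratio below is an illustrative assumption, not mkfs output:

```python
# Rough arithmetic behind mke2fs's -i bytes-per-inode option: roughly one
# inode is allocated per bytes_per_inode bytes of filesystem size. The
# 4096-byte ratio here is an assumption (used for small filesystems); real
# defaults live in /etc/mke2fs.conf.

def approx_inodes(fs_bytes, bytes_per_inode):
    return fs_bytes // bytes_per_inode

# A 100 MiB image at a 4096 bytes-per-inode ratio gives on the order of
# 25600 inodes, close to the 25688 reported by mkfs.ext4 above.
print(approx_inodes(100 * 1024 * 1024, 4096))  # 25600
```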

frostschutz

  • Thank you for your answer. Sometimes I worry: I have over 400 million files on my main partition (RAID 50) and many git repositories, so I asked as a precaution, in case that were ever to happen. – Bensuperpc Jul 02 '19 at 07:47
  • @Bensuperpc: If many of the files aren't regularly used - just there for backup purposes - you might consider putting each project in its own tar file. That reduces the number of files considerably, and also the space occupied if you use a compression option. – jamesqf Jul 02 '19 at 17:02
  • @jamesqf If you haven't already, try running `git repack` in each git repository to combine all of the separate objects into a pack file. – user253751 Jul 02 '19 at 22:35
  • @immibis: Same principle, I think, but tar works with every kind of file, not just those within git. – jamesqf Jul 03 '19 at 05:11
  • +1 Since you are just using `touch`, no fancy `echo`, you also illustrate an important point and a common misconception: it is possible to fill up a disk with empty files. – rexkogitans Jul 03 '19 at 06:34
  • @jamesqf `git repack` doesn't lose any functionality; the result is still a fully functional git repo, whereas `tar` makes it unreadable for many programs expecting a project or a git repository. – Ferrybig Jul 03 '19 at 12:13
  • @Ferrybig: Which would be advantageous if the git repository is being used, even infrequently. If it's just kept for archival/backup purposes, not so much. And again, probably not much use for collections of files that aren't git repositories. – jamesqf Jul 03 '19 at 15:56
  • @frostschutz Hi, I couldn't reach your website; I have sent you an e-mail seeking [non-financial] teaching aid from `[email protected]`. I hope you received it. – RinkyPinku Jul 16 '19 at 15:11

Once the limit is reached, subsequent attempts to create files will fail with ENOSPC, indicating that the target file system has no room for new files.

In the scenario you describe, this will typically result in the transfer aborting once the limit is reached.
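
From a program's point of view, the failed system call sets `errno` to `ENOSPC`, which Python surfaces as an `OSError`. A minimal sketch (the function name and handling are illustrative) of detecting that condition:

```python
# Sketch: detecting the ENOSPC failure a transfer hits once the filesystem
# runs out of inodes (or blocks). The exception shape is standard Python;
# the handling strategy here is illustrative.
import errno

def create_file(path):
    """Try to create a new file; return False on 'No space left on device'."""
    try:
        with open(path, "x"):
            pass
        return True
    except OSError as e:
        if e.errno == errno.ENOSPC:
            # Out of blocks or out of inodes -- both report ENOSPC.
            return False
        raise
```

A transfer tool that checks for this error can abort cleanly (or pause and retry) instead of crashing mid-copy.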

Stephen Kitt