
Recently I've been digging up information about processes in GNU/Linux, and I came across the infamous fork bomb:

:(){ : | :& }; :

Theoretically, it is supposed to duplicate itself infinitely until the system runs out of resources...
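For readers puzzling over the one-liner: `:` is defined as a function that pipes a call to itself into another backgrounded call to itself. The same thing rewritten with a readable name (defining the function is harmless; *calling* it is what starts the chain reaction, so the call is left commented out):

```shell
# The classic bomb with `:` renamed to `bomb` for readability.
bomb() {
    bomb | bomb &
}
echo "bomb defined but not invoked"
# bomb    # <- do NOT uncomment this on a machine you care about
```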

However, I've tried testing it on both a CLI Debian and a GUI Mint distro, and it doesn't seem to impact the system much. Yes, tons of processes are created, and after a while I see console messages like:

bash: fork: Resource temporarily unavailable

bash: fork: retry: No child processes

But after some time, all the processes just get killed and everything goes back to normal. I've read that ulimit sets a maximum number of processes per user, but I can't seem to be able to raise it very far.

What are the system protections against a fork-bomb? Why doesn't it replicate itself until everything freezes or at least lags a lot? Is there a way to really crash a system with a fork bomb?

Plancton
    Note that you won’t “crash” your system using a fork bomb... as you said, you’ll exhaust resources and be unable to spawn new processes but the system shouldn’t _crash_ – Josh Sep 19 '18 at 14:00
    What happens if you run `:(){ :& :; }; :` instead? Do they also all end up getting killed eventually? What about `:(){ while :& do :& done; }; :`? – mtraceur Sep 19 '18 at 19:22
  • `ulimit -u unlimited` would be the command-line way to set `max user processes` to `unlimited`; however, I believe that would be overridden by the `hard` limit in `/etc/security/limits.conf` – ron Oct 05 '21 at 20:30
  • so if you were to edit `/etc/security/limits.conf` and set both the hard and soft limits to `unlimited` for `nproc`, I believe that would undo the protection mechanism and allow your fork bomb to blow up (i.e. *really crash*) your system. – ron Oct 05 '21 at 20:32

3 Answers


You probably have a Linux distro that uses systemd.

Systemd creates a cgroup for each user, and all processes of a user belong to the same cgroup.

Cgroups are a Linux kernel mechanism for setting limits on system resources such as the maximum number of processes, CPU cycles, RAM usage, and so on. This is a different, more modern layer of resource limiting than ulimit (which uses the setrlimit()/getrlimit() syscalls).
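The older ulimit layer is easy to poke at from a shell. A quick sketch (the value 1 is just an arbitrarily low illustration, not a recommended setting):

```shell
# `ulimit -u` reports the classic "max user processes" soft limit
# (RLIMIT_NPROC), the per-user layer the question is running into.
ulimit -u
# An unprivileged user may lower the soft limit but never raise it
# past the hard limit; this subshell drops it to 1 and prints it,
# leaving the parent shell's limit untouched:
( ulimit -u 1; ulimit -u )
```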

If you run systemctl status user-<uid>.slice (which represents the user's cgroup), you can see the current and maximum numbers of tasks (processes and threads) allowed within that cgroup.

$ systemctl status user-$UID.slice
● user-22001.slice - User Slice of UID 22001
   Loaded: loaded
  Drop-In: /usr/lib/systemd/system/user-.slice.d
           └─10-defaults.conf
   Active: active since Mon 2018-09-10 17:36:35 EEST; 1 weeks 3 days ago
    Tasks: 17 (limit: 10267)
   Memory: 616.7M

By default, the maximum number of tasks that systemd will allow for each user is 33% of the "system-wide maximum" (sysctl kernel.threads-max); this usually amounts to ~10,000 tasks. If you want to change this limit:

  • In systemd v239 and later, the user default is set via TasksMax= in:

    /usr/lib/systemd/system/user-.slice.d/10-defaults.conf
    

    To adjust the limit for a specific user (which will be applied immediately as well as stored in /etc/systemd/system.control), run:

    systemctl [--runtime] set-property user-<uid>.slice TasksMax=<value>
    

    The usual mechanisms of overriding a unit's settings (such as systemctl edit) can be used here as well, but they will require a reboot. For example, if you want to change the limit for every user, you could create /etc/systemd/system/user-.slice.d/15-limits.conf.

  • In systemd v238 and earlier, the user default is set via UserTasksMax= in /etc/systemd/logind.conf. Changing the value generally requires a reboot.
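To see what the 33% default works out to on a given machine, the system-wide maximum can be read straight from /proc (a quick sketch; these are standard Linux paths, and the resulting value will vary per system):

```shell
# kernel.threads-max is the system-wide cap on tasks; systemd's
# default per-user TasksMax is 33% of it.
threads_max=$(cat /proc/sys/kernel/threads-max)
echo "threads-max: $threads_max"
echo "33% default TasksMax: $(( threads_max * 33 / 100 ))"
```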


u1686_grawity
Hkoof
    And 12288 processes (minus what was already spawned before the bomb) doing nothing except *trying* to create a new one, doesn't really impact a modern system. – Mast Sep 20 '18 at 09:55

This won't crash modern Linux systems anymore anyway.

It creates hordes of processes but doesn't really burn all that much CPU, as the processes go idle. You run out of slots in the process table before running out of RAM now.
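Those kernel-level caps are visible in /proc (standard Linux paths; the values differ from system to system):

```shell
# The limits a bomb actually runs into before exhausting RAM:
cat /proc/sys/kernel/pid_max       # largest PID; bounds the process table
cat /proc/sys/kernel/threads-max   # global cap on processes + threads
```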

If you're not cgroup-limited, as Hkoof's answer points out, the following alteration still brings systems down:

:(){ : | :& : | :& }; :
Joshua
    This really depends on what you consider 'crashing' the system. Running out of slots in the process table will bring a system to its knees in most cases, even if it doesn't completely cause a kernel panic. – Austin Hemmelgarn Sep 19 '18 at 19:03
    @AustinHemmelgarn: Which is why wise systems reserve the last 4 or so process ids for root. – Joshua Sep 19 '18 at 19:09
    Why would the processes go "idle"? Each forked process is in an infinite recursion of creating more processes. So it spends a lot of time in system call overhead (`fork` over and over), and the rest of its time doing the function call (incrementally using more memory for each call in the shell's call stack, presumably). – mtraceur Sep 19 '18 at 19:13
    @mtraceur: It only happens when forking starts failing. – Joshua Sep 19 '18 at 19:14
    Oh, I take it back. I was modeling the logic of a slightly different fork bomb implementation in my head (like this: `:(){ :& :; }; :`) instead of the one in the question. I haven't actually fully thought through the execution flow of the archetypical one as given. – mtraceur Sep 19 '18 at 19:18
    @Joshua Except, last I checked, Linux doesn't. – Austin Hemmelgarn Sep 19 '18 at 19:21
    OK, one way to build a more effective one: Make a fork bomb of processes that try to do something to a file on a hung hard NFS mount. Start it, let all these processes get D-stated. Fix the cause of the nfs mount being hung. Run. – rackandboneman Sep 19 '18 at 20:18
    @rackandboneman: Yeah kinda. Anyway all I gotta do to fix this one good is `::(){ while :; do; ::&; done }; ::` I've heard horror stories about this kind of thing managing to survive a bot trying to kill it from several priorities higher. – Joshua Sep 19 '18 at 20:22
  • I remember the nfs thing so vividly because it once ended me up with a load average around 900 on a single core (2.2.x or 2.4.x kernel, not sure) system.... – rackandboneman Sep 20 '18 at 00:11
  • For what it's worth I tried it under Cygwin on Windows 10 and it did bring my system to its knees; I had to do a hard shutdown to regain control of my system. – Aaron Sep 20 '18 at 12:28
    @Aaron: Ah; in Cygwin it's a memory bomb because `fork()` copies all the process memory immediately and bash is kinda big. – Joshua Jan 03 '19 at 20:03

Back in the 90's I accidentally unleashed one of these on myself. I had inadvertently set the execute bit on a C source file that had a fork() call in it. When I double-clicked it, csh tried to run it rather than open it in an editor like I wanted.

Even then, it didn't crash the system. Unix is robust enough that your account and/or the OS will have a process limit. What happens instead is it gets super sluggish, and anything that needs to start a process is likely to fail.

What's happening behind the scenes is that the process table fills up with processes that are trying to create new processes. If one of them terminates (either due to getting an error on the fork because the process table is full, or due to a desperate operator trying to restore sanity to their system), one of the other processes will merrily fork a new one to fill the void.

The "fork bomb" is basically an unintentionally self-repairing system of processes on a mission to keep your process table full. The only way to stop it is to somehow kill them all at once.
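One way to kill them all at once is to freeze every process first, so no survivor can fork a replacement while you deliver the kill. A safe miniature demonstration with a couple of `sleep`s standing in for the bomb (against a real bomb you would target every process of the affected user, e.g. with `pkill -STOP -u <user>` followed by `pkill -KILL -u <user>`):

```shell
# Stand-ins for bomb processes:
sleep 60 & p1=$!
sleep 60 & p2=$!
# SIGSTOP cannot be caught or ignored, and a stopped process cannot
# fork -- so nothing can respawn while we deliver the kill:
kill -STOP "$p1" "$p2"
kill -KILL "$p1" "$p2"
wait "$p1" "$p2" 2>/dev/null
echo "all stand-ins terminated"
```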

T.E.D.
    Killing them all at once is easier than you think - SIGSTOP them all first. – Score_Under Sep 20 '18 at 12:49
    @Score_Under - I hope you'll forgive me if I don't immediately rush off to my nearest Harris Nighthawk to see if that would have fixed the problem there. I'm thinking just getting a PID and sending it the signal before it dies from the failed fork and another takes its place might be a challenge, but I'd have to try it out. – T.E.D. Sep 20 '18 at 15:37
  • @T.E.D. `kill -9 -1` may be your friend here (with the same user that runs the fork bomb; not with root). – Andreas Krey Sep 21 '18 at 11:02
  • @AndreasKrey - That flag doesn't look familiar, so I'm doubting my 90's era Nighthawk had it. – T.E.D. Sep 21 '18 at 11:11
    @T.E.D.: `-1` isn't a flag. `kill` only takes one option then stops parsing options. This kills process id `-1`, which is an alias for all processes. – Joshua Mar 13 '19 at 15:30