14

Due to its high CPU usage, I want to limit the Chromium web browser with cpulimit, and run from a terminal:

cpulimit -l 30 -- chromium --incognito

but it does not limit CPU usage as expected (i.e. to a maximum of 30%). It still uses 100%. Why? What am I doing wrong?

Marcus Müller

2 Answers

28

Yeah, chromium doesn't care much when you stop one of its threads.

cpulimit is, in 2021, really not the kind of tool that you want to use, especially not with interactive software: it "throttles" processes (or unsuccessfully tries to, in your case) by stopping and resuming them via signals. Um. That's a terrible hack, and it leads to unreliability you really don't want in a modern browser that might well be processing audio and video, or trying to scroll smoothly.

Good news is that you really don't need it. Linux has cgroups, and these can be used to limit the resource consumption of any process, or group of processes (if you, for example, don't want chromium, skype and zoom together to consume more than 50% of your overall CPU capacity). They can also be used to limit other things, like storage access speed and network transfer.

In the case of your browser, that'd boil down to (top of head, not tested):

# you might need to create the right mountpoints first
sudo mkdir /sys/fs/cgroup/cpu
sudo mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu

# Create a group that controls `cpu` allotment, called `/browser`
sudo cgcreate -g cpu:/browser
# Create a group that controls `cpu` allotment, called `/important`
sudo cgcreate -g cpu:/important

# allocate few shares to your `browser` group, and many shares of the CPU time to the `important` group.
sudo cgset -r cpu.shares=128 browser
sudo cgset -r cpu.shares=1024 important


cgexec -g cpu:browser chromium --incognito
cgexec -g cpu:important make -j10 #or whatever

The trick is usually giving your interactive session (e.g. gnome-session) a high share, and other things a lower one.

Note that this guarantees shares; it doesn't take away, unless necessary. I.e. if your CPU can't do anything else in that time (because nothing else is running, or because everything with more shares is blocked, for example by waiting for hard drives), it will still be allocated to the browser process. But that's usually what you want: It has no downsides (it doesn't make the rest of the system run any slower, the browser is just quicker "done" with what it has to do, which on the upside probably even saves energy on the average: when multiple CPU cores are just done, then things can be clocked down/suspended automatically).
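If you do want a hard cap rather than proportional shares (i.e. never let the group exceed a fixed percentage, even when the CPU is otherwise idle), the cgroup v1 `cpu` controller also has `cpu.cfs_quota_us`/`cpu.cfs_period_us`. An untested sketch in the same spirit as above, building on the `browser` group created earlier:

```shell
# Hard-cap the `browser` group at 30% of one CPU:
# the group may run for at most 30 ms of every 100 ms period.
sudo cgset -r cpu.cfs_period_us=100000 browser
sudo cgset -r cpu.cfs_quota_us=30000 browser
```

Unlike shares, a quota is enforced even when the rest of the system is idle, so it trades away the "use otherwise-wasted slices" benefit described above.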

Marcus Müller
  • Perhaps you could explain a bit further? If I'm understanding correctly, if nothing else is using the CPU, then the browser (and its child processes) will get CPU time. But my problem (with Firefox) is that occasionally (with particular web pages) the browser grabs essentially everything, to the point where even mouse movement and such appear to freeze. I would obviously like to stop this behavior: would cgroups permit that, by e.g. never allowing Firefox to use more than a given percentage of CPU, even though most of the rest is unused? – jamesqf Jul 24 '21 at 22:20
  • @jamesqf have you confirmed that your behavior is caused by Firefox consuming too much CPU? That sort of thing can have many causes, such as IO contention, GPU bugs, some other process hitting a CPU spike after Firefox interacts with it, &c. Of course, it could very well be Firefox using too much CPU, but that’s not always it. – gntskn Jul 25 '21 at 01:44
  • @jamesqf If a given CPU time slice isn't going to be used anyway, giving it to Firefox/Chromium/whatever isn't going to slow the system down any. In fact, doing so will make the system *more* responsive as opposed to just wasting the slice. – HiddenWindshield Jul 25 '21 at 01:53
  • @gntskn: I've confirmed it as well as I can - it's not easy to do when anything else takes minutes to respond. I can see CPU usage (via conky) spike to 100%; if I can get to a terminal window and run top, I can see that some Firefox child thread (usually labeled "Web Content") is using the resources, killing that thread will bring the system back to its normal few percent of CPU usage, &c. It might of course be grabbing other resources too, but I don't know a way to check for that on an unresponsive system. – jamesqf Jul 25 '21 at 04:57
  • @HiddenWindshield: But the problem seems to be Firefox grabbing time slices from everything else, to apply to a thread that appears to be thrashing in an endless loop. Everything else - which usually is just X, the window manager, and any system processes - seem to get few if any slices, so that if for instance I move the mouse, the cursor doesn't move until many seconds later. Thus if I could limit Firefox to some percentage of slices, other things would get the ones they need. Normally responsiveness is not an issue, as CPU load is only a few percent. – jamesqf Jul 25 '21 at 05:03
  • Even running very compute-intensive jobs (which I sometimes do) doesn't create the same "grab everything" behavior. It may slow responsiveness a bit, but not to the point of unusability. – jamesqf Jul 25 '21 at 05:06
  • @jamesqf I'm almost certain the problem you describe here is a bug in something else; if you just start a program that hogs CPU time, you should still be able to *do* things with your computer, as the Linux scheduler *should* allocate some share of time to other tasks. The fact that it's *this* choppy might really be due to some unholy interaction between firefox and X or your graphics driver – Marcus Müller Jul 25 '21 at 09:39
  • Cgroups are the right tool for the job, but they should probably be used through systemd, not directly. Create a new slice with a `CPUWeight` and `systemd-run` the application into it. It does not need sudo (user can control what's happening within their slice) and it sets the weight relative to the other user's applications (they already live in the `/user.slice/user-$uid.slice` cgroup). – Jan Hudec Jul 25 '21 at 16:13
  • @JanHudec is right; you should be able to do this all as unprivileged users; run `systemd-cgtop` for an impression of what is already there in slices on your system. – Marcus Müller Jul 25 '21 at 16:23
  • @Marcus Müller: Yes, I'm pretty sure that the underlying cause is some other bug in Firefox, but I don't have the time or inclination to try to debug it. I saw this thread, and hoped that it could be a workaround. – jamesqf Jul 25 '21 at 16:28
  • @jamesqf Sounds to me like you're running out of memory. If that is the case, you can reduce the number of firefox processes under the performance settings. – Nonny Moose Jul 25 '21 at 21:27
  • @jamesqf also try the Firefox extension "auto tab discard" . You can either "discard" (pause would be a better word) sites from known resources hogs, e.g. facebook, or all but a whitelist. Firefox task manager (about:performance) can help you ID the worst culprits, but only up to a point – Chris H Jul 26 '21 at 10:54
  • @Chris H: The problem with whitelisting (or blacklisting) sites is that the ones that cause problems (and I never use Facebook) are almost always news articles (from e.g. Google News), which I only want to view once. It's only a few pages within any given site that seem to cause problems: I can read dozens of articles from for instance CNN, before hitting one that causes the problem. I hadn't thought of it as being a memory problem, but I see that cgroups seems to be able to limit memory use as well. – jamesqf Jul 26 '21 at 16:42
  • @jamesqf fair enough. I started with a whitelist but found it too annoying. Script blockers help too but bring their own hassles – Chris H Jul 26 '21 at 17:31
  • How does this compare to just using `nice`? – rjh Jul 27 '21 at 10:28
  • `nice` is nice, and all, but it doesn't do quotas – Marcus Müller Jul 27 '21 at 12:46
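Following Jan Hudec's suggestion in the comments, here is a minimal unprivileged sketch using systemd's transient units (the 30% value is just an assumption matching the question's original goal; adjust to taste):

```shell
# Run Chromium in a transient scope inside the user's own slice,
# with a hard 30% CPU cap enforced by systemd via cgroups.
# No sudo needed: users may manage resources within their own slice.
systemd-run --user --scope -p CPUQuota=30% chromium --incognito
```

`CPUQuota` is a hard cap; for proportional weighting like the `cpu.shares` example in the answer, `-p CPUWeight=…` is the analogous setting. `systemd-cgtop` shows the resulting slices and their consumption.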
4

I think it might be useful to focus on finding out why it uses this much CPU, using the Task Manager built into Chromium to find which tab is responsible, or a profiler like perf and flame graphs.

But if you really want to slow down the browser, you should consider the modern built-in solution, cgroups; see for example: https://forums.gentoo.org/viewtopic-t-1010870-start-0.html

Alternatively, consider using (re)nice to lower the priority of Chromium, using ad blockers, or upgrading your hardware.

Why cpulimit is not working as expected might be because child processes and additional threads don't stop when SIGSTOP stops the parent process, but I'm not sure.

Update: this https://github.com/opsengine/cpulimit/issues/39 and this https://stackoverflow.com/questions/31623697/limit-the-percentage-of-cpu-a-process-tree-is-allowed-to-use confirm my suspicion that you might not be limiting all child processes. Newer versions of cpulimit provide the option --include-children (or -i) for that.
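So, assuming a cpulimit build that supports that option, the original command would become:

```shell
# Limit Chromium *and all of its child processes* to ~30% of one CPU.
# -i / --include-children applies the limit to the whole process tree,
# not just the parent process that cpulimit launches.
cpulimit -l 30 -i -- chromium --incognito
```

Whether this actually tames Chromium's many renderer processes is untested here; the cgroup approach in the other answer remains the more robust option.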

JohannesB
  • A simple trial would be to run greedy programs in a VM - with limited resources. – Jeremy Boden Jul 24 '21 at 22:09
  • But in my case, trying to find out why is futile (beyond knowing what site I went to), because the browser grabs so much CPU &c that nothing else is responsive. And much of this is caused by going to news articles from Google News, where say a particular CNN story link causes the behaviour, but many other links work perfectly well. – jamesqf Jul 24 '21 at 22:23