29

I'm using Linux 4.15, and this happens to me often when RAM usage reaches its peak: the whole OS becomes unresponsive, frozen and useless. The only thing I can see working is the disk (main system partition), which is under heavy use.

I don't know whether this issue is OS-specific, hardware-specific, or configuration-specific.

Any ideas?

user6039980
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/97152/discussion-on-question-by-kais-what-can-make-linux-so-unresponsive). Please make sure to update the Question as needed for any clarifications that result from the comments/chat. Thank you! – Jeff Schaller Aug 07 '19 at 15:23
  • I suspect your system is swapping a lot; can you run `vmstat 5` while your system is acting up? It's ok to start vmstat beforehand and post the rows that are printed during freezes. I'm looking specifically for the `si` and `so` columns, which indicate how much the system is actually swapping. Also, can you post the output of `top`, sorted by memory usage (shift-M)? (or whatever the equivalent htop mode is) – marcelm Aug 08 '19 at 08:19
  • Curiously, this was recently talked about on [HN](https://news.ycombinator.com/item?id=20620545). I recommend using swap to avoid it. I typically prepare 10 - 16 GB of swap. If your typical usage fills up swap like you've shown, perhaps it's time to increase the swap space. This is when using swap files is convenient. You'd just delete the old one and `fallocate -l 16G swap.img; mkswap swap.img` a new one. – JoL Aug 08 '19 at 15:29
  • When a file system is almost full it can become very slow. I'm not sure if this is likely the case with ext4. – Nobody Aug 08 '19 at 15:43
  • @JoL Thanks for your response. – user6039980 Aug 08 '19 at 16:06
  • @JoL "it's time to increase the swap space" - I'm not sure, but it has been recommended in two answers, by "sourcejedi" and "Mr. Donutz", to not increase it. What do you think? – user6039980 Aug 08 '19 at 16:08
  • @Kais Try both and see what works? My swap fills at maximum to half of what I prepared and it doesn't cause me any freezes. Whether one experiences slowdowns or not from swap use also depends on usage patterns I think, so our experiences may differ. – JoL Aug 08 '19 at 17:14
  • @JoL OK, I will give it a try. – user6039980 Aug 08 '19 at 18:35
  • Try disabling swap entirely - that will confirm or eliminate disk thrashing as the source of the problem. The point of swapping is to put unused pages on disk, but if most of the pages are really in use, then swapping won't help. If your typical workload requires 10GB of resident pages, then an 8GB machine will struggle. The answer to resource exhaustion is to either lower the workload or increase the resource (in this case, try chrome or add more physical memory). – bain Aug 09 '19 at 12:06
  • @marcelm I'm sorry for the delay, I was only able to get the results of `vmstat` after putting the system under very high resource usage. Can you check the question update? I've just mentioned the results there. Thank you. – user6039980 Aug 09 '19 at 13:52
  • As a person who frequently has chrome, ff, VSCode, and 10+ terminal sessions going at a time, one other problem I've frequently encountered is related to Intel HDA audio driver requesting a large chunk of memory (I like to listen to music while coding) which hard-locks my system requiring a reboot. – Jared Smith Aug 10 '19 at 13:56
  • Your last edit removed most of the useful, relevant information from the question. Since none of that information seems at all sensitive, I have rolled the edit back. Please let me know if you really want to remove it (and explain why because there's nothing private there that I can see at all so removing it makes the question far less useful). – terdon Sep 14 '21 at 14:08

8 Answers

30

> What can make Linux so unresponsive?

Overcommitting available RAM, which causes a large amount of swapping, can definitely do this. Remember that random access I/O on your mechanical HDD requires moving a read/write head, which can only do around 100 seeks per second.

It's usual for Linux to go totally out to lunch if you overcommit RAM "too much". I also have a spinny disk and 8GB of RAM. I have had problems with a couple of pieces of software with memory leaks, i.e. their memory usage kept growing over time and never shrank, so the only way to control it was to stop the software and restart it. Based on my experience with that, I am not very surprised to hear of delays over ten minutes if you are generating 3GB+ of swap.

You won't necessarily see this in every case where you use more than 3GB of swap; theory says the key concept is thrashing, i.e. whether your working set fits in RAM. But if you are trying to switch between two different working sets, and doing so requires swapping 3GB in and out, then at 100MB/s it will take at least 60 seconds (3GB out plus 3GB in) even if the I/O pattern can be perfectly optimized. In practice, the I/O pattern will be far from optimal.

After the difficulty I had with this, I reformatted my swap space to 2GB (several times smaller than before), so the system would not be able to swap as deeply. You can do this without even resizing the partition, because `mkswap` takes an optional size parameter.
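A sketch of that approach (the device name `/dev/sdXN` is a placeholder for your actual swap partition; this needs root, and `mkswap`'s optional size argument is counted in 1024-byte blocks):

```shell
# Shrink the usable swap area to 2 GiB without repartitioning.
sudo swapoff /dev/sdXN           # stop using the swap area first
sudo mkswap /dev/sdXN 2097152    # re-create it: 2097152 KiB = 2 GiB
sudo swapon /dev/sdXN            # re-enable it at the smaller size
```

Note that `mkswap` writes a new UUID, so if your `/etc/fstab` references the swap area by UUID, update that entry afterwards.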

The rough balance is between running out of memory and having processes get killed, and having the system hang for so long that you give up and reboot anyway. I don't know if a 4GB swap partition is too large; it might depend what you're doing. The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly.

Checking memory usage of multi-process applications is difficult. To see memory usage per process without double-counting shared memory, you can use `sudo atop -R`, press `M` and `m`, and look in the `PSIZE` column. You can also use `smem`. `smem -t -P firefox` will show the PSS of all your Firefox processes, followed by a line with the total PSS. This is the correct approach to measure the total memory usage of Firefox or Chrome-based browsers. (Though there are also browser-specific features for showing memory usage, which can show individual tabs.)

sourcejedi
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/97199/discussion-on-answer-by-sourcejedi-what-can-make-linux-unresponsive-for-minutes). – Jeff Schaller Aug 08 '19 at 13:44
  • Might be worth considering the use of `ulimit` to attempt to control the processes' usage (it's tricky with a multi-process application, but might be helpful). – Toby Speight Aug 09 '19 at 07:27
  • @TobySpeight if you want to limit app memory usage then you need to use cgroups. `ulimit` really doesn't help. – sourcejedi Aug 09 '19 at 10:59
  • Yes, that's likely a better choice. It's really worth mentioning in the answer, anyway. – Toby Speight Aug 09 '19 at 11:25
  • `The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly.` <-- or, if you use a GUI, make a crontab that runs a simple script (every minute or so) that checks how much free RAM you have left, warning you of it. I made my own for Linux Mint, and I learned quite a bit from it. It's something you can try and play around with. – Ismael Miguel Aug 09 '19 at 14:31
  • @iruvar I think you got the link wrong... – eis Aug 11 '19 at 10:33
  • What's your opinion on setting /proc/sys/vm/overcommit_memory to 2 as recommended [here](http://engineering.pivotal.io/post/virtual_memory_settings_in_linux_-_the_problem_with_overcommit/) ?. Thanks – iruvar Aug 12 '19 at 00:51
  • @eis, oops. Thanks fixed now – iruvar Aug 12 '19 at 00:52
6

> AFAIK, bloatware shouldn't make the OS unresponsive, so I wouldn't consider or even accept that the bloatware is the root cause of the problem.

You're not going to like this, but I think bloatware is your problem (although I'm not sure if it's memory or disk which is the problem). Unfortunately, the Linux kernel is awful at handling high memory pressure situations, and is known to basically require a reboot once memory is exhausted. There are three things which lead me to believe your issue is resource exhaustion:

  1. Your disk space on root (/) and DATA is almost full. I'm not sure what you use DATA for, but I've run into issues before with resizing my root partition too small and my system becoming inoperable.
  2. You have high memory pressure, meaning that your RAM is almost full. When RAM starts to get full, you will start to see (major) page faults: a page a process needs is no longer in RAM, so the kernel must fetch it back from the system's much slower swap space. This leads us to our last observation:
  3. Your swap space is almost full. There's clearly some high memory pressure on your system since both RAM and swap are almost full.

Basically, put these three together and your system doesn't have enough resources available to do much of anything. It's unfortunate how poorly Linux handles low-memory situations (compared to, say, the NT kernel in Windows), but that seems to be how it is. You can find more discussion in this Reddit thread and its linked mailing list.

As for how to fix your situation, I would say increasing your swap size is a good idea, but since you're low on disk space that will be a problem. Unless your Minecraft server has a ton of people, I think it would be safe to reduce its memory to something around 1024m (I personally use 1024m with about 10 people and it works fine). I would also use spigot or paper for your Minecraft server since they tend to be more performant.
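For reference, that memory cap is set via the JVM heap flags on the server's launch command; a sketch of a launch line (the jar name `server.jar` is a placeholder for whatever server jar you actually run):

```shell
# Cap the Minecraft server's Java heap at 1 GiB:
java -Xms1024m -Xmx1024m -jar server.jar nogui
```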

Good luck!

Chase
  • "This leads us to our last observation:" - could you further explain? – user6039980 Aug 07 '19 at 21:31
  • "As for it's unfortunate how poorly Linux handles low-memory situations (compared to, say, the NT kernel in Windows) but that seems to be how it is" - This is clearly the point, as I noticed that the problem is not that serious in the case of Windows OS. – user6039980 Aug 07 '19 at 21:36
  • "Unless your Minecraft server .. they tend to be more performant" - sorry I don't understand your analogy. – user6039980 Aug 07 '19 at 21:45
  • that's a curious definition of "page fault" that you're using there. –  Aug 07 '19 at 22:08
  • It's clearly memory that's the problem, not disk. It's true that Linux is bad under high memory pressure. But it's not true that a reboot is required. If you manage to free up some memory, Linux will become just as responsive as it was before the memory pressure exceeded available capacity. – Gilles 'SO- stop being evil' Aug 07 '19 at 23:38
  • @Kais I said "this leads us to our last observation" as a segue since I was talking about swap space and would continue to talk about it in point 3. About Minecraft, it looked like you were running a Minecraft server, and had allocated 3G of RAM to it. I was just saying that unless you have a ton of people playing on it at the same time, you might not need that much RAM. I said "they tend to be more performant" when talking about paper and spigot, which are alternative Minecraft servers which feature better performance over vanilla MC. – Chase Aug 08 '19 at 03:06
  • I've heard that generally using swap at all is a bad idea? At least in a server environment, where freezing for 12 minutes is not acceptable? –  Aug 08 '19 at 06:38
  • @Kais, in my experience, Windows is even worse with non-GUI programs, but it will suspend non-foreground GUI programs if memory pressure is high, which solves the problem for the desktop, under the assumption that desktop applications do not have background tasks. – Simon Richter Aug 08 '19 at 08:58
  • Vanilla Minecraft maybe; but large modpacks easily get to 3 GiB before a player even joins in :) – Luaan Aug 09 '19 at 06:54
  • @SimonRichter: Windows has the big advantage of being able to **discard** code pages. It locks the EXE file so you can't delete a running program, but that also means Windows can reload the code pages from the original EXE. Linux needs to copy those pages to swap. – MSalters Aug 09 '19 at 13:40
  • @MSalters: ? I'm quite sure code pages do not get copied to swap, and give ETXTBSY when you attempt to overwrite them while in use. Of course you can **unlink** a running program, it will only get deleted when the last user drops its reference. – ninjalj Aug 09 '19 at 18:44
  • @Gilles As somebody who has sometimes run into the freezing problem as well, I've removed swap completely and am now using ZRam instead (it is pretty good even under pressure). Another good piece of advice (for many PCs) is to enable all SysRq combinations and use Alt+SysRq+F (sometimes a few times) to kill processes with high memory usage. – val - disappointed in SE Aug 10 '19 at 09:16
4

What's the output of `free -m`? The amount of RAM you have is meaningless if we don't know how much of it you're using. I'm also interested to know how much swap space is being used.
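A quick way to gather those numbers (Linux-specific; the `awk` one-liner is just an illustrative convenience, reading `MemAvailable`, the kernel's estimate of memory available for new work, which `/proc/meminfo` reports in KiB):

```shell
# Overall RAM and swap usage, in MiB:
free -m

# Single-figure summary straight from /proc/meminfo:
awk '/^MemAvailable:/ {print int($2/1024) " MiB available"}' /proc/meminfo
```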

I do think you've answered your own question, though. Having "many tabs" open in your browser can definitely slow down your system if you never close them, as they'll continue to consume memory regardless; when your system freezes, how many do you have open at a time?

It also makes sense if your system is freezing up from other memory-intensive tasks such as "generating a very large graph from a very complex UML diagram". That will absolutely slow down your system as it generates the graph, so that's hardly a surprise.

It really sounds like this is the way your system is supposed to behave. Either that or I'm missing something here.

By the way, HDD stats don't matter when it comes to your system becoming unresponsive since a lack of memory is almost always the culprit.

Zach Sanchez
  • "That will absolutely slow down your system" - Yes, this is expected, but an uncontrollable X session (i.e. the result of a frozen system), where I cannot even see the mouse cursor moving, is not expected. – user6039980 Aug 07 '19 at 16:41
  • That actually would be expected, the behavior you're describing is exactly what happens when I use too much RAM on my system. I've even had my system clog up to the point that I couldn't switch to a text-based terminal, and I've got double the RAM you have. If you ever run into that type of situation where you can't use your X session you've gotta switch to the text-based terminal and kill the offending processes. If that fails, you'd have to do a hard reboot. Bout the best I can tell you. – Zach Sanchez Aug 07 '19 at 16:45
  • "If that failed, you'd have to do a hard reboot. Bout the best I can give you" - Were you required to do this on Windows or OSX? – user6039980 Aug 07 '19 at 16:47
  • I'm running RHEL 7 as my primary OS. Hard reboots are universal though, just hold down the power button or unplug your computer or something, unless there's something I'm missing here? – Zach Sanchez Aug 07 '19 at 16:53
  • Honestly I was never used to do this when using Windows before. When things become bloated, the window manager stays responsive. – user6039980 Aug 07 '19 at 17:09
  • @Kais macOS also becomes sluggish in low memory situations. There's really no way for the system to sensibly decide what memory it absolutely needs to keep in RAM, so switching between applications would swap in and out like crazy, to the point where the UI becomes unresponsive. – Kusalananda Aug 07 '19 at 17:32
  • ehh, it's not that there aren't much more effective ways to keep the "window manager" UI responsive. MS research wrote a whole experimental OS on a design which forbade demand-paging. Proof of concept: Run the "window manager" in Midori, emulate Linux apps including swap. There you go, the "window manager" will stay responsive even if apps are swapping. At minimum, it could let you reliably kill some apps to release memory. Linux isn't perfect. Gnome switching from X11 to Wayland even made it significantly worse w.r.t. responsiveness on overloaded systems. – sourcejedi Aug 07 '19 at 21:14
  • HDD stats can matter. One possible cause of unresponsiveness is a failing disk, which causes a huge I/O backlog. But I don't see any evidence of that happening in this case. – 200_success Aug 08 '19 at 03:38
  • @Kusalananda Which is why desktop Windows gives priority boosts to foreground applications, and the currently active application gets the highest. Of course, nothing helps you if a process gets assigned a realtime priority - a realtime process that takes up 100% CPU will still make the computer completely unusable. But of course, there's no reason why you'd have a realtime process in the first place :D – Luaan Aug 09 '19 at 06:57
  • @Luaan I have a feeling that would not help if switching between foreground applications implies having to swap in the memory for that application (and swap out the stuff that no longer fits, due to the needs of that particular foreground application). Switching between applications would therefore potentially result in a sluggish system no matter how the OS gave priority to them. – Kusalananda Aug 09 '19 at 07:05
  • @Kusalananda The main difference is that you have a lag when you switch between applications (as the newly focused application gets the priority boost, and evicts the other applications from RAM), but not much competition between applications while you maintain the focus. It's kind of like the difference between copying files byte-by-byte and copying entire buffers at a time - the former gets the first bytes there faster (less lag), the latter makes the whole operation faster (more lag, but smoother application after that). – Luaan Aug 09 '19 at 07:15
4

Your `htop` output shows that your demand for RAM is higher than its capacity (total RAM + swap). So the obvious first consideration is to reduce RAM usage or increase RAM availability.

Note that modern-day Firefox versions are extremely resource-hungry, due to the way that windows/tabs are given their own processes and memory space. The idea was to avoid a crashing tab bringing the whole browser to its knees. Is it worth the price? Who can tell... Anyway, I've had a similar problem due to the above, since my Pentium 4 mainboard only supports 2GB of RAM. To avoid possible memory-exhaustion crashes, I added ~800M of swap space on a spare SSD, obviously with the intention of using it as little as possible. I achieved that by changing a setting known as the swappiness, which determines how eager the kernel is to swap memory pages out. Some useful commands follow.

Check the current swappiness: `cat /proc/sys/vm/swappiness`

This may well give you a result around 60: a default tuned for general-purpose systems under moderate load. In your situation, that eagerness to swap is counterproductive, so you can lower the setting while the system is running with a command such as `sysctl vm.swappiness=1`.

To make the change permanent, edit the file `/etc/sysctl.conf`: change the existing value there, or add the line `vm.swappiness=1`.
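Putting those steps together (the writes need root; 1 is just the value used above, and other low values such as 10 are reasonable too):

```shell
# 1. Check the current value (often 60 by default):
cat /proc/sys/vm/swappiness

# 2. Lower it for the running system (needs root):
sudo sysctl vm.swappiness=1

# 3. Make it survive reboots:
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf
```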

Mind, this is not a full solution in your case, but it should make a usable workaround.

Credits: [How do I configure swappiness?](https://askubuntu.com/questions/103915/how-do-i-configure-swappiness), the source for the answer above, which includes further explanation. I found that post very helpful in my case.

Mr. Donutz
4

When I read the title my immediate thought was "not enough RAM", because I have experienced exactly this problem myself on Linux, 10+ minutes of frantic disk thrashing after opening too many browser tabs. I agree, it's dismal, and needs improvement. Windows handles this situation much better.

Some suggestions:

  • Add a memory monitor applet to your system tray so you can keep an eye on it.
  • In Firefox's preferences, set the "content process limit" to "1". As the text below the setting says: "Additional content processes can improve performance when using multiple tabs, but will also use more memory."
  • Remove or replace any memory-hungry browser addons. Keep your ad blocker, as ads eat more memory than any blocker.
  • Investigate and possibly remove any other memory-hungry programs.

However, the only true solution is to buy more RAM.

Not only will an abundance of RAM prevent this catastrophe from occurring, but it will allow the system to build up a large file cache in RAM, which your system currently can't ever do because it runs so close to the limit. A large file cache will take work away from the HDD and make almost every action on the system feel faster generally. It's worth it.

Boann
  • Great answer, thanks a lot. But regarding "Additional content processes can improve performance when using multiple tabs, but will also use more memory." - If I understand correctly, is Firefox able to open up to 8 processes per Tab, per default setting? – user6039980 Aug 10 '19 at 13:05
  • @Kais I think it's 1 process per tab. In any case, if you set the limit to 1, it will be 1 process total for all tabs, which should use less memory. – Boann Aug 10 '19 at 13:39
  • Understood, thanks again. – user6039980 Aug 10 '19 at 13:46
2

There has been some excellent discussion of how the problem is caused, continues and grows. I like to get ahead of problems like the one you're experiencing by throwing hardware at the initial computer's design, and/or upgrading an existing machine. Can you:

  • add RAM (32GB works great for many setups)

  • replace your hard disk drive with an SSD

  • add an SSD (Solid State Drive) for swap drive

  • create a swap partition in RAM (with 32 or more GB of RAM)

  • get a faster HDD

  • move to a system with faster processing and wider / faster bus architecture.

Some of these hardware upgrades/replacements can be had for well under US$100. These are not specific to Linux, nor to your exact software, but the hardware you are using does not seem adequate for your tasks.

Old Uncle Ho
  • Very useful answer, thanks for pointing out the hardware replacement recommendations. – user6039980 Aug 08 '19 at 12:41
  • I do hope it helps. I do not know what type of computer or the specific equipment, so these are generic steps, in order of most likely improvement. Any or all would help with your specific slowdowns, which are likely due to thrashing of cache and swap, and give faster and fewer disk reads/writes in general. – Old Uncle Ho Aug 08 '19 at 13:10
  • Most of those are good suggestions, but swapping to RAM is basically useless unless you're using [zram](https://en.wikipedia.org/wiki/Zram) or [zswap](https://en.wikipedia.org/wiki/Zswap) for compressed swapping to RAM - they're worthwhile, but swapping to an uncompressed ramdisk just creates exactly as much RAM pressure as it relieves (actually, very slightly more due to overhead). – cas Aug 08 '19 at 13:39
  • I'm not sure why anybody would swap to RAM, except when compressing which seems like a great idea on high-RAM/low-CPU workloads. – Peter - Reinstate Monica Aug 09 '19 at 12:28
  • @bain: how is it *ever* better to have pages swapped out to RAM vs. still mapped? They're still using just as many pages of physical RAM unless you use compression. *That's* where the value is. The only difference is more bookkeeping but maybe cleaner hardware page tables. For startup-only memory that processes basically neglect to unmap, e.g. functions / data that are only touched during startup, swap to *disk* is better because it doesn't consume any DRAM space. For background daemons that aren't used interactively, latency isn't important so again disk swap wins. – Peter Cordes Aug 10 '19 at 10:44
  • @PeterCordes Sorry, my mistake, I thought the original comment was about swapping to disk, not literally writing swapped pages *to* RAM, which of course makes no sense whatsoever. – bain Aug 12 '19 at 09:20
2

Usually it's "just" X11 that becomes unusable. To get a keystroke from your keyboard to a program, and have it show anything on screen, code in several different processes has to run. (X server to get the keystroke from the kernel, xterm or equivalent to get the event and decide to draw something, then send a message to the X server to draw a glyph from a font.)

Just waving your mouse around over a window with a web browser showing a page with a bunch of Javascript crap can result in a bunch of messages for a bunch of processes, all of which cause those processes to wake up and touch a bunch of data. Presumably including a bunch of "cached" uncompressed bitmaps. So this is highly likely to evict more things that are soon needed.

ctrl+alt+F2 to switch to another virtual console usually makes it possible to log in and run shell commands with only a couple of seconds' latency, even when something is causing swap thrashing. The text console is just bash plus the kernel; the Linux kernel isn't swappable, and it contains all the VT and keyboard<->TTY code.

It may take some time before this keypress does anything, if swap is really thrashing badly. The X server, not the kernel, has to receive that keystroke and switch away from its own VT after putting the video hardware back into the right mode. The kernel is still looking for Alt+SysRq key combos, but nothing else. But once you do get switched to a text console, there's usually at most 1 user-space process between typing and characters showing on screen.

If your X server actually locked up, then ctrl+alt+F2 may not work at all.


To avoid slowdown when you're not truly thrashing, reducing "swappiness" can help. E.g. I set the `/proc/sys/vm/swappiness` tunable to 6 on my desktop with 16GB of RAM and a 2GB swap partition on an NVMe SSD. You can read more about tuning for interactive latency (as opposed to server throughput); any guide will mention that tunable.

But if you have any swap at all, Linux will use it before invoking the OOM killer. Keep your swap partition small, just big enough for Linux to page out really stale crap that typically really doesn't get used for a long time. (e.g. memory leaks!)

I haven't had any problems with swap being full. Modern Linux deals with having limited swap space just fine. Chromium (which I use instead of firefox) does sometimes get slowish with dozens of Stack Overflow tabs open, but The Great Suspender is a nice addon for unloading tabs when you're not using them. I think that saves significant RAM for me, although it will only unload tabs where you haven't typed anything in a textbox. It might also be available for Firefox.


As others have suggested, 16GB of RAM is really nice for interactive use with Linux. DRAM prices are relatively low currently; after spiking about 1.5 years ago, they have mostly declined again.

Peter Cordes
  • Great answer, thanks a bunch. But regarding "a bunch of Javascript crap can result in a bunch of messages for a bunch of processes, all of which cause those processes to wake up and touch a bunch of data" - I'm wondering what are these processes, are they Firefox child processes? – user6039980 Aug 10 '19 at 12:57
  • @Kais: Window manager, web browser, X server, possibly various other X clients in a more complicated desktop. And any other processes whose windows your mouse waves over (which is what I was thinking when I wrote that sentence). e.g. in KDE, the taskbar is a separate process (`plasma`) from the `kwin` window manager. – Peter Cordes Aug 10 '19 at 13:08
  • I use LXDE, so in my case only the Openbox and the XOrg server are the processes which get to wake up? Also, what are the kind of messages that get passed to them? – user6039980 Aug 10 '19 at 13:27
  • @Kais: X11 protocol messages over a unix-domain socket. Try running `xev` sometime to see what kind of messages you can get from moving the mouse. Also try `strace xev` to see the system calls that involves for the client side. – Peter Cordes Aug 10 '19 at 13:33
  • I see, thanks. When running up the `xev` command, I've got messages only by switching to different windows and clicking on them, but it is not the case when I just move over the mouse. – user6039980 Aug 10 '19 at 13:43
  • @Kais: you don't get a flood of `MotionNotify event, serial 40, synthetic NO, window 0xc00001, ...` etc.? (Most GUI windows hopefully won't register for that event, though). xev also registers that little black box inside its white window to get enter/leave notifications, which real GUIs do use often to detect mouseovers. Also, I use mouse focus, so moving the mouse over windows generates focus messages. – Peter Cordes Aug 10 '19 at 13:49
  • @Kais: No idea but *probably* nothing wrong. I assume you can still play games like XKoules or run things like `xeyes` that need to know the current mouse position. If that works, then I wouldn't worry about `xev` not registering mouse motion over the xev window. – Peter Cordes Aug 10 '19 at 14:07
  • Alright. BTW, `xeyes` is funny :-D. Thanks for your help. – user6039980 Aug 10 '19 at 14:13
  • @zevzek: Yes, good point. My answer was assuming swap thrashing, in which case the keystroke will eventually work. I updated my answer; I didn't notice any actual false claim, but added a couple paragraphs to mention that to avoid a false impression / assumption. (If there is something specifically false I missed, please point it out.) – Peter Cordes Sep 14 '21 at 20:20
-2

> What can make Linux unresponsive for minutes when browsing certain websites?

You are not using Linux right, which becomes especially noticeable on a resource-limited machine. You don't need more RAM, nor a faster processor.

Background:

Almost every non-user program’s priority is 0.
Almost every user program’s priority is 20.

To ‘fix’ your issue:

Leave the non-user programs alone, but start changing the priorities (nice levels) of your user programs so they don't cause you issues. Edit whatever launches your programs to include nice levels, working from the programs that are usually not a problem up to the worst offender.

Real World Examples:

```
KMail:          nice -n 1 kmail -caption "%c" %i %m
LibreOffice:    nice -n 2 libreoffice --writer %U
Firefox:        nice -n 3 firefox %u
WorstOffender:  nice -n 9 {i'm a bad program}
```

Your WorstOffender will still become unresponsive for minutes; that's literally a "go buy a better box" problem. But it now won't be causing your entire OS (Linux), and everything else you have running, to also become unresponsive.
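You can check what niceness a program actually ends up running at; note that an unprivileged user can only raise niceness, not lower it. A quick sanity check, assuming a procps-style `ps`:

```shell
# Start a shell at niceness 3 and have it report its own NI value
# (typically prints 3, assuming the parent shell runs at niceness 0):
nice -n 3 sh -c 'ps -o ni= -p $$'
```

For a process that is already running, `renice -n 3 -p <pid>` changes its niceness after the fact.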

Michael
  • I've worked for decades with Linux both with lots of servers and on my own workstation (often in very limited VM setups), and have _not once_ had to fix a RAM-related performance issue with `nice -n`. "You don't need more RAM" - he certainly needs more RAM; or could use `ulimit` to hard limit the worst offenders so his existing RAM is sufficient again. "You are not using Linux right." is completely off. – AnoE Aug 09 '19 at 11:33
  • And I’ve worked with Linux GUI installs on resource limited hardware for the last 22 years and “nice”ing works to solve keeping “Linux unresponsive for minutes.” – Michael Aug 11 '19 at 13:46
  • I don't deny that it works; I'm just saying that putting it forth as _the_ "right" way to work with Linux may not be the best thing. – AnoE Aug 13 '19 at 11:40