
I have a Java application that runs on a Linux server with 12 GB of physical memory (RAM). Over a period of time, normal utilization looks like this:

sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G        7.8G        1.6G        9.0M        2.2G        3.5G
Swap:            0B          0B          0B

Recently, after increasing the load on the application, I can see that RAM utilization is almost full and available memory is very low. I notice some slowness, but the application continues to work fine.

sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G         11G        134M         17M        411M        240M
Swap:            0B          0B          0B
sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G         11G        145M         25M        373M        204M
Swap:            0B          0B          0B

I referred to https://www.linuxatemyram.com/, which suggests the following:

Warning signs of a genuine low memory situation that you may want to look into:

  • available memory (or "free + buffers/cache") is close to zero
  • swap used increases or fluctuates.
  • dmesg | grep oom-killer shows the OutOfMemory-killer at work
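The three warning signs above can be checked in one pass from the command line; a minimal sketch (the `awk` patterns parse standard Linux `/proc/meminfo` fields, and the `dmesg` step may need root):

```shell
#!/bin/sh
# Check the three low-memory warning signs in one pass.

# 1. Available memory close to zero (kB, from /proc/meminfo)
awk '/^MemAvailable:/ {print "MemAvailable: " $2 " kB"}' /proc/meminfo

# 2. Swap usage (compare SwapTotal vs SwapFree over repeated runs)
awk '/^SwapTotal:|^SwapFree:/ {print $1, $2, "kB"}' /proc/meminfo

# 3. OOM-killer activity in the kernel log (may require root)
dmesg 2>/dev/null | grep -i 'oom-killer\|Out of memory' \
    || echo "no OOM events found (or dmesg not permitted)"
```

Running this repeatedly under load shows whether `MemAvailable` is trending toward zero rather than just momentarily low.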

From the above points, I don't see any OOM issue at the application level, and swap is disabled, so I'm ruling out those two. The point that troubles me is that available memory is close to zero, and I need clarification on it.

Questions:

If available memory is close to 0, will it end up in a system crash?

Does it mean I need to upgrade the RAM when available memory runs low?

On what basis should RAM be allocated/increased?

Are there any official recommendations/guidelines to follow for RAM allocation?

  • That's somewhat opinion-based. Use programs like `top` or `htop` to see what's hogging RAM, and look into swap partitions. – polemon Sep 27 '21 at 14:35
  • @polemon I don't see any process hogging the RAM in my case, and there is no data in the swap partition. Instead, I see a gradual increase in RAM and process utilization due to the increased load. – ragul rangarajan Sep 28 '21 at 08:20

2 Answers


If available memory is close to 0, will it end up in a system crash?

Modern, widely used operating systems can handle the situation, so no, the system won't normally crash, though it may become so slow that you won't be able to actually use it.

Does it mean I need to upgrade the RAM when available memory runs low?

Your available RAM is indeed too low and it may result in processes being evicted from RAM. You should probably add more RAM.

On what basis should RAM be allocated/increased?

Increase RAM if your performance suffers (e.g. you're observing disk thrashing because of excessive swap usage) or you cannot complete your work because you don't have enough RAM for your tasks.

Are there any official recommendations/guidelines to follow for RAM allocation?

None that I'm aware of. Everyone's RAM requirements are different. As of 2021, with modern operating systems, desktop environments, and web browsers, 4 GB of RAM is the absolute minimum, though I wouldn't recommend less than 8 GB. In the end it comes down to your workload, which only you know. In certain fields (video and image editing, 3D rendering, math/chemical/physical/engineering calculations, and AI), people already have workstations with 128 GB of RAM.
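Since there is no official sizing formula, one practical approach is to log available memory under peak load and size RAM from the observed floor. A sketch, where the 60-second interval, sample count, and the 10% headroom threshold are all illustrative choices rather than official guidance:

```shell
#!/bin/sh
# Sketch: sample MemAvailable periodically so RAM can be sized from
# observed peaks. Interval, sample count, and the 10% threshold are
# illustrative, not official guidance.
LOGFILE=${LOGFILE:-/tmp/memavail.log}
for i in 1 2 3 4 5; do
    avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    printf '%s %s\n' "$(date +%s)" "$avail_kb" >> "$LOGFILE"
    # Flag samples where available memory drops under ~10% of total RAM
    if [ "$avail_kb" -lt $((total_kb / 10)) ]; then
        echo "low headroom: MemAvailable=${avail_kb} kB" >&2
    fi
    sleep 60
done
```

If the logged floor during peak load stays well above your chosen headroom, the current RAM is probably adequate; if it repeatedly dips near zero, that is the signal to add more.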

Artem S. Tashkinov

If available memory is close to 0, will it end up in a system crash?

Testing on one of my servers, I loaded the memory almost to capacity:

sys> free -h
              total        used        free      shared  buff/cache   available
Mem:            11G         11G        135M         25M        187M         45M
Swap:            0B          0B          0B

I could see that only my application (which consumed the most memory) was killed by the Out-of-Memory killer, which can be seen in the kernel logs:

dmesg -e

[355623.918401] [21805] 553000 21805 69 21 2 0 0 rm
[355623.921381] Out of memory: Kill process 11465 (java) score 205 or sacrifice child
[355623.925379] Killed process 11465 (java), UID 553000, total-vm:6372028kB, anon-rss:2485580kB, file-rss:0kB, shmem-rss:0kB

https://www.kernel.org/doc/gorman/html/understand/understand016.html

The Out Of Memory Killer or OOM Killer is a process that the linux kernel employs when the system is critically low on memory. This situation occurs because the linux kernel has over allocated memory to its processes. ... This means that the running processes require more memory than is physically available.
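The kernel chooses its victim by a per-process badness score, exposed as `/proc/<pid>/oom_score` on standard Linux. A sketch for listing which processes the OOM killer would target first:

```shell
#!/bin/sh
# List the processes the OOM killer would pick first (highest oom_score).
# /proc/<pid>/oom_score and /proc/<pid>/comm are standard Linux files.
for pid in /proc/[0-9]*; do
    score=$(cat "$pid/oom_score" 2>/dev/null) || continue
    comm=$(cat "$pid/comm" 2>/dev/null)
    echo "$score ${pid#/proc/} $comm"
done | sort -rn | head -5
```

A memory-hungry JVM typically sits near the top of this list, which matches the behavior above: the Java process was sacrificed while the rest of the system kept running.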

  • This looks reasonable, but it may help to add a sentence or two about the meaning of the buffer/cache column. – quazgar Sep 28 '21 at 08:38
  • For more details on what buffer/cache means, please refer to the manual of free: https://linuxize.com/post/free-command-in-linux/ – ragul rangarajan Sep 28 '21 at 12:12