
Warning: DO NOT attempt the commands listed in this question without knowing their implications.

Sorry if this is a duplicate. I am surprised to learn that a command as simple as

echo $(yes)

freezes my computer (more precisely, it lags the computer so badly that it appears frozen). Typing Ctrl+C or Ctrl+Z right after typing this command does not seem to help me recover from this mistyped command.

On the other hand

ls /*/../*/../*/../*/../*/

is a well-known vulnerability that also lags the computer badly at best and crashes it at worst.

Note that these commands are quite different from the well-known fork bombs.

My question is: Is there a way to interrupt such commands which build up huge amount of shell command line options immediately after I start to execute them in the shell?

My understanding is that since shell expansion is done before the command is executed, the usual way to interrupt a command does not work because the command is not even running when the lag happens, but I also want to confirm that my understanding is correct, and I am extremely interested to learn any way to cancel the shell expansion before it uses too much memory.
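To make that ordering visible, here is a harmless sketch. The `bounded_yes` function is hypothetical, just a tame stand-in for `yes`: the command substitution must run to completion before `echo` receives any arguments.

```shell
# bounded_yes stands in for `yes`: it emits three lines, then stops.
bounded_yes() { for i in 1 2 3; do echo y; sleep 1; done; }

# Nothing is printed for ~3 seconds: the substitution finishes first,
# and only then does echo run with the collected words as arguments.
echo $(bounded_yes)
```

With the real `yes`, the substitution never finishes, so `echo` never starts and there is no command for Ctrl+C to interrupt in the usual sense.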

I am not looking for how the kernel works at low memory. I am also not looking for SysRq overkills that may be helpful when the system already lags terribly. Nor am I looking for preventative approaches like imposing a ulimit on memory. I am looking for a way that can effectively cancel a huge shell expansion process from within the shell itself before it lags the system. I don't know whether it is possible. If it is impossible as commented, please also leave an answer indicating that, preferably with explanations.

I have chosen not to include any system-specific information in the original question because I want a general answer, but in case it matters, here is the information about my system: Ubuntu 16.04.4 LTS with gnome-terminal and bash 4.3.48(1), running on an x86_64 system. No virtual machines involved.

Weijun Zhou
  • FWIW, I can cancel the globbing command without any difficulty on bash 4.4.18(1)-release (Ubuntu 16.04). – muru Mar 13 '18 at 03:56
  • Did you use any terminal simulator? – Weijun Zhou Mar 13 '18 at 03:57
  • If you mean a VM, yes, I'm running Ubuntu in Virtual Box on an MBP. But then, I have no problems cancelling the command on bash 4.4.19(1)-release on macOS on the same MBP. – muru Mar 13 '18 at 03:59
  • Possible duplicate of [System hanging when it runs out of memory](https://unix.stackexchange.com/questions/28175/system-hanging-when-it-runs-out-of-memory) – muru Mar 13 '18 at 04:19
  • You could set your shell process's memory ulimit to a suitable value. – Toby Speight Mar 13 '18 at 09:01
  • Thank you. Why not turn it into an answer? Impossible is an answer too. – Weijun Zhou Mar 13 '18 at 16:18
  • @muru I don't think that's a dupe. None of the answers there address the main question which is how this process can be stopped from within the shell that launched it. The SysRq-F suggestion isn't targeting the process and the other answers only discuss the cause, but offer no solution. – terdon Mar 14 '18 at 11:14
  • I can cancel the `ls` command easily immediately after launching, but if I let it run a few seconds, it takes multiple Ctrl+C to eventually kill it. @muru can you also cancel the `echo $(yes)` on your system? – terdon Mar 14 '18 at 11:17
  • @terdon not with `echo $(yes)`. And here, unlike with the globbing, there's simply high speed consumption of memory, and little you can do. What little there is is in that thread - summon the OOM killer, or set limits beforehand. – muru Mar 14 '18 at 11:25
  • @terdon Just tested again. The behavior for globbing is as you said. It can be cancelled right away, but if I let it run a few seconds I need multiple Ctrl+C. On the other hand, I cannot cancel `echo $(yes)`, which means there is some difference between these two that I am interested to know about. – Weijun Zhou Mar 14 '18 at 19:48
  • @WeijunZhou the difference is most likely that the `$(yes)` command runs in a subshell and, I assume, doesn't return control to the parent shell until the command has finished. Since `yes` never finishes, control isn't returned. Not sure enough to post this as an answer though. – terdon Mar 14 '18 at 22:02
  • Running on a mac, I listed available terminal control sequences with `stty -a`, then was able to use `^t` to get the process identifier and kill it in another shell. I was unable to rescue the shell in either case. – hhoke1 Mar 28 '18 at 21:34
  • Thank you for your information. There was not much info about Macs before you posted this. – Weijun Zhou Mar 28 '18 at 21:36
  • You could try killing the bash process that's running the `echo` command. Although, as @terdon pointed out, that may not work as expected due to the subshell process. – ILMostro_7 Mar 30 '18 at 04:39
  • Thank you. If I haven't set a ulimit beforehand it seems that I have to respond really quickly to find out the PID and then kill the bash process before it lags my computer (it starts lagging within 5 secs for my box) at least for the 1st command. – Weijun Zhou Mar 30 '18 at 04:52
  • I think `timeout` may solve your problem if you know how long you expect it to run. This will at least stop the computer from freezing. – Devidas Apr 02 '18 at 12:19
  • Thank you for your information. I have tried it before and I think I may have better luck with the memory limitation using the timeout command. Anyway I am currently looking for a way to interrupt it from inside the shell, or a well-formed answer explaining why it is not possible. I think most of the comments so far, including yours, are helpful towards a final answer. I just need someone to combine them to an answer and I think I am not good enough at this topic to put all the pieces together. – Weijun Zhou Apr 02 '18 at 14:36
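Building on the `timeout` suggestion in the comments, here is a minimal sketch of how a wall-clock deadline can reap the whole expansion. It uses `sleep 10` as a harmless stand-in for the runaway command substitution:

```shell
# timeout sends SIGTERM to the child bash after 2 seconds; the
# non-interactive bash dies, taking the pending substitution with it.
timeout 2 bash -c 'echo $(sleep 10)'
echo "exit status: $?"   # GNU timeout exits with 124 when the deadline fires
```

This is preventative rather than a way to cancel an expansion already in flight, but it bounds the damage of a command you suspect may run away.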

1 Answer


With GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu), not running in a VM:

echo $(yes)

exits the shell and does not freeze the system, and:

ls /*/../*/../*/../*/../*/

returns

bash: /bin/ls: Argument list too long
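That error is the kernel refusing the `execve()` call because the expanded argument list exceeds its limit; on POSIX systems you can query that limit:

```shell
# ARG_MAX is the maximum combined size (in bytes) of argv plus the
# environment that execve() will accept; often 2097152 on Linux.
getconf ARG_MAX
```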

But as a rule, when you are dealing with something that could consume all the resources of a system, it is better to set limits before running it. If you know that a process could be a CPU hog, you can start it with cpulimit or renice it.

If you want to limit the processes that are already started, you will have to do it one by one by PID, but you can have a batch script to do that like the one below:

#!/bin/bash
LIMIT_PIDS=$(pgrep tesseract)   # PIDs to limit; replace tesseract with your process name
echo $LIMIT_PIDS
for i in $LIMIT_PIDS
do
    cpulimit -p "$i" -l 10 -z &   # limit each process to 10% CPU
done

In my case pypdfocr launches the greedy tesseract.

Also, in some cases where your CPU is pretty good, you can just use renice like this:

watch -n5 'pidof tesseract | xargs -L1 sudo renice +19'
Eduard Florinescu