
When I'm running a task on a computer network, I've just started to realize that if I qsub a task, it won't hog my terminal, and I can do other things in the same terminal (which is quite useful even if the task only takes a single minute to finish).

And then I run qstat to see which tasks have finished and which ones haven't.

http://pubs.opengroup.org/onlinepubs/009604599/utilities/qsub.html is a good explanation of qsub.
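In practice this only takes a couple of commands. A minimal sketch (the script name, its contents, and the sort job are purely illustrative):

```shell
# Write a tiny job script (filename and contents are just an illustration)
cat > job.sh <<'EOF'
#!/bin/bash
# the actual work goes here
sort bigfile.txt > bigfile.sorted
EOF

# Then, on a machine with a batch system:
#   qsub job.sh    # returns immediately with a job id; the terminal stays free
#   qstat          # later: see which jobs are queued, running, or done
```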

Jeff Schaller
InquilineKea

3 Answers


In these cases I'd rather open another terminal. What is the reason that you don't want to do that?

A downside of running qsub is that you have to write a tiny script file even for a trivial operation, which costs you some time. I don't know how many other users are working on the same network, but qsub is meant as a scheduler for jobs from several users on a cluster. Especially if no free cores are available, your simple job will end up in the queue, costing you even more time.

Did you consider screen as an alternative? With screen you can start and detach a separate session in the same terminal. The workflow would look like this:

  • working in the terminal
  • $ screen
  • your tiny jobs
  • Detach screen (Ctrl-a Ctrl-d)
  • working in the terminal
  • $ screen -r (to resume)
  • Check status of this tiny job
  • $ exit
  • And you're back
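The same workflow can also be driven non-interactively, which is handy for scripting. A sketch (the session name "tinyjob" and the sleep command are arbitrary stand-ins for a real job):

```shell
# Skip gracefully on machines without screen installed
command -v screen >/dev/null || exit 0

screen -dmS tinyjob sleep 60       # start the job in a detached session
screen -ls | grep tinyjob          # the session shows up as (Detached)
# screen -r tinyjob                # reattach to check on it interactively
screen -S tinyjob -X quit          # kill the session once you're done
```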
Bernhard
  • I have a query in this regard. Usually on the server, you log in and land on a login node. When using screen to run jobs, let's say the job requires 15–20G of memory; on the login node this is difficult. Is there a way I can assign a memory limit to screen sessions? – user3138373 Apr 04 '20 at 07:36
  • @user3138373 I recommend asking this as a new question (if it is not already asked) – Bernhard Apr 04 '20 at 08:43

I don't see any advantage of using qsub over the standard at. The at command will take a "script" and execute it at a specific time (like "now"), using your current environment. Then you can check the status with atq or remove the job with atrm.

$ nohup ./myscript myargs & # put script in the background
# almost the same as
$ echo ./myscript myargs | at now # computer runs script independent of terminals

You do need to make sure that your myscript will not be looking for input.
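One common way to guard against that is to redirect stdin from /dev/null, so a background read gets EOF instead of hanging. A self-contained sketch (the myscript stub here just stands in for your real command):

```shell
# Demo: a script that tries to read input (stands in for your real "myscript")
cat > myscript <<'EOF'
#!/bin/sh
if read line; then echo "got input: $line"; else echo "no input available"; fi
EOF
chmod +x myscript

# Redirect stdin from /dev/null so the background job can't block on input
nohup ./myscript < /dev/null > myscript.log 2>&1 &
wait
cat myscript.log   # prints "no input available"
```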

Myself, I use screen in a single terminal session everywhere I go, as Bernhard suggests. Open a new window (within screen), start the script, switch back to original screen window.

Arcege

I don't see any disadvantage to using qsub for jobs that I would typically run interactively inside screen. If you have a cluster available, this is an optimal solution.

Although we have a job scheduler available, for a long time I tended to use screen to run long jobs in the background or to achieve parallelism without the overhead of writing qsub scripts. Eventually the limitations of this approach became apparent, and I wrote this qsub wrapper, qgo, to let me replace & and screen with qsub:

#!/bin/bash
# Submit stdin as a qsub job, collecting output files in ./qsubscripts

mkdir -p qsubscripts

qsub -w "$(pwd)/qsubscripts" -d "$(pwd)" -M [email protected] "$@" -S /bin/zsh -

Note that I'm using my preferred shell (zsh), but of course you can remove that argument or add others. The use of "$@" allows for the inclusion of resource specifications like -l ncpus=4 as needed. Here's how you'd use the script:

echo 'command -a 23 -b zz' | qgo | tee jobids

The STDERR.* and STDOUT.* files will be written to qsubscripts in the current working directory. The job ids are provided on standard output (captured above in jobids by tee). The working directory for the job is set to the cwd as well, which makes it easier to write these short scripts.