104

I have a program which produces useful information on stdout but also reads from stdin. I want to redirect its standard output to a file without providing anything on standard input. So far, so good: I can do:

program > output

and don't do anything in the tty.

However, the problem is I want to do this in the background. If I do:

program > output &

the program will get suspended ("suspended (tty input)").

If I do:

program < /dev/null > output &

the program terminates immediately because it reaches EOF.

It seems that what I need is to pipe into program something which does not do anything for an indefinite amount of time and does not read stdin. The following approaches work:

while true; do sleep 100; done | program > output &
mkfifo fifo && cat fifo | program > output &
tail -f /dev/null | program > output &

However, this is all very ugly. There has to be an elegant way, using standard Unix utilities, to "do nothing, indefinitely" (to paraphrase man true). How could I achieve this? (My main criteria for elegance here: no temporary files; no busy-waiting or periodic wakeups; no exotic utilities; as short as possible.)

a3nm
  • Try `su -c 'program | output &' user`. I am about to ask a similar question with creating background jobs as an acceptable method for handling a "service/daemon." I also noticed that I could not redirect `STDERR` without also redirecting `STDOUT`. The solution where programA sends `STDOUT` to `STDIN` of programB, then redirects `STDERR` to a log file: `programA 2> /var/log/programA.log | programB 2> /var/log/programB.log 1> /dev/null` – brandeded Jul 12 '12 at 19:12
  • maybe... `su -c 'while true; do true; done | cat > ~/output &' user`? – brandeded Jul 12 '12 at 19:39
  • what kind of program is that? – João Portela Jul 13 '12 at 09:38
  • João Portela: This is a program I wrote, https://gitorious.org/irctk – a3nm Jul 13 '12 at 12:37
  • Why not simply add a switch to that program you wrote? Also, I assume that if you close stdin with `1<&-` it will exit your program? – w00t Jul 19 '12 at 19:10
  • w00t: Did you mean "<&-"? Indeed, it makes the program terminate. Adding a switch is not a very elegant solution, because it feels like there should be a sensible way to reach the desired behavior by giving the adequate input to the program. – a3nm Jul 20 '12 at 12:17

10 Answers

99

I don't think you're going to get any more elegant than the

tail -f /dev/null

that you already suggested (assuming tail uses inotify internally, there should be no polling or wakeups; so, other than looking odd, it should be sufficient).

You need a utility that will run indefinitely, keep its stdout open, but not actually write anything to stdout, and not exit when its stdin is closed. Something like yes actually writes to stdout. cat will exit when its stdin is closed (or when whatever you redirect into it is done). I think sleep 1000000000d might work, but the tail is clearly better. My Debian box has a tailf that shortens the command slightly.

Taking a different tack, how about running the program under screen?

P.T.
  • I like the `tail -f /dev/null` approach the best and find it elegant enough as well, since the command usage matches the intended purpose quite closely. – jw013 Jul 12 '12 at 19:40
  • From `strace tail -f /dev/null` it seems that `tail` uses `inotify` and that wakeups occur in silly cases like `sudo touch /dev/null`. It's sad that there seems to be no better solution... I wonder which would be the right syscall to use to implement a better solution. – a3nm Jul 12 '12 at 19:59
  • @a3nm The syscall would be `pause`, but it isn't exposed directly to a shell interface. – Gilles 'SO- stop being evil' Jul 12 '12 at 22:25
  • P.T.: I know about `screen`, but this is to run multiple occurrences of the program from a shell script for testing purposes, so using `screen` is a bit overkill. – a3nm Jul 20 '12 at 12:19
  • I don't know why this answer is so popular. It is not as good as Waelj's by any measure except being a few chars shorter. It's more resource intensive and it's not clear how it works in the absence of inotify, which is non-standard. – sillyMunky Jul 22 '12 at 17:42
  • @sillyMunky Silly Monkey, WaelJ's answer is wrong (sends infinite zeros to stdin). – P.T. Jul 22 '12 at 18:07
  • Warning: I tried putting this into a shell script on FreeBSD. Sending SIGINT to the script fails to interrupt since `tail` keeps running, and sending SIGTERM (or SIGKILL) terminates (or kills) the shell script but `tail` keeps running in the background. – Adam Mackler Oct 11 '15 at 15:40
  • https://stackoverflow.com/questions/2935183/bash-infinite-sleep-infinite-blocking/41655546#41655546 is worth a read for why to prefer sleep loop over tailing /dev/null – jdf Sep 06 '18 at 19:43
64

sleep infinity is the clearest solution I know of.

You can use infinity because sleep accepts a floating point number*, which may be decimal, hexadecimal, infinity, or NaN, according to man strtod.

* This isn't part of the POSIX standard, so it isn't as portable as tail -f /dev/null. However, it is supported in GNU coreutils (Linux) and in BSD sleep (used on Mac), though apparently not on newer versions of Mac (see comments).
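Where `sleep infinity` isn't available, one portable fallback is a wrapper that chains maximal 32-bit sleeps (a sketch assuming only a POSIX `sleep`; the name `sleep_forever` is just illustrative):

```shell
# Portable "sleep forever": each iteration sleeps ~68 years,
# so the loop body effectively never wakes up.
sleep_forever() { while :; do sleep 2147483647; done; }
```

It can then be used like the other approaches in this thread: `sleep_forever | program > output &`.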

Zaz
  • Haha, that's really a nice approach. :) – a3nm May 07 '14 at 12:16
  • @a3nm: Thanks : ) Seems `sleep infinity` also [works on BSD and Mac](http://www.freebsd.org/cgi/man.cgi?sleep(1)). – Zaz Jul 08 '14 at 20:50
  • What kind of resources does a infinitely sleeping process take? Just RAM? – CMCDragonkai May 29 '15 at 06:37
  • @CMCDragonkai: Yes, and a negligible amount of CPU. I don't know much about how kernels deal with processes, but certain operations may take longer, e.g. counting the number of current processes. There are very few circumstances where this would actually affect you, though. – Zaz May 30 '15 at 23:37
  • `sleep infinity` doesn't work for me on Mac OS X 10.9. It just returns after a few microseconds. – Quinn Comendant Aug 09 '15 at 00:05
  • @QuinnComendant: Really? Does it give you an error message? Does it behave differently to, e.g. `sleep some-random-text`? – Zaz Aug 09 '15 at 19:03
  • `sleep infinity` and `sleep asdfasdf` both return immediately, with an exit code of 0, and no message. – Quinn Comendant Aug 10 '15 at 00:51
  • Hmm, odd. I'm not sure why that's happening. – Zaz Aug 11 '15 at 18:09
  • any ideas about why sleep infinity returns? I'm on OS X 10.10.5 and have the same thing happening, just using sleep SUPER_BIG# for now. I liked the elegance of the infinity. – yvanscher Dec 16 '15 at 17:38
  • @yvanscher: I'm afraid I don't have a Mac to test it, but I guess Macs don't have true IEEE floating point support for console commands. Does `sleep 1e99` work for you? (that's 10^91 years) – Zaz Dec 17 '15 at 02:25
  • @Zaz: I did something of the like. I can rest easy knowing my script won't exit until the end of the life of this universe. Thanks for your help. – yvanscher Dec 17 '15 at 15:05
  • Doesn't work on FreeBSD 11 either nor with the sleep builtin of mksh. Works on Solaris 11 or with the ksh93 sleep builtin. – Stéphane Chazelas Oct 05 '16 at 20:58
  • [This answer](https://stackoverflow.com/a/41655546/263061) claims that `sleep infinity` waits for 24 days at max; who's right? – nh2 Dec 11 '17 at 00:32
  • @nh2: Excellent comment! Try `strace sleep infinity` or `strace sleep 9999999999`, you'll see that the last system call is `nanosleep({2073600, 999999999}`, so the **sleep** utility is **limited to 24 days** = 2073600 seconds, no matter what you do. – Zaz Dec 11 '17 at 01:39
  • @Zaz I have investigated the issue in detail now. It turns out that you were initially correct! The `sleep` utility is **not limited to 24 days**; it is just the _first_ syscall that sleeps for 24 days, and afterwards it will do more such syscalls. See my comment here: https://stackoverflow.com/questions/2935183/bash-infinite-sleep-infinite-blocking/41655546#comment82451583_41655546 – nh2 Dec 15 '17 at 20:54
22

sleep 2147483647 | program > output &

Yes, 2^31-1 is a finite number, and it won't run forever, but I'll give you $1000 when the sleep finally times out. (Hint: one of us will be dead by then.)
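For scale, a quick shell arithmetic check (integer division, ignoring leap years) confirms that 2^31-1 seconds is roughly 68 years:

```shell
# 2^31-1 seconds expressed in years
echo $(( 2147483647 / 60 / 60 / 24 / 365 ))   # → 68
```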

  • no temporary files; check.
  • no busy-waiting or periodic wakeups; check.
  • no exotic utilities; check.
  • as short as possible. Okay, it could be shorter.
Rob
17

In shells that support them (ksh, zsh, bash 4+), you can start program as a co-process.

  • ksh: program > output |&
  • zsh, bash: coproc program > output

That starts program in background with its input redirected from a pipe. The other end of the pipe is open to the shell.

Three benefits of this approach:

  • no extra process
  • you can exit the script when program dies (use wait to wait for it)
  • program will terminate (get EOF on its stdin) if the shell exits
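A minimal bash (4+) sketch of the co-process mechanics, using `cat` as a stand-in for program and a throwaway temp file as the output path (both are just illustrative):

```shell
#!/usr/bin/env bash
# Start a co-process; its stdin is a pipe whose write end the shell holds open.
out=$(mktemp)
coproc worker { cat > "$out"; }

echo hello >&"${worker[1]}"   # optionally feed the co-process some input
eval "exec ${worker[1]}>&-"   # closing our end delivers EOF, so `cat` exits
wait "$worker_PID"

cat "$out"                    # → hello
rm -f "$out"
```

As long as the shell keeps its end of the pipe open, the co-process never sees EOF on stdin, which is exactly the behaviour the question asks for.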
Stéphane Chazelas
  • That seems to work and looks like a great idea! (To be fair, I had asked for something to pipe to my command, not for a shell feature, but this was just the XY problem at play.) I'm considering accepting this answer instead of @P.T.'s one. – a3nm Aug 14 '15 at 21:26
  • @a3nm, `tail -f /dev/null` is not ideal as it does a read every second on `/dev/null` (current versions of GNU tail on Linux using inotify there is actually [a bug](http://debbugs.gnu.org/cgi/bugreport.cgi?bug=21265)). `sleep inf` or its more portable equivalent `sleep 2147483647` are better approaches for a command that sits there doing nothing IMO (note that `sleep` is built in a few shells like `ksh93` or `mksh`). – Stéphane Chazelas Aug 15 '15 at 21:03
  • `coproc` doesn't seem to be part of Mac OS X, while `tail -f /dev/null` works on Linux and Mac OS X. My version is `GNU bash, version 3.2.57(1)-release (arm64-apple-darwin20)`. Obviously S.O. has an answer for this too: https://stackoverflow.com/questions/40181521/how-to-use-coproc-on-mac-os-x-11 – Marcello Romani Feb 28 '21 at 22:27
  • Is an "elegant approach" that isn't available to most user shells, by installed base, actually elegant? Scenarios that require blocking execution, like synchronizing between concurrent/parallel threads of execution, should prioritize portability over anything else - this answer is going to fubar junior/inexperienced operators – christian elsee Mar 08 '22 at 21:44
11

You can create a binary that does just that with:

$ printf '#include <unistd.h>\nint main(){ pause(); }\n' > pause.c; make pause
Petr Skocik
8

Here's another suggestion using standard Unix utilities, to "do nothing, indefinitely".

sh -c 'kill -STOP $$' | program > output

This fires up a shell that is immediately sent SIGSTOP, which suspends the process. This is used as "input" to your program. The complement of SIGSTOP is SIGCONT, i.e. if you know the shell has PID 12345 you can kill -CONT 12345 to make it continue.

roaima
3

On Linux, you can do:

read x < /dev/fd/1 | program > output

On Linux, opening /dev/fd/x, where x is a file descriptor onto the writing end of a pipe, gets you the reading end of the pipe, which here is the same as the stdin of program. So basically, read will never return, because the only process that could write to that pipe is read itself, and read doesn't output anything.

It will also work on FreeBSD or Solaris, but for another reason. There, opening /dev/fd/1 gets you the very same resource as is open on fd 1 (as you'd expect, and as most systems except Linux do), i.e. the writing end of the pipe. However, on FreeBSD and Solaris, pipes are bidirectional. So as long as program doesn't write to its own stdin (no application does), read will get nothing to read from that direction of the pipe.

On systems where pipes are not bidirectional, read will probably fail with an error when attempting to read from a write-only file descriptor. Also note that not all systems have /dev/fd/x.

Stéphane Chazelas
  • Very nice! In fact in my tests you don't need the `x` with bash; further with zsh you can just do `read` and it works (though I don't understand why!). Is this trick Linux-specific, or does it work on all *nix systems? – a3nm Aug 11 '15 at 22:45
  • @a3nm, if you do `read` alone, it will read from stdin. So if it's the terminal, it will read what you type until you press enter. – Stéphane Chazelas Aug 12 '15 at 07:10
  • sure, I understand what read does. What I don't understand is why reading from the terminal with read in a backgrounded process is blocking with bash but not with zsh. – a3nm Aug 14 '15 at 21:19
  • @a3nm, I'm not sure what you mean. What do you mean by _you can just do `read` and it works_? – Stéphane Chazelas Aug 15 '15 at 20:51
  • I'm saying that with zsh you can just do `read | program > output` and it works in the same way as what you suggested. (And I don't get why.) – a3nm Aug 23 '15 at 17:56
1

There are problems with the solutions already mentioned:

  • sleep infinity is not supported by some libc/musl versions.
  • tail -f /dev/null wakes up each time some process drops something into /dev/null.
  • the pause() binary requires a C compiler (e.g. gcc) to be installed.
  • piping from a halted shell will keep going even after the program is terminated.

A good alternative that avoids these limitations is waiting on a halted process:

mkfifo pipe; (while :; do :; done & kill -STOP $!) > pipe & command < pipe > output & wait $!

sicvolo
0

Stéphane Chazelas' read solution works on Mac OS X as well if a reading fd gets opened on /dev/fd/1.

# using bash on Mac OS X
# -bash: /dev/fd/1: Permission denied
read x </dev/fd/1 | cat >/dev/null
echo ${PIPESTATUS[*]}   #  1 0

exec 3<&- 3</dev/fd/1
read x 0<&3 | cat >/dev/null
echo ${PIPESTATUS[*]}   #  0 0

To be able to kill tail -f /dev/null in a script (with SIGINT, for example) it is necessary to background the tail command and wait.

#!/bin/bash
# ctrl-c will kill tail and exit script
trap 'trap - INT; kill "$!"; exit' INT
exec tail -f /dev/null & wait $!
  • This workaround for macOS (using fd 3) doesn't actually work -- it reads from the shell's original stdin, so it will exit if you type something (which is what this question is trying to avoid). – tom May 22 '23 at 16:10
-4

Redirect /dev/zero as standard input!

program < /dev/zero > output &
WaelJ
    This would give his program an infinite number of zero-bytes... which, sadly, would make it busy-loop. – Jander Jul 12 '12 at 23:51
  • This is not true jander, /dev/zero will never close, holding the pipe chain open. However, poster says he doesn't take in stdin, so no zeros will ever be transferred to program. This is not a busy loop at all, it is a pure wait. – sillyMunky Jul 22 '12 at 17:38
  • sorry, OP does use stdin, so this will wipe out his input and will be drawing from /dev/zero. I should read twice next time! If OP wasn't using stdin, this would be the most elegant solution I've seen, and would not be a busy wait. – sillyMunky Jul 22 '12 at 17:46