
There is /location/of/thefile, a continuously changing logfile. On average it refreshes 4 times per minute; the maximum refresh rate could be 30-40 per minute. Each refresh adds 2-5 lines on average, but it could be hundreds in extreme cases. Every line begins with a [YYYY-MM-DD HH:MM:SS] timestamp followed by plain text (100-200, at most a few hundred characters).

My task is to construct a simple command which continuously watches this logfile and sends to stdout every line that contains the string foo OR bar. Before and after those (sub)strings there could be any characters (\n only after the (sub)string, of course), even \0. The words could appear in any capitalization.

Well, my ideas for a solution always involve timing calls (polling with sleep), but I shouldn't use those. Please construct a simple command for me. Thanks a lot!

  • This is often called “tailing”, from [`tail -f`](http://unix.stackexchange.com/questions/10834/reading-from-a-continuously-changing-logfile/10837#10837). See [tag:tail] for other related questions, where you'll find fancier programs that can filter and color log lines. – Gilles 'SO- stop being evil' Apr 07 '11 at 21:17

2 Answers


I might be misunderstanding the question, but is there a reason you can't use this?

tail -f /location/of/thefile | grep -i -E "foo|bar"
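As a quick sanity check on static input (the sample log lines below are made up), the same filter can be exercised without `-f`:

```shell
# Feed a few fabricated log lines through the same case-insensitive
# filter; only lines containing "foo" or "bar" in any capitalization
# should pass through.
printf '%s\n' \
  '[2011-04-07 17:20:01] FOO happened' \
  '[2011-04-07 17:20:02] nothing to see here' \
  '[2011-04-07 17:20:03] a Bar event' \
  | grep -i -E 'foo|bar'
```

This prints the first and third lines only; swapping the `printf` for `tail -f /location/of/thefile` gives the continuous version.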

Sean C.
  • Does piping the output of `tail -f` to `grep` really work like that? If so, I'm going to have to start using that myself! For a case like this I would have suggested a `watch` command, but if this does indeed work it's so much better! – Kromey Apr 07 '11 at 17:20
  • @Kromey: how would you expect it to work? – mattdm Apr 07 '11 at 17:57
  • `tail -f` just continually streams output to stdout, right? I'd always been under the belief that all Unix redirection operators wait until all the output/input is ready and _then_ move it along, i.e. buffer it all until the sending program/file is done. Thus I wouldn't expect the `|` in Sean's command here to send anything along to `grep` until `tail` is done spitting out lines, which of course with the `-f` flag it won't ever do until it is interrupted. (I'm not at a *nix box to try this out, though, otherwise I would have just tested it instead of asking.) – Kromey Apr 07 '11 at 18:03
  • 2
    It works, I use it lots; most of the time to track mail for whiney users. `tail -f /var/log/mail.log | grep -i "[email protected]"` – Sean C. Apr 07 '11 at 18:08
  • 6
    @Kromey, depends on the command after the pipe. If it is `sort` or `wc` it will wait the `end-of-file` to start sorting, if it is `grep` or `sed` or another line processing command, it will process input every `end-of-line`, which is the default character for flushing the i/o stream buffer. – forcefsck Apr 07 '11 at 19:25
  • Thanks, guys! Another useful tool to add to my belt! :-) – Kromey Apr 07 '11 at 19:35
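One caveat worth noting, since the question says lines may even contain \0 bytes: GNU grep treats input containing a NUL byte as binary and prints `Binary file (standard input) matches` instead of the matching line. The `-a` flag (an assumption here: GNU grep is available) forces it to treat the stream as text:

```shell
# A fabricated log line with an embedded NUL byte; without -a, GNU grep
# would report "Binary file (standard input) matches" rather than
# printing the line itself.
printf '[2011-04-07 17:20:04] foo\000trailing junk\n' \
  | grep -a -i -E 'foo|bar'
```

So for this logfile, `tail -f /location/of/thefile | grep -a -i -E "foo|bar"` may be the safer spelling.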

Use a named pipe, like this:

Let us assume we want to read the file logfile.txt continuously and execute the code that we store in readfile.sh.

# readfile.sh

while IFS= read -r line
do
  echo "$line"
done < mypipe

Then, in a shell:

$ mkfifo mypipe
$ # this will block (will not exit)
$ ./readfile.sh

In another shell, in the same directory (where the file mypipe is located):

$ # This will go to background
$ tail -f logfile.txt >> mypipe &

Done. Whatever arrives in logfile.txt gets printed by readfile.sh, which never exits.
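A minimal, self-contained sketch of the FIFO handshake this relies on (the path /tmp/demo.pipe and the sample text are illustrative, not from the answer): each end's open blocks until the other end is connected, which is why readfile.sh appears to hang until the writer shows up.

```shell
# Create a throwaway FIFO, write one line to it from a background job,
# and read that line back. The background writer blocks on its open
# until the reader opens the FIFO, then the data flows through.
mkfifo /tmp/demo.pipe
printf 'foo line\n' > /tmp/demo.pipe &
IFS= read -r line < /tmp/demo.pipe
echo "$line"
rm /tmp/demo.pipe
```

This prints `foo line`; the answer's setup is the same handshake, just with `tail -f` as the long-lived writer.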

xorox