
I'm trying to debug a server that writes a lot of logs to stdout/stderr. I need to redirect only the last N lines to a file, something like the rotating-buffer feature of tcpdump's -C and -W flags. It would be nice if I could view the log while the server is still running and producing output (I could cp it to another file to view it). Is there a utility that does this? From my limited understanding of the logrotate tool, it has to be run repeatedly, so I don't think it fits my need. What I would like is:

serverd -d | $TOOL -n 100 srv.log

...where at any time srv.log contains the last 100 lines output by serverd.

Daniel
Manish
    Related question: [Keep log file size fixed without logrotate](http://unix.stackexchange.com/q/439) – Caleb Aug 05 '11 at 14:32
  • Why exactly do you want this? – user606723 Aug 05 '11 at 19:42
  • @user606723: As I had mentioned, I am trying to debug a server and it outputs a lot of messages. Redirecting to a file makes it very large. At any time (typically when an issue occurs), I want to see last X number of lines only. Sometimes the server is on embedded device that doesn't have file system space. – Manish Aug 06 '11 at 14:07

3 Answers


You can fetch only the last N lines of any file or input stream using tail.

command | tail -n 100 > file

However, it sounds like you want a rotating log file that always contains the last 100 lines. That is not easily done. You can periodically truncate the log file by deleting lines, or use a system like logrotate to cull old data, but there is no easy way to maintain a 100-line FIFO-style log file.
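If you can tolerate rewriting the file after every line, a crude approximation is possible in plain shell. This is a sketch, not a recommendation; the helper name `keep_last` is made up, and the constant rewriting is slow for chatty servers:

```shell
# Keep only the last N lines of stdin in a file by trimming the file
# whenever it grows past N lines. Slow, but uses only standard tools.
keep_last() {
  n=$1; log=$2
  : > "$log"
  while IFS= read -r line; do
    printf '%s\n' "$line" >> "$log"
    # Trim the file back down once it exceeds N lines
    if [ "$(wc -l < "$log")" -gt "$n" ]; then
      tail -n "$n" "$log" > "$log.tmp" && mv "$log.tmp" "$log"
    fi
  done
}

# usage: serverd -d 2>&1 | keep_last 100 srv.log
```

Because the file is replaced atomically with `mv`, you can `cp` or `cat` it at any moment and get a consistent snapshot of the last N lines.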

Caleb
  • don't forget about `command | trail -fn 100` (perhaps not a solution, but it could be useful) -f is for follow. This command will print out 100 lines and then wait and block on the file writes and output anything new that gets written. – user606723 Aug 05 '11 at 19:38
  • @user606723, I believe that you meant to write `tail` rather than `trail`? – user Aug 05 '11 at 20:11
  • oh, yeah, oops. – user606723 Aug 05 '11 at 20:39
  • @user606723: As far as I know, on a command like you show the `-f` flag is meaningless. Tail will run until the pipeline is closed then exit with or without the flag. In the case of reading a file I think that would accomplish almost the opposite of what the OP is after ... it would read a minimum of 100 lines and then wait for more. He wants to always truncate at 100 lines. The point here is that there is NOT a good answer to this question, I just threw a reminder in about tail so he could use it to skim off of other log files if he really did just want the last 100 lines of something. – Caleb Aug 05 '11 at 20:44
  • as I said, "perhaps not a solution, but could be useful." When I said "solution", I meant "answer". It's hard to say what his actual goal is here... I was just trying to make him aware of something else that could be useful. – user606723 Aug 05 '11 at 20:48

I'm certain the original poster has found a solution by now. Here's another one for others who may read this thread...

Curtail limits the size of a program's output; for example, the following preserves the last 200MB of output:

$ run_program | curtail -s 200M myprogram.log

References

NOTE: I'm the maintainer of the software linked in this answer....

slm
Dave Wolaver

Apache has a program called rotatelogs.

Whatever size you would like the log to grow to, choose half that size and set the number of log files to 2, e.g.

<yourlogGenerator> | rotatelogs -n 2 logfile 5M

This will create two 5M log files and flip between them.

Here's the full description of rotatelogs.

You could also add `-L linkname`; then you could `tail -f linkname`.
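Putting those pieces together for the OP's case (serverd is the hypothetical server from the question; the `-n` and `-L` flags are as documented for Apache 2.4's rotatelogs):

```shell
# Rotate between two 5MB files; -L keeps a stable link to the current file
serverd -d 2>&1 | rotatelogs -L srv.log.current -n 2 srv.log 5M &

# In another terminal, follow the live log through the link:
tail -f srv.log.current
```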

X Tian