49

I know cat can do this, but its main purpose is to concatenate rather than just to display the content.

I also know about less and more, but I'm looking for something simple (not a pager) that just outputs the content of a file to the terminal and it's made specifically for this, if there is such thing.

Matthias Braun
confused00

8 Answers

65

The most obvious one is cat. But also have a look at head and tail.

There are other shell utilities to print a file line by line: sed, awk, grep. But those are meant to manipulate the file content or to search inside the file.

I made a few tests to estimate which is the most effective one. I ran them all through strace to see which one made the fewest system calls. My file has 1275 lines.

  • awk: 1355 system calls
  • cat: 51 system calls
  • grep: 1337 system calls
  • head: 93 system calls
  • tail: 130 system calls
  • sed: 1378 system calls

As you can see, even though cat was designed to concatenate files, it is the fastest and most effective one. sed, awk and grep print the file line by line, which is why they make more than 1275 system calls.
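For anyone who wants to reproduce a test like this, here is a rough sketch (it assumes strace is installed; the file name testfile.txt is made up):

```shell
# Build a 1275-line test file, like the one used above
seq 1275 > testfile.txt

# strace -c prints a per-syscall summary on stderr;
# redirect each tool's own output so only the summary shows
if command -v strace >/dev/null 2>&1; then
    for cmd in cat head tail; do
        printf '== %s ==\n' "$cmd"
        strace -c "$cmd" testfile.txt > /dev/null
    done
fi

rm -f testfile.txt
```

The exact counts will vary with the libc, the tool versions, and the buffer sizes they pick.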

Matthias Braun
chaos
  • Good idea to count the syscalls! – Jan Sep 03 '14 at 12:32
  • using the command `dd iflag=nonblock status=none if=/path/to/file` is right on par with the `cat` command in terms of speed, but it uses 400% more syscalls. `dd` and `cat` are part of `coreutils` on archlinux. In my tests, `cat` does about 49 syscalls, whereas `dd` does 500+. If you examine the syscalls, you will see that cat does only 3 read calls, while `dd` does 250+, which means cat is doing more buffering than `dd`. I'm sure `dd` can be further tweaked to squeeze out more performance. I just wanted to show that syscall count is not the best metric for something like this if speed is a factor. – smac89 Apr 17 '21 at 22:43
  • Btw for anyone wondering, the stats I got were measured with `strace --summary-only ` – smac89 Apr 17 '21 at 22:44
  • `zsh`'s `sysread -b 1000000 -o1` builtin only does 3 system calls (one mmap() to allocate 1M+ of memory, one read, one write). Does that make it a better one? – Stéphane Chazelas Nov 25 '22 at 19:50
  • Of those, `awk`, `grep` and `sed` are text utilties so can only be used on text files anyway. – Stéphane Chazelas Nov 25 '22 at 19:51
26

I know cat can do this, but its main purpose is to concatenate rather than just displaying the content.

The purpose of cat is exactly that: to read a file and write it to standard output.

Jan
  • But ```cat --help``` says "Concatenate FILE(s), or standard input, to standard output". I don't wanna concatenate anything – confused00 Sep 03 '14 at 11:05
  • Believe it or not, cat is exactly what you're looking for anyway. – Jan Sep 03 '14 at 11:06
  • It isn't though. Surely the UNIX philosophy "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features." can help with a tool made specifically for outputting text and nothing else? – confused00 Sep 03 '14 at 11:11
  • No, @confused00, Jan's right. The thing is - the terminal *is* stdout - you see? do `readlink /dev/fd/1` for example - you should get your tty's name in there if running at a standard prompt. So concatenating input to output is what you're asking to do. – mikeserv Sep 03 '14 at 12:00
  • @mikeserv Yeah, I see your point, I guess I was too fixed on the 'concatenate' meaning. – confused00 Sep 03 '14 at 13:44
  • The logic is, printing the contents of *one* file is just a special case of printing the contents of *one or more* files in sequence. – zwol Sep 04 '14 at 02:03
  • @Jan: I disagree that "the purpose of cat is exactly that": it's broader (to concatenate, as the name implies), and outputting a single file to stdout is a nice side effect of that. see my comment to confused00. – Olivier Dulac Sep 04 '14 at 08:43
  • Correct, the answer is simplified, but more details would IMHO have confused the user. – Jan Sep 04 '14 at 20:39
13

First, cat writes to the standard output, which is not necessarily a terminal, even if cat was typed as part of a command to an interactive shell. If you really need something to write to the terminal even when the standard output is redirected, that is not so easy (you need to specify which terminal, and there might not even be one if the command is executed from a script), though one could (ab)use the standard error output if the command is merely a part of a pipeline. But since you indicated that cat actually does the job, I suppose you were not asking about such a situation.

If your purpose were to send what is written to the standard output into a pipeline, then using cat would be eligible for the Useless Use of Cat Award, since cat file | pipeline (where pipeline stands for any pipeline) can be done more efficiently as <file pipeline. But again, from your wording I deduce that this was not your intention.
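To illustrate the difference (file.txt is a made-up name):

```shell
printf 'one\ntwo\nthree\n' > file.txt

# Useless use of cat: an extra process exists just to feed the pipe
cat file.txt | wc -l

# Equivalent, without the extra process: the shell opens the file itself
wc -l < file.txt

rm -f file.txt
```

Both commands print 3; the second also has wc read the file directly, so no data is copied through a pipe at all.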

So it is not so clear what you are worrying about. If you find cat too long to type, you can define a one- or two-character alias (there are still a few such names that remain unused in standard Unix). If however you are worrying that cat is spending useless cycles, you shouldn't.

If there were a program null that takes no arguments and just copies standard input to standard output (the neutral object for pipelines), you could do what you want with <file null. There is no such program, though it would be easy to write (a C program with just a one-line main function can do the job), but calling cat without arguments (or cat - if you like to be explicit) does just that.
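You can check that cat with no arguments already behaves like this hypothetical null (a small sketch):

```shell
# With no file operands, cat simply copies stdin to stdout,
# so these two commands print the same thing
printf 'hello\n' > f.txt
cat f.txt     # cat opens the file itself
cat < f.txt   # the shell opens it; cat acts as "null"
rm -f f.txt
```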

If there were a nocat program that takes exactly one filename argument, tries to open the file, complains if it cannot, and otherwise proceeds to copy from the file to the standard output, then that would be just what you are asking for. It is only slightly harder to write than null, the main work being opening the file, testing, and possibly complaining (if you are meticulous, you may also want to include a test that there is exactly one argument, and complain otherwise). But again cat, now provided with a single argument, does just that, so there is no need for any nocat program.

Once you have succeeded in writing the nocat program, why stop at a single argument? Wrapping the code into a loop for(;*argp!=NULL;++argp) is really no effort at all, adds at most a couple of machine instructions to the binary, and avoids having to complain about a wrong number of arguments (which spares many more instructions). Voilà, a primitive version of cat, concatenating files. (To be honest, you need to tweak it a little bit so that without arguments it behaves as null.)

Of course in the real cat program, they added a few bells and whistles, because they always do. But the essence is that the "concatenation" aspect of cat costs really no effort at all, neither for the programmer nor for the machine executing it. The fact that cat subsumes null and nocat explains the nonexistence of such programs. Avoid using cat with a single argument if the result goes into a pipeline, but if it is used just for displaying file contents on the terminal, even the page I linked to admits that this is a useful use of cat, so don't hesitate.


You may test that cat is really implemented by a simple loop around a hypothetical nocat functionality, by calling cat with several file names among which one invalid name, not in the first position: rather than complaining right away that this file does not exist, cat first dumps the preceding valid files, and then complains about the invalid file (at least that is how my cat behaves).
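A quick way to observe this ordering (the file names are made up; the wording of the error message varies between implementations):

```shell
printf 'first\n'  > a.txt
printf 'second\n' > b.txt

# a.txt is printed, then cat complains on stderr about the
# missing file, then b.txt is printed
cat a.txt no-such-file b.txt

rm -f a.txt b.txt
```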

11

Under zsh try

<file

I believe it is the shortest way to print a file. It uses a 'hidden' cat (or more if stdout is a terminal), but the command used for printing is controlled by the READNULLCMD variable, which you can safely override with a command name or even a function. For example, to print files with line numbering:

numcat() { nl -s'> ' -w2 - }
READNULLCMD=numcat
<file
jimmij
7

I know this is an old question. Technically, since printing the contents of a file to stdout is a form of concatenation, cat is semantically appropriate. Don't forget that printf is semantically intended to format and print data. Bash also provides syntax to redirect input and output from files. A combination of these might produce this:

printf '%s' "$(<file.txt)"
James M. Lay
    Apart from being particularly roundabout, the command displayed is not equivalent to `cat file.txt`, since it will remove any trailing newlines (the `$(...)` does this). – Marc van Leeuwen Sep 04 '14 at 08:30
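The stripping is easy to see; the sketch below uses the portable $(cat file) form, which removes trailing newlines the same way $(<file.txt) does in bash:

```shell
printf 'text\n\n\n' > file.txt          # "text" plus 3 newlines: 7 bytes
wc -c < file.txt                        # 7
printf '%s' "$(cat file.txt)" | wc -c   # 4: every trailing newline stripped
rm -f file.txt
```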
5

POSIX defines cat as:

NAME

cat - concatenate and print files

SYNOPSIS

cat [-u] [file...]

DESCRIPTION

The cat utility shall read files in sequence and shall write their contents to the standard output in the same sequence.

So I think concatenate here means read files in sequence.

cuonglm
5

Just as a demonstration, you can do

cp foo /dev/stdout
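For example (this assumes a system that provides /dev/stdout, as Linux does):

```shell
printf 'demo\n' > foo
cp foo /dev/stdout   # "copies" foo onto stdout, i.e. prints it
rm -f foo
```

Note that cp here reopens whatever stdout points at and writes into it, which behaves differently from cat when stdout is a regular file opened in append mode.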
wisbucky
  • Be careful using this when building a container environment with something like docker or podman. The file descriptors in the containers are usually owned by root so attempting to do any writes or reads with an unprivileged user directly from any of those files will result in permission denied. See this issue on [github](https://github.com/moby/moby/issues/31243) – smac89 Apr 17 '21 at 21:40
4

Using bash builtins and avoiding subprocess creation:

{ while IFS='' read -rd '' _bcat_; do printf '%s\0' "${_bcat_}"; done; printf '%s' "${_bcat_}"; unset _bcat_; } <'/path/to/file'

IFS is set only for the read command, so you do not need to worry about your global IFS changing.

The loop is required to handle null characters (thanks to Stéphane Chazelas).

This approach is not well suited to big files because the file content is read into a variable (so into memory) first. That said, I tried printing a 39M text file this way and bash memory usage did not exceed 5M, so I am not sure about this point.

It is also damn slow and CPU-inefficient: for the same 39M file it took ~3 minutes at 100% usage of a single core.

For big files or binaries it is better to use cat '/path/to/file' or even dd if='/path/to/file' bs=1M if possible.
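Wrapped in a function for convenience (a sketch; bash-specific because of read -d '', and the name bcat is made up):

```shell
# bcat: print a file using only bash builtins.
# read -d '' consumes input up to each NUL byte (or EOF);
# the final printf emits whatever was left after the last NUL.
bcat() {
    { while IFS='' read -rd '' _bcat_; do
          printf '%s\0' "${_bcat_}"
      done
      printf '%s' "${_bcat_}"
      unset _bcat_
    } < "$1"
}

printf 'hello builtin cat\n' > f.txt
bcat f.txt
rm -f f.txt
```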

Stéphane Chazelas
Mikhail