How can I find how many lines a text file contains without opening the file in an editor or a viewer application? Is there a handy Unix console command to see the number?
5 Answers
Indeed there is. It is called wc, originally for word count, I believe, but it can do lines, words, characters, bytes (and with some implementations, the length in bytes of the longest line or the display width of the widest one). The -l option tells it to count lines (in effect, it counts the newline characters, so only properly delimited lines):
wc -l mytextfile
Or to only output the number of lines:
wc -l < mytextfile
(beware that some implementations insert blanks before that number).
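The note about properly delimited lines matters in practice: `wc -l` counts newline characters, so a file whose last line lacks a trailing newline is undercounted by one. A quick demonstration (the file path here is just illustrative):

```shell
# Create a file with three lines of text, but no newline after the last one
printf 'one\ntwo\nthree' > /tmp/demo.txt

wc -l < /tmp/demo.txt    # prints 2: only two newline characters exist

# Append the missing final newline and count again
printf '\n' >> /tmp/demo.txt
wc -l < /tmp/demo.txt    # prints 3
```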
- Yes `wc` is very useful, but it is worth noting that the *longest line length* option `-L` is quirky... see [wc -L reports a line-length of 8 for a tab-char](http://unix.stackexchange.com/questions/20551/wc-l-reports-a-line-length-of-8-for-a-tab-char-bug-or-feature) – Peter.O Jan 19 '12 at 05:16
- When I use this command, the system not only gives me the number of lines, but also the name of the file right next to the number of lines. I am using bash shell on Ubuntu 12.04 32 bit – Abhinav Oct 19 '13 at 08:04
- @Abhinav Yes, it does that. If you need just the number, pipe it through `awk`: `wc -l mytextfile | awk '{print $1}'` – Kevin Oct 19 '13 at 14:23
- Wow, this would actually be a legitimate use of cat: `cat mytextfile | wc -l`. – terdon Jul 09 '14 at 03:18
- @Kevin piping the output of `wc` through `awk` makes no sense. In the case of only wanting the number, this can be accomplished in pure `awk` with no difficulty (see my answer). – HalosGhost Aug 24 '14 at 19:29
- @terdon Still a UUOC :). `wc -l < mytextfile` will do the job. – pericynthion Jun 24 '15 at 17:17
Another option would be to use grep to find the number of times a pattern is matched:
grep --regexp="$" --count /path/to/myfile.txt
In this example:
- `$` is a regular expression that matches at the end of every line (where a newline was inserted when Enter was pressed)
- `--count` suppresses normal output of matches, and instead displays the number of lines that matched
- The `/path/to/myfile.txt` is pretty obvious, I hope :)
EDIT: As mentioned by @hesse in the comments, this can be shortened to
grep -c $ path/to/file
which would also make it standard and portable to non-GNU grep implementations.
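As a quick illustration (the file path is just an example), both anchors give the same count, since `$` matches at the end of every line and `^` matches at the start of every line:

```shell
printf 'alpha\nbeta\ngamma\n' > /tmp/sample.txt

grep -c '$' /tmp/sample.txt   # prints 3
grep -c '^' /tmp/sample.txt   # prints 3 as well (reportedly faster on GNU grep)
```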
- Indeed another option... a terribly over-engineered option, granted, but an option. – user Jan 19 '12 at 10:50
- @MichaelKjörling Dunno about 'terribly' over-engineered. If I was using this in a script or similar I would definitely want to use the lighter-weight wc. If I'm at a prompt and I'm curious about a file, I'd probably use grep, because I'm more familiar with it. – DavidDraughn Jan 20 '12 at 21:59
- In my tests (on a GNU system), I find that `grep -c '^'` is significantly faster than `grep -c '$'`. – Stéphane Chazelas Oct 03 '16 at 14:12
I would also add that it is quite easy to do this in pure awk if you, for some reason, wished to not use wc.
$ awk 'END { print NR }' /path/to/file
The above prints the number of records (NR) present in the file at /path/to/file.
Note: unlike wc, this will not print the name of the file. So, if you only wanted the number, this would be a good alternative to cat file | wc -l.
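One practical difference worth seeing side by side (the file path is illustrative): `awk` treats a final unterminated line as a record, while `wc -l` only counts newline characters:

```shell
# A file whose final line has no trailing newline
printf 'a\nb\nc' > /tmp/records.txt

awk 'END { print NR }' /tmp/records.txt   # prints 3: awk counts the final partial line
wc -l < /tmp/records.txt                  # prints 2: wc counts only newline characters
```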
- This is good because if there's an unterminated line at the end, i.e. there's no newline character after it, `wc` will undercount. And it's not as overengineered as the `grep` option. – ijoseph Jun 09 '18 at 00:32
As @Kevin suggested, you can use the wc command to count lines in a file. However, wc -l test.txt will include the file name in the result. You can use:
wc -l < test.txt
to just get the number of lines without file name in it. Give it a try.
As per my test, I can verify that awk is way faster than the other tools (grep, sed, perl, wc).
Here is the result of the test that I ran here
date +%x_%H:%M:%S; grep -c $ my_file.txt; date +%x_%H:%M:%S;
10/03/16_09:41:51 24579427 10/03/16_09:42:40 ~49 Seconds
date +%x_%H:%M:%S; wc -l my_file.txt; date +%x_%H:%M:%S;
10/03/16_09:35:05 24579427 my_file.txt 10/03/16_09:35:45 ~40 Seconds
date +%x_%H:%M:%S; sed -n '$=' my_file.txt; date +%x_%H:%M:%S;
10/03/16_09:33:50 24579427 10/03/16_09:34:29 ~39 Seconds
date +%x_%H:%M:%S; perl -ne 'END { $_=$.;if(!/^[0-9]+$/){$_=0;};print "$_" }' my_file.txt; date +%x_%H:%M:%S;
10/03/16_09:36:11 24579427 10/03/16_09:36:36 ~25 Seconds
date +%x_%H:%M:%S; awk 'END { print NR }' my_file.txt; date +%x_%H:%M:%S;
10/03/16_09:43:36 24579427 10/03/16_09:43:58 ~22 Seconds
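For a more controlled comparison, the shell's `time` keyword avoids the date arithmetic above and reports each command's runtime directly. A minimal sketch (the generated test file and its size are illustrative):

```shell
# Generate a test file with a known number of lines
seq 1000000 > /tmp/big.txt

# Time each line-counting approach on the same file
time wc -l < /tmp/big.txt
time awk 'END { print NR }' /tmp/big.txt
time grep -c '^' /tmp/big.txt
time sed -n '$=' /tmp/big.txt
```

Repeating each run a few times also lets the file settle into the page cache, so the comparison measures the tools rather than the disk.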
- You may want to specify your system and describe your modus operandi (size and shape of your file, is the file loaded in cache, on how many invocations did you take an average...). In all my tests, `wc` always comes out significantly faster than anything else as you'd expect from a specialised application. – Stéphane Chazelas Oct 03 '16 at 14:14