I'm sure there are many ways to do this: how can I count the number of lines in a text file?
$ <cmd> file.txt
1020 lines
The standard way is with wc, which takes arguments to specify what it should count (bytes, chars, words, etc.); -l is for lines:
$ wc -l file.txt
1020 file.txt
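wc also accepts multiple file arguments, in which case it prints one count per file plus a final "total" line. A quick sketch (the /tmp file names are just for illustration):

```shell
# Create two small sample files (names are illustrative)
printf 'a\nb\nc\n' > /tmp/sample1.txt
printf 'x\ny\n' > /tmp/sample2.txt

# Prints a line count for each file, then a "total" line
wc -l /tmp/sample1.txt /tmp/sample2.txt
```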
Steven D forgot GNU sed:
sed -n '$=' file.txt
Also, if you want the count without outputting the filename and you're using wc:
wc -l < file.txt
Just for the heck of it:
cat -n file.txt | tail -n 1 | cut -f1
As Michael said, wc -l is the way to go. But, just in case you inexplicably have bash, perl, or awk but not wc, here are a few more solutions:
$ LINECT=0; while read -r LINE; do (( LINECT++ )); done < file.txt; echo $LINECT
$ perl -lne 'END { print $. }' file.txt
and the far less readable:
$ perl -lne '}{ print $.' file.txt
$ awk 'END {print NR}' file.txt
A word of warning when using wc -l: because wc -l works by counting \n characters, if the last line in your file doesn't end in a newline, that line won't be counted and the result will be off by one. (Hence the old convention of leaving a newline at the end of your file.)
Since I can never be sure whether any given file follows the convention of ending its last line with a newline, I recommend any of these alternative commands, which include the last line in the count whether or not it ends in a newline:
sed -n '$=' filename
perl -lne 'END { print $. }' filename
awk 'END {print NR}' filename
grep -c '' filename
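To see the difference in action, here's a minimal sketch: a three-line file whose last line has no trailing newline (the file name is just illustrative):

```shell
# Three lines of text, but no newline after the last one
printf 'one\ntwo\nthree' > /tmp/no_trailing_nl.txt

wc -l /tmp/no_trailing_nl.txt       # counts only the \n characters: 2
grep -c '' /tmp/no_trailing_nl.txt  # counts the final unterminated line too: 3
```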
You can always use the command grep as follows:
grep -c "^" file.txt
It will count all the actual lines of file.txt, whether or not its last line ends with an LF character.
In case you only have bash and absolutely no external tools available, you could also do the following:
count=0
while read -r
do
((count++))
done <file.txt
echo "$count"
Explanation: the loop reads standard input line by line (read; since we do nothing with the input anyway, no variable is provided to store it in), and increments the variable count each time. Due to the redirection (<file.txt after done), standard input for the loop comes from file.txt.
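As an aside, if your bash is version 4.0 or newer, the mapfile builtin can do the same without an explicit loop; a minimal sketch:

```shell
# Requires bash >= 4.0: read all lines into an array, then count the elements
printf 'a\nb\nc\n' > /tmp/sample.txt   # illustrative sample file
mapfile -t lines < /tmp/sample.txt
echo "${#lines[@]}"
```

Note that this loads the whole file into memory, so it's only sensible for smallish files.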
If you're looking to count smaller files a simple wc -l file.txt could work.
While looking for an answer to this question myself, working with large files several gigabytes in size, I found the following tool:
https://github.com/crioux/turbo-linecount
Also, depending on your system configuration, if you're using an older version of wc you might be better off piping larger chunks with dd, like so:
dd if={file_path} bs=128M | wc -l
grep -c $ is very simple and works great.
I even saved it as an alias since I use it a lot (lc stands for line count):
alias lc="grep -c $"
It can be used either this way:
lc myFile
Or that way:
cat myFile | lc
Note that this will not count the last line if it is empty. For my uses that is almost always OK though.