
I tried both commands and the command find | grep 'filename' is many many times slower than the simple find 'filename' command.

What would be a proper explanation for this behavior?

yoyo_fun
  • You are listing every file with `find` and then passing the data to `grep` to process. With `find` used on its own you are missing the step of passing every listed file to `grep` to parse the output. This will therefore be quicker. – Raman Sailopal Oct 03 '17 at 10:07
  • Slower in what sense? Does the commands take a different amount of time to complete? – Kusalananda Oct 03 '17 at 10:10
  • @Kusalananda Yes it takes much longer to complete. – yoyo_fun Oct 03 '17 at 10:13
  • I can't reproduce this locally. If anything, `time find "$HOME" -name '.profile'` reports a longer time than `time find "$HOME" | grep -F '.profile'` (17 s vs. 12 s). – Kusalananda Oct 03 '17 at 10:16
  • @Kusalananda Are you sure it is not a caching issue that is causing this behavior? Which command did you execute first? Also, for me `find "$HOME" | grep -F '.profile'` found many more results than `find "$HOME" -name '.profile'`. – yoyo_fun Oct 03 '17 at 10:28
  • @Kusalananda If you repeat the search more times the latter results will be faster. – yoyo_fun Oct 03 '17 at 10:29
  • @JenniferAnderson I ran both repeatedly. The 17 and 12 seconds are averages. And yes, the `grep` variation will match anywhere in the `find` result, whereas matching with `find -name` would only match exactly (in this case). – Kusalananda Oct 03 '17 at 10:30
  • Enclose your code samples within backticks, and add the exact command used; I haven't seen the `find 'filename'` syntax before. From some experimenting, it seems that it searches only the current directory, not subdirectories, while `find | grep` has to traverse all files in the current directory and its subdirectories recursively. – Sundeep Oct 03 '17 at 10:36
  • Yes, `find filename` _would be fast_. I kinda assumed that this was a typo and that the OP meant `find -name filename`. With `find filename`, only `filename` would be examined (and nothing else). – Kusalananda Oct 03 '17 at 10:51
  • @Kusalananda but what does the `-name` option do? – yoyo_fun Oct 03 '17 at 12:59
  • The `-name` option instructs `find` to return all files it finds which match the provided name. e.g., `find . -name TODO` would give you all files named `TODO` in the current directory or any of its subdirectories. – Dave Sherohman Oct 03 '17 at 13:18
  • @DaveSherohman But isn't this exactly what the `find` command does without the `-name` option? – yoyo_fun Oct 03 '17 at 13:21
  • @JenniferAnderson No, see my updated answer. – Kusalananda Oct 03 '17 at 13:50
  • @JenniferAnderson - Nope. `find filename` looks at the one specific directory entry `filename` (recursing into it if it's a directory) and returns every file it finds. `find . -name filename` looks at the current directory (recursing into subdirectories) and returns only files named `filename`. Compare `find /etc` vs. `find /etc -name passwd` to see the difference. (And note that, if you're only looking for one specific file at one specific path, using `find` at all is overkill. `ls` will do the job just as well, and likely with less overhead.) – Dave Sherohman Oct 04 '17 at 08:26

5 Answers

(I'm assuming GNU find here)

Using just

find filename

would be quick, because it would just return filename, or the names inside filename if it's a directory, or an error if that name did not exist in the current directory. It's a very quick operation, similar to ls filename (but recursive if filename is a directory).

In contrast,

find | grep filename

would allow find to generate a list of all names from the current directory and below, which grep would then filter. This would obviously be a much slower operation.
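The contrast is easy to reproduce in a scratch directory (the file and directory names below are made up for illustration):

```shell
# Build a small throwaway tree (names are arbitrary examples)
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
touch "$tmp/notes.txt" "$tmp/sub/notes.txt"
cd "$tmp"

# 'find' with a path argument examines only that one path:
find notes.txt           # prints just: notes.txt

# 'find | grep' lists the whole tree, then filters the text:
find . | grep notes.txt  # prints ./notes.txt and ./sub/notes.txt
```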

I'm assuming that what was actually intended was

find . -type f -name 'filename'

This would look for filename as the name of a regular file anywhere in the current directory or below.

This will be about as quick as find | grep filename, but the grep solution matches filename against the full path of each found name, much as -path '*filename*' would do with find.
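The full-path matching is worth seeing side by side. A sketch (directory names invented for the example) showing that the piped `grep`, like `-path '*filename*'`, also matches directory components:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/report"                # a DIRECTORY named "report"
touch "$tmp/report/summary" "$tmp/other"
cd "$tmp"

# -name tests only the last path component:
find . -name report                   # -> ./report (the directory itself)

# grep, like -path '*report*', matches anywhere in the full path:
find . | grep report                  # -> ./report AND ./report/summary
find . -path '*report*'               # same two results
```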


The confusion comes from a misunderstanding of how find works.

The utility takes a number of paths and returns all names beneath these paths.

You may then restrict the returned names using various tests that may act on the filename, the path, the timestamp, the file size, the file type, etc.

When you say

find a b c

you ask find to list every name available under the three paths a, b and c. If these happen to be names of regular files in the current directory, then these will be returned. If any of them happens to be the name of a directory, then it will be returned along with all further names inside that directory.

When I do

find . -type f -name 'filename'

This generates a list of all names in the current directory (.) and below. Then it restricts the names to those of regular files, i.e. not directories etc., with -type f. Then there is a further restriction to names that match filename using -name 'filename'. The string filename may be a filename globbing pattern, such as *.txt (just remember to quote it!).
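The quoting matters because the shell expands an unquoted pattern before `find` ever sees it. A quick sketch in a throwaway directory:

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch a.txt b.txt

# Quoted: the pattern reaches find intact and matches both files.
find . -name '*.txt'          # -> ./a.txt and ./b.txt

# Unquoted: the shell expands *.txt first, so find actually runs as
#   find . -name a.txt b.txt
# which is a syntax error ("paths must precede expression").
# find . -name *.txt
```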

Example:

The following seems to "find" the file called .profile in my home directory:

$ pwd
/home/kk
$ find .profile
.profile

But in fact, it just returns all names at the path .profile (there is only one name, and that is of this file).

Then I cd up one level and try again:

$ cd ..
$ pwd
/home
$ find .profile
find: .profile: No such file or directory

The find command can no longer find any path called .profile.

However, if I get it to look at the current directory, and then restrict the returned names to only .profile, it finds it from there as well:

$ pwd
/home
$ find . -name '.profile'
./kk/.profile
Kusalananda
  • `find filename` would return only `filename` if `filename` was not of type _directory_ (or was of type directory but itself had no entries) – Stéphane Chazelas Oct 03 '17 at 11:52

Non-Technical explanation: Looking for Jack in a crowd is faster than looking for everyone in a crowd and eliminating all from consideration except Jack.

S Renalds
  • The problem is that the OP is expecting Jack to be the only person in the crowd. If it is, they're lucky. `find jack` will list `jack` if it's a file called `jack`, or all names in the directory if it's a directory. It's a misunderstanding of how `find` works. – Kusalananda Oct 04 '17 at 13:56

I have not understood the problem yet but can provide some more insights.

Like Kusalananda, I find the `find | grep` call clearly faster on my system, which does not make much sense. At first I assumed some kind of buffering problem: that writing to the console slows down the time to the next syscall for reading the next file name. Writing to a pipe is very fast: about 40 MiB/s even for 32-byte writes (on my rather slow system; 300 MiB/s for a block size of 1 MiB). Thus I assumed that `find` can read from the file system faster when writing to a pipe (or file), so that the two operations, reading file paths and writing to the console, could run in parallel (which `find`, as a single-threaded process, cannot do on its own).

It's find's fault

Comparing the two calls

:> time find "$HOME"/ -name '*.txt' >/dev/null

real    0m0.965s
user    0m0.532s
sys     0m0.423s

and

:> time find "$HOME"/ >/dev/null

real    0m0.653s
user    0m0.242s
sys     0m0.405s

shows that `find` does something incredibly stupid (whatever that may be). It just turns out to be quite incompetent at executing `-name '*.txt'`.

Might depend on the input / output ratio

You might think that `find -name` wins if there is very little to write. But it just gets more embarrassing for `find`: it loses even when there is nothing to write at all, while the `grep` variant has to push 200K file names (13 MiB of pipe data) through to `grep`:

time find /usr -name lwevhewoivhol

find can be as fast as grep, though

It turns out that `find`'s stupidity with `-name` does not extend to other tests. Use a regex instead and the problem is gone:

:> time find "$HOME"/ -regex '\.txt$' >/dev/null     

real    0m0.679s
user    0m0.264s
sys     0m0.410s

I guess this can be considered a bug. Anyone willing to file a bug report? My version is find (GNU findutils) 4.6.0.

Hauke Laging
  • How repeatable are your timings? If you did the `-name` test first, then it may have been slower due to the directory contents not being cached. (When testing `-name` and `-regex` I find they take roughly the same time, at least once the cache effect has been taken into consideration. Of course it may just be a different version of `find`...) – psmears Oct 03 '17 at 16:09
  • @psmears Of course, I have done these tests several times. The caching problem has been mentioned even in the comments to the question before the first answer. My `find` version is find (GNU findutils) 4.6.0 – Hauke Laging Oct 03 '17 at 18:24
  • Why is it surprising that adding `-name '*.txt'` slows down `find`? It has to do extra work, testing each filename. – Barmar Oct 04 '17 at 18:01
  • @Barmar On the one hand, this extra work can be done extremely fast. On the other hand, this extra work saves other work: `find` has to write less data, and writing to a pipe is a much slower operation. – Hauke Laging Oct 05 '17 at 08:10
  • Writing to a disk is very slow, writing to a pipe is not so bad, it just copies to a kernel buffer. Notice that in your first test, writing more to `/dev/null` somehow used *less* system time. – Barmar Oct 05 '17 at 15:39

Notice: I'll assume that you mean find . -name filename (otherwise, you're looking for different things; find filename actually looks at a path called filename, which might contain almost no files, and hence exits really quickly).


Suppose you have a directory holding five thousand files. On most filesystems, these files are actually stored in a tree structure, which allows any single file to be located quickly.

So when you ask find for one specific file by path, it asks the underlying filesystem for that file, and that file only, which reads very few pages from the mass storage. If the filesystem is worth its salt, this operation will run much faster than traversing the whole tree to retrieve all entries.

When you run a plain find, however, that is exactly what you do: you traverse the whole tree, reading. Every. Single. Entry. With large directories this can be a problem (it is exactly the reason why several pieces of software that need to store lots of files on disk create "directory trees" two or three components deep: that way, each leaf directory only has to hold a few files).
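The "directory tree" trick mentioned above can be sketched like this in bash (the layout is a made-up example, loosely in the spirit of git's object store, which shards by the first two characters of the object name):

```shell
# bash sketch: shard files into two-level subdirectories so that no
# single directory grows huge (the name below is an invented example)
store() {
    name=$1
    dir=${name:0:2}                  # first two characters pick the shard
    mkdir -p "objects/$dir"
    touch "objects/$dir/${name:2}"   # rest of the name is the leaf file
}

store d670460b4b4aece5915caf5c68d12f560a9fe3e4
# creates objects/d6/70460b4b4aece5915caf5c68d12f560a9fe3e4
```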

LSerni

Let's assume the file /john/paul/george/ringo/beatles exists and the file you are searching for is called 'stones'.

find / -name stones

find will compare 'beatles' to 'stones' and drop it when the 's' and 'b' don't match.

find / | grep stones

In this case find will pass '/john/paul/george/ringo/beatles' to grep, and grep will have to work its way through the entire path before determining whether it's a match.

grep is therefore doing far more work, which is why it takes longer.

Paranoid
  • Have you given that a try? – Hauke Laging Oct 03 '17 at 12:37
  • The cost of the string comparisons (extremely simple and cheap) is completely dwarfed by the IO (or just syscall, if cached) cost of the directory lookups. – Mat Oct 03 '17 at 12:44
  • `grep` isn't doing a string comparison, it's a regular-expression comparison, which means it has to work its way through the entire string until it either finds a match or reaches the end. The directory lookups are the same no matter what. – Paranoid Oct 03 '17 at 13:00
  • @Paranoid Hm, what version of _find_ are you talking about? It's apparently not anything like the _find_ I'm used to in debian. – pipe Oct 03 '17 at 19:11