
So some background first: I am attempting to convert a non-encrypted shared folder into an encrypted one on my Synology NAS and am seeing this error:

[screenshot: Synology NAS error dialog]

So I would like to locate the offending files so that I can rename them. I have come up with the following grep command: grep -rle '[^\ ]\{143,\}' * but it outputs all files whose full path is longer than 143 characters, not just those with a single over-long name component:

#recycle/Music/TO SORT/music/H/Hooligans----Heroes of Hifi/Metalcore Promotions - Heroes of Hifi - 03 Sly Like a Megan Fox.mp3
...

What I would like is for grep to split on / and then perform its search on each path component. Any ideas for an efficient command to do this? (The directory easily contains hundreds of thousands of files.)

Jeff Schaller
Stunner

3 Answers


Although the GNU 'findutils-default' regular expression syntax doesn't provide a {n,m} interval quantifier, you can use a -regex test in GNU find if you select a different regextype, for example:

find . -regextype posix-extended -regex '.*/[^/]{143,}$'

or

find . -regextype egrep -regex '.*/[^/]{143,}$'

or

find . -regextype posix-basic -regex '.*/[^/]\{143,\}$'

etc. There may be other regextypes that support {n,m} intervals, either with or without escaping.

Compared to piping the results of find to a separate grep command, this will match across newlines (i.e. the find regex flavors differ from their namesakes in that . matches the newline character by default).
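To see that difference concretely, here is a small sketch (the temp directory, the 80-character halves, and the file names are all invented for illustration): find's -regex treats the whole path as one string, so an over-long name containing an embedded newline is still caught, while the line-based find|grep pipeline splits that name across two output lines and misses it.

```shell
# Demo: find -regex vs. a line-based find|grep pipeline.
# All names below are invented for illustration.
demo=$(mktemp -d)
half=$(printf 'x%.0s' $(seq 1 80))   # 80 'x' characters
nl=$'\n'

touch "$demo/$half$half"             # 160 chars: over the 143-character limit
touch "$demo/$half$nl$half"          # 161 chars, including an embedded newline

# Reports both offenders, newline and all:
find "$demo" -regextype posix-extended -regex '.*/[^/]{143,}$'

# Reports only the first; the newline-containing name is split across lines:
find "$demo" | grep -E '[^/]{143,}$'
```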

steeldriver

Try:

find /your/path | grep -E '[^/]{143,}$'
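If filenames might contain newlines, a NUL-delimited variant of the same pipeline is safer (this assumes GNU find and GNU grep; the temp directory and 150-character name below are stand-ins for illustration):

```shell
# NUL-delimited variant: -print0 and grep -z use NUL bytes as record
# separators, so a filename that itself contains a newline is matched
# as a single record. "$dir" stands in for /your/path.
dir=$(mktemp -d)
touch "$dir/$(printf 'x%.0s' $(seq 1 150))"   # one over-long demo name

find "$dir" -print0 | grep -zE '[^/]{143,}$' | tr '\0' '\n'
```

The trailing tr converts the NUL separators back to newlines for display; for further processing, pipe the grep output to xargs -0 instead.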
Jim L.

If you already have a locate database, it is very fast for this.

locate --regex '.*/[^/]{143,}$'
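Since the stated goal is to rename the offenders, here is one hedged sketch of a rename pass (assumes bash and GNU find; it is run here against a throwaway demo directory, so point root at the real shared folder instead). The truncation policy, cutting the final component to 140 characters to leave headroom under the 143 limit, is illustrative only: it ignores name collisions, multibyte characters, and file extensions.

```shell
# Sketch: truncate each over-long final path component to 140 characters.
# "root" is a throwaway demo directory here; use the real folder instead.
root=$(mktemp -d)
touch "$root/$(printf 'y%.0s' $(seq 1 150))"   # one over-long demo name

# -print0 / read -d '' keep newline-containing names intact;
# -depth renames files before their parent directories.
find "$root" -depth -regextype posix-extended -regex '.*/[^/]{143,}$' -print0 |
while IFS= read -r -d '' path; do
    name=${path##*/}
    mv -- "$path" "${path%/*}/${name:0:140}"
done
```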
Andrew Domaszek