
I am backing up files, and I have a lot of files duplicated in multiple locations. I've used fdupes to find duplicates, but I'm actually looking for some sort of inverse of this tool.

I want to see whether dir A and its subdirectories contain any file that dir B does not contain. If possible, I'd like a list of such files, with the comparison based on file contents (file size and hash) rather than file names.

Does any such tool already exist? (Or am I even approaching this completely wrong)

user717572
  • Can you assume anything about the names and locations of identical files? For example, if `tree1/somewhere/foo` is identical to `tree2/elsewhere/bar` but there is no `tree1/elsewhere/bar` or `tree2/somewhere/foo`, should they be included in the report, or omitted? – Gilles 'SO- stop being evil' Jan 31 '15 at 23:24

2 Answers


You could try:

diff --brief -r dir1/ dir2/ > logoutputtoafile.log

Remove --brief if you want more detail.
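Note that `diff -r` compares files at matching paths, while the question asks for a content-based comparison regardless of where a file lives in the tree. A minimal sketch of that approach, using `sha256sum` to hash every file (the `content_diff` helper name and the `dirA`/`dirB` directory names are just placeholders for illustration):

```shell
# content_diff A B: list files under A whose SHA-256 hash does not
# match the hash of any file anywhere under B, regardless of name or path.
content_diff() {
  a=$1 b=$2
  hashfile="${TMPDIR:-/tmp}/b_hashes.$$"
  # Collect the set of content hashes present under B.
  find "$b" -type f -exec sha256sum {} + | awk '{print $1}' | sort -u > "$hashfile"
  # Print each file under A whose hash is absent from that set.
  find "$a" -type f -exec sha256sum {} + |
  while read -r hash path; do
    grep -qx "$hash" "$hashfile" || printf '%s\n' "$path"
  done
  rm -f "$hashfile"
}
```

For example, `content_diff dirA dirB` would list files unique to dirA even if the identical file exists under dirB with a different name. Hashing every file is slower than `diff -r`, so for large trees you may want to compare sizes first and only hash candidates.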

devnull

The comparison utility meld is very easy to use for this sort of thing, but you only see the differences; as far as I know, you can't save or log them in any way.

Ray Andrews