I have around 1 TB of images stored on my hard disk. These are pictures of friends and family taken over time. Many of these pictures are duplicates, in the sense that they are the same file.
I would recommend using md5deep or sha1deep. On Linux, simply install the md5deep package (it is included in most Linux distributions).
Once you have it installed, run it in recursive mode over your whole disk and save a checksum for every file into a text file, using a command like this:
md5deep -r -l . > filelist.txt
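Each output line is the checksum followed by the file path, so the listing will look something like this (the hashes and paths here are purely illustrative):

3f2a9c1d8e7b6a5f4c3d2e1f0a9b8c7d  ./2019/beach/IMG_0042.jpg
3f2a9c1d8e7b6a5f4c3d2e1f0a9b8c7d  ./backup/old/IMG_0042.jpg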
If you like sha1 better than md5, use sha1deep instead (it is part of the same package).
Once you have the file, sort it using sort (or pipe the output into sort in the previous step):
sort < filelist.txt > filelist_sorted.txt
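The piped one-step variant mentioned above would look like this:

md5deep -r -l . | sort > filelist_sorted.txt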
Now simply look at the result in any text editor; you will quickly see all the duplicates along with their locations on disk.
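If your uniq is the GNU version, you can also filter the sorted list down to just the duplicated entries by comparing only the 32-character MD5 checksum at the start of each line (use -w 40 for sha1deep output):

uniq -w 32 -D filelist_sorted.txt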
If you are so inclined, you can write a simple script in Perl or Python to remove the duplicates based on this file list.
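As a starting point, here is a minimal Python sketch under a few assumptions: it reads filelist_sorted.txt from the directory where the list was generated (the -l flag makes the paths relative), expects md5deep's hash-two-spaces-path line format, keeps the first copy of each file, and deletes the rest:

#!/usr/bin/env python3
# Minimal sketch: delete duplicates listed in filelist_sorted.txt.
# Assumes each line is "<checksum>  <relative path>" (md5deep -l output)
# and that the script runs from the directory where the list was made.
import os

seen = set()
with open("filelist_sorted.txt") as listing:
    for line in listing:
        line = line.rstrip("\n")
        if not line:
            continue
        checksum, path = line.split("  ", 1)
        if checksum in seen:
            # Checksum already seen: this file is a duplicate.
            print("removing duplicate:", path)
            os.remove(path)
        else:
            seen.add(checksum)

Review the printed paths first (or comment out the os.remove call for a dry run) before trusting it with irreplaceable photos.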