Question
I'm using difflib's SequenceMatcher (the ratio() method) to measure similarity between text files. While difflib is reasonably fast for a small set of files, e.g. comparing 10 files of about 70 kB each against one another (46 comparisons) takes about 80 seconds.
The problem is that I have a collection of 3000 txt files (75 kB on average), and a rough estimate of how long SequenceMatcher would need to complete the comparison job is 80 days!
I tried the real_quick_ratio() and quick_ratio() methods, but they don't fit our needs.
Is there any way to speed up the comparison process? If not, is there another, faster way to do such a task, even if it is not in Python?
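For context, here is a minimal sketch of the kind of all-pairs comparison described above; the corpus directory name is illustrative, not from the question.

```python
import difflib
import itertools
from pathlib import Path

# Hypothetical directory holding the text files to compare
files = sorted(Path("corpus").glob("*.txt"))
texts = {p: p.read_text(encoding="utf-8") for p in files}

# Compare every unique pair of files character-by-character
for a, b in itertools.combinations(files, 2):
    score = difflib.SequenceMatcher(None, texts[a], texts[b]).ratio()
    print(a.name, b.name, round(score, 3))
```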
Answer 1:
The issue you're running into is very common, since difflib is not optimized for speed. Here are some tricks I've found over the years while developing a tool that compares HTML documents.
Files fit in memory
Create two lists containing the lines from each file, then call difflib.SequenceMatcher with those lists as parameters. SequenceMatcher knows how to handle lists, and the process will be much faster because it works line-by-line instead of character-by-character. This may reduce precision slightly.
Take a look at fuzzy_string_cmp.py and diff.py to see how I'm doing exactly this.
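As a rough sketch of that idea (not the exact code from those files), comparing lists of lines looks like this:

```python
import difflib

def file_similarity(path_a, path_b):
    # Read each file as a list of lines; SequenceMatcher then matches
    # whole lines instead of individual characters, which is much faster.
    with open(path_a, encoding="utf-8") as fa, open(path_b, encoding="utf-8") as fb:
        lines_a = fa.readlines()
        lines_b = fb.readlines()
    return difflib.SequenceMatcher(None, lines_a, lines_b).ratio()
```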
Alternative
There is a great library called diff_match_patch, available on PyPI. It performs fast diffs between two strings and returns the changes (line added, line equal, line removed).
By leveraging diff_match_patch you should be able to write your own dmp_quick_ratio function. In diff.py you can see how I'm using the library; it can serve as inspiration for creating dmp_quick_ratio.
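One possible shape for such a function, assuming the diff-match-patch package from PyPI and mimicking SequenceMatcher's 2*M/T ratio (dmp_quick_ratio itself is the answerer's name, this body is only a sketch):

```python
from diff_match_patch import diff_match_patch

def dmp_quick_ratio(text1, text2):
    """Approximate SequenceMatcher.ratio() via diff_match_patch.

    Returns 2*M/T, where M is the number of characters in equal diff
    segments and T is the combined length of both texts.
    """
    dmp = diff_match_patch()
    dmp.Diff_Timeout = 0.0  # no time limit; set >0 to trade accuracy for speed
    diffs = dmp.diff_main(text1, text2)
    matches = sum(len(data) for op, data in diffs if op == dmp.DIFF_EQUAL)
    total = len(text1) + len(text2)
    return 2.0 * matches / total if total else 1.0
```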
My tests showed that using diff_match_patch was 20 times faster than Python's difflib.
Answer 2:
You can get a small speedup by running the code under PyPy: http://pypy.org/
Source: https://stackoverflow.com/questions/25680947/pythons-difflib-sequencematcher-speed-up