What I am trying to build is a program which reads in a file and compares each sentence to an original sentence. A sentence that is a perfect match to the original would receive the highest similarity score, and less similar sentences lower ones.
fuzzyset is much faster than fuzzywuzzy (difflib) for both indexing and searching.
from fuzzyset import FuzzySet
corpus = """It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines
It was a murky and tempestuous night. I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines
I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines. It was a murky and tempestuous night.
It was a dark and stormy night. I was not alone. I was not sitting on a red chair. I had three cats."""
corpus = [line.lstrip() for line in corpus.split("\n")]
fs = FuzzySet(corpus)
query = "It was a dark and stormy night. I was all alone sitting on a red chair. I was not completely alone as I had three cats."
fs.get(query)
# [(0.873015873015873, 'It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines')]
Warning: Be careful not to mix unicode and bytes in your fuzzyset.
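For example, if some of your sentences come from a file opened in binary mode, decode them to str before indexing. A minimal sketch (the file name is hypothetical):

from fuzzyset import FuzzySet

# Hypothetical input file; decode bytes to str so the set only ever contains unicode.
with open("sentences.txt", "rb") as f:
    corpus = [raw.decode("utf-8").strip() for raw in f]

fs = FuzzySet(corpus)
fs.get("It was a dark and stormy night.")  # query with a str as well, not bytes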
The task is called Paraphrase Identification, which is an active area of research in Natural Language Processing. I have linked several state-of-the-art papers, many of which have open-source code available on GitHub.
Note that all the answers so far assume there is some string/surface similarity between the two sentences, while in reality two sentences with little string similarity can still be semantically similar.
If you're interested in that kind of similarity, you can use Skip-Thoughts. Install the software according to the GitHub guide and go to the paraphrase-detection section in the README:
import skipthoughts
model = skipthoughts.load_model()
vectors = skipthoughts.encode(model, X_sentences)
This converts your sentences (X_sentences) to vectors. Later you can find the similarity of two vectors by:
import scipy.spatial.distance
similarity = 1 - scipy.spatial.distance.cosine(vectors[0], vectors[1])
where we assume vectors[0] and vectors[1] are the vectors corresponding to X_sentences[0] and X_sentences[1], the two sentences whose similarity you want to score.
There are other models for converting a sentence into a vector, which you can find here.
Once you have converted your sentences into vectors, the similarity is just a matter of computing the cosine similarity between those vectors.
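For illustration, a cosine similarity computed directly with numpy looks like this (the two vectors below are made up; in practice they would come from whichever sentence encoder you chose):

import numpy as np

# Toy sentence vectors; real ones would come from a sentence encoder.
v1 = np.array([0.2, 0.1, 0.7])
v2 = np.array([0.3, 0.1, 0.6])

# Cosine similarity = dot product divided by the product of the vector norms.
similarity = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(similarity)  # close to 1.0 when the vectors point in similar directions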
Update in 2020: there is a newer model called BERT, released by Google and built on the deep learning framework TensorFlow. There is also an implementation that many people find easier to use, called Transformers. These models accept two phrases or sentences and can be trained to say whether the two mean the same thing or not. To train them, you need a number of sentence pairs labelled 1 or 0 (depending on whether they have the same meaning or not). You train the model on this already-labelled data, and you can then use the trained model to make predictions for a new pair of phrases/sentences. You can find how to train (they call it fine-tune) these models on their corresponding GitHub pages, or in many other places such as this.
There is also already-labelled training data available in English, called MRPC (the Microsoft Research Paraphrase Corpus). Note that multilingual and language-specific versions of BERT also exist, so this approach can be extended (i.e. trained) to other languages as well.
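Here is a minimal sketch of what scoring a sentence pair with the Transformers library can look like, assuming you have already fine-tuned a checkpoint on MRPC-style pairs (the checkpoint path below is a placeholder, not a published model):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder path: point this at your own checkpoint fine-tuned on labelled sentence pairs.
checkpoint = "path/to/your-finetuned-paraphrase-model"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

s1 = "It was a dark and stormy night. I was all alone sitting on a red chair."
s2 = "It was a murky and tempestuous night. I was all alone sitting on a crimson cathedra."

# Encode both sentences as a single pair; the separator token is added for you.
inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Probability that the pair is a paraphrase (label 1), according to the fine-tuned model.
print(torch.softmax(logits, dim=-1)[0, 1].item())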
There is a module in the standard library (called difflib) that can compare strings and return a score based on their similarity. The SequenceMatcher class should do what you are after.
EDIT: A small example from the Python prompt:
>>> from difflib import SequenceMatcher as SM
>>> s1 = ' It was a dark and stormy night. I was all alone sitting on a red chair. I was not completely alone as I had three cats.'
>>> s2 = ' It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines.'
>>> SM(None, s1, s2).ratio()
0.9112903225806451
HTH!
There is a package called fuzzywuzzy. Install via pip:
pip install fuzzywuzzy
Simple usage:
>>> from fuzzywuzzy import fuzz
>>> fuzz.ratio("this is a test", "this is a test!")
96
The package is built on top of difflib. Why not just use that, you ask? Apart from being a bit simpler, it has a number of different matching methods (like token order insensitivity and partial string matching) which make it more powerful in practice. The process.extract functions are especially useful: they find the best matching strings and ratios from a set. From their readme:
Partial Ratio
>>> fuzz.partial_ratio("this is a test", "this is a test!")
100
Token Sort Ratio
>>> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
90
>>> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
100
Token Set Ratio
>>> fuzz.token_sort_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
84
>>> fuzz.token_set_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
100
Process
>>> from fuzzywuzzy import process
>>> choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
>>> process.extract("new york jets", choices, limit=2)
[('New York Jets', 100), ('New York Giants', 78)]
>>> process.extractOne("cowboys", choices)
("Dallas Cowboys", 90)