Question
I am working on a document search problem: given a set of documents and a search query, I want to find the document closest to the query. The model I am using is based on scikit-learn's TfidfVectorizer. I created 4 different tf-idf vectors for all the documents, using 4 different tokenizers. Each tokenizer splits the string into n-grams, where n ranges from 1 to 4.
For example:
doc_1 = "Singularity is still a confusing phenomenon in physics"
doc_2 = "Quantum theory still wins over String theory"
So model_1 will use a 1-gram tokenizer, model_2 will use a 2-gram tokenizer, and so on.
Next, for a given search query, I calculate the cosine similarity between the query and all the documents using each of these 4 models.
For example, for the search query "Singularity in quantum physics", the query is broken down into n-grams and tf-idf values are computed with the corresponding n-gram model.
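Roughly, the setup looks like this (a minimal sketch; the exact vectorizer settings shown here are just for illustration):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
docs = ["Singularity is still a confusing phenomenon in physics",
        "Quantum theory still wins over String theory"]
query = "Singularity in quantum physics"
similarity = {}
for n in range(1, 5):
    # One model per n: word n-grams of exactly length n.
    model = TfidfVectorizer(analyzer='word', ngram_range=(n, n))
    doc_vectors = model.fit_transform(docs)
    query_vector = model.transform([query])
    # Cosine similarity between the query and every document under this model.
    similarity[n] = cosine_similarity(query_vector, doc_vectors)[0]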
Therefore, for each query-document pair I have 4 similarity values, one per n-gram model. For example:
1-gram similarity = 0.4370303325246957
2-gram similarity = 0.36617374546988996
3-gram similarity = 0.29519246156322099
4-gram similarity = 0.2902998188509896
All of these similarity scores are normalized to a scale of 0 to 1. Now I want to calculate an aggregated, normalized score such that for any query-document pair, the higher n-gram similarities carry much more weight. Basically, the higher an n-gram similarity is, the more impact it has on the overall score.
Can someone please suggest a solution?
Answer 1:
There are many ways to play around with the numbers:
>>> onegram_sim = 0.43
>>> twogram_sim = 0.36
>>> threegram_sim = 0.29
>>> fourgram_sim = 0.29
# Sum(x) / len(list)
>>> all_sim = sum([onegram_sim, twogram_sim, threegram_sim, fourgram_sim]) / 4
>>> all_sim
0.3425
# Sum(x*x) / len(list)
>>> all_sim = sum(map(lambda x: x**2, [onegram_sim, twogram_sim, threegram_sim, fourgram_sim])) / 4
>>> all_sim
0.120675
# Product(x)
>>> from functools import reduce  # reduce is not a builtin in Python 3
>>> from operator import mul
>>> onetofour_sim = [onegram_sim, twogram_sim, threegram_sim, fourgram_sim]
>>> reduce(mul, onetofour_sim, 1)
0.013018679999999998
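If you specifically want the larger similarities to dominate, one simple option (a sketch, not the only scheme) is a self-weighted average, where each score is weighted by its own value; raising the scores to a higher power before averaging sharpens the effect further:
# Sum(x*x) / Sum(x): weights proportional to the scores themselves
>>> sims = [onegram_sim, twogram_sim, threegram_sim, fourgram_sim]
>>> all_sim = sum(s * s for s in sims) / sum(sims)
>>> round(all_sim, 4)
0.3523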
Eventually, whatever gets you a better accuracy score on your ultimate task is the best solution.
Beyond your question:
To calculate document similarity, there's a long-running SemEval task called Semantic Textual Similarity: https://groups.google.com/forum/#!forum/sts-semeval
Common strategies include (not exhaustively):
Use an annotated corpus with similarity scores for pairs of sentences, extract some features, train a regressor and output a similarity score.
Use some sort of vector space semantics (strongly recommended reading: http://www.jair.org/media/2934/live-2934-4846-jair.pdf) and then compute some vector similarity scores (take a look at How to calculate cosine similarity given 2 sentence strings? - Python):
i. A subset of vector space semantics jargon will come in handy (sometimes known as word embeddings); sometimes people train a vector space with topic models/neural nets/deep learning (other related buzzwords), see http://u.cs.biu.ac.il/~yogo/cvsc2015.pdf
ii. You could also use more traditional bag-of-words vectors, compress the space with TF-IDF or any other "latent" dimensionality reduction, and then use some vector similarity function to get the similarity.
iii. Create a fancy vector similarity function (e.g. cosmul, see https://radimrehurek.com/gensim/models/word2vec.html) and then tweak the function and evaluate it on different spaces.
Use some lexical resources that contain an ontology of concepts (e.g. WordNet, Cyc, etc.) and then compare the similarity by traversing the conceptual graphs (see http://www.nltk.org/howto/wordnet.html); an example is https://github.com/alvations/pywsd/blob/master/pywsd/similarity.py. A minimal path-similarity sketch follows this list.
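To illustrate the lexical-resource route (a sketch that assumes NLTK's WordNet corpus is downloaded; the word pair and helper name are just for illustration):
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')
def max_path_similarity(word1, word2):
    # Best path similarity over all synset pairs of the two words (between 0 and 1).
    scores = [s1.path_similarity(s2) or 0
              for s1 in wn.synsets(word1)
              for s2 in wn.synsets(word2)]
    return max(scores) if scores else 0
print(max_path_similarity('physics', 'theory'))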
Given the above as a background, and without annotations, let's try to hack out some vector space examples:
First let's try plain ngrams with simple binary vectors:
import numpy as np
from nltk import ngrams
doc1 = "Singularity is still a confusing phenomenon in physics".split()
doc2 = "Quantum theory still wins over String theory".split()
_vec1 = list(ngrams(doc1, 3))
_vec2 = list(ngrams(doc2, 3))
# Create a full dictionary of all possible ngrams.
vec_dict = list(set(_vec1).union(_vec2))
print('Vector Dict:', vec_dict)
# Now vectorize the documents
vec1 = [1 if ng in _vec1 else 0 for ng in vec_dict]
vec2 = [1 if ng in _vec2 else 0 for ng in vec_dict]
print('Vectorized:', vec1, vec2)
print('Similarity:', np.dot(vec1, vec2))
[out]:
Vector Dict: [('still', 'a', 'confusing'), ('confusing', 'phenomenon', 'in'), ('theory', 'still', 'wins'), ('is', 'still', 'a'), ('over', 'String', 'theory'), ('a', 'confusing', 'phenomenon'), ('wins', 'over', 'String'), ('Singularity', 'is', 'still'), ('still', 'wins', 'over'), ('phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still')]
Vectorized: [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0] [0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
Similarity: 0
Now let's include everything from 1-grams up to n-grams (where n = len(sent)) and put it all in the vector dictionary, still with binary values:
import numpy as np
from nltk import ngrams
def everygrams(sequence):
"""
This function returns all possible ngrams for n
ranging from 1 to len(sequence).
>>> list(everygrams('a b c'.split()))
[('a',), ('b',), ('c',), ('a', 'b'), ('b', 'c'), ('a', 'b', 'c')]
"""
for n in range(1, len(sequence)+1):
for ng in ngrams(sequence, n):
yield ng
doc1 = "Singularity is still a confusing phenomenon in physics".split()
doc2 = "Quantum theory still wins over String theory".split()
_vec1 = list(everygrams(doc1))
_vec2 = list(everygrams(doc2))
# Create a full dictionary of all possible ngrams.
vec_dict = list(set(_vec1).union(_vec2))
print('Vector Dict:', vec_dict, '\n')
# Now vectorize the documents
vec1 = [1 if ng in _vec1 else 0 for ng in vec_dict]
vec2 = [1 if ng in _vec2 else 0 for ng in vec_dict]
print('Vectorized:', vec1, vec2, '\n')
print('Similarity:', np.dot(vec1, vec2), '\n')
[out]:
Vector Dict: [('still', 'a'), ('over', 'String'), ('theory', 'still', 'wins', 'over', 'String', 'theory'), ('String', 'theory'), ('physics',), ('in',), ('wins', 'over', 'String', 'theory'), ('is', 'still', 'a', 'confusing', 'phenomenon', 'in'), ('theory', 'still', 'wins'), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon'), ('a',), ('wins',), ('is', 'still', 'a'), ('Singularity', 'is'), ('phenomenon', 'in'), ('still', 'wins', 'over', 'String'), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still', 'wins', 'over'), ('a', 'confusing', 'phenomenon'), ('Singularity', 'is', 'still', 'a'), ('confusing', 'phenomenon'), ('confusing', 'phenomenon', 'in', 'physics'), ('Singularity', 'is', 'still'), ('is', 'still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('wins', 'over'), ('theory', 'still', 'wins', 'over'), ('phenomenon',), ('Quantum', 'theory', 'still', 'wins', 'over', 'String'), ('is', 'still'), ('still', 'wins', 'over'), ('is', 'still', 'a', 'confusing', 'phenomenon'), ('phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still', 'wins'), ('Quantum', 'theory', 'still'), ('a', 'confusing', 'phenomenon', 'in', 'physics'), ('Singularity', 'is', 'still', 'a', 'confusing'), ('still', 'a', 'confusing', 'phenomenon', 'in'), ('still', 'a', 'confusing'), ('is', 'still', 'a', 'confusing'), ('in', 'physics'), ('Quantum', 'theory', 'still', 'wins', 'over', 'String', 'theory'), ('confusing', 'phenomenon', 'in'), ('theory', 'still'), ('Quantum', 'theory'), ('is',), ('String',), ('over', 'String', 'theory'), ('still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('a', 'confusing'), ('still', 'wins'), ('still',), ('over',), ('still', 'a', 'confusing', 'phenomenon'), ('wins', 'over', 'String'), ('Singularity',), ('confusing',), ('theory',), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon', 'in'), ('still', 'wins', 'over', 'String', 'theory'), ('a', 'confusing', 'phenomenon', 'in'), ('Quantum',), ('theory', 'still', 'wins', 'over', 'String')]
Vectorzied: [1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0] [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1]
Similarity: 1
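(Side note: if I recall correctly, newer NLTK releases ship an equivalent everygrams helper in nltk.util, so you may not need to define your own:)
from nltk.util import everygrams  # available in recent NLTK versions, if memory serves
list(everygrams('a b c'.split()))  # yields the same n-grams (the ordering may differ)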
Now let's try normalizing by no. of possible ngrams:
import numpy as np
from nltk import ngrams
def everygrams(sequence):
"""
This function returns all possible ngrams for n
ranging from 1 to len(sequence).
>>> list(everygrams('a b c'.split()))
[('a',), ('b',), ('c',), ('a', 'b'), ('b', 'c'), ('a', 'b', 'c')]
"""
for n in range(1, len(sequence)+1):
for ng in ngrams(sequence, n):
yield ng
doc1 = "Singularity is still a confusing phenomenon in physics".split()
doc2 = "Quantum theory still wins over String theory".split()
_vec1 = list(everygrams(doc1))
_vec2 = list(everygrams(doc2))
# Create a full dictionary of all possible ngrams.
vec_dict = list(set(_vec1).union(_vec2))
print('Vector Dict:', vec_dict, '\n')
# Now vectorize the documents
vec1 = [1/float(len(_vec1)) if ng in _vec1 else 0 for ng in vec_dict]
vec2 = [1/float(len(_vec2)) if ng in _vec2 else 0 for ng in vec_dict]
print('Vectorized:', vec1, vec2, '\n')
print('Similarity:', np.dot(vec1, vec2), '\n')
It's looking better. [out]:
Vector Dict: [('still', 'a'), ('over', 'String'), ('theory', 'still', 'wins', 'over', 'String', 'theory'), ('String', 'theory'), ('physics',), ('in',), ('wins', 'over', 'String', 'theory'), ('is', 'still', 'a', 'confusing', 'phenomenon', 'in'), ('theory', 'still', 'wins'), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon'), ('a',), ('wins',), ('is', 'still', 'a'), ('Singularity', 'is'), ('phenomenon', 'in'), ('still', 'wins', 'over', 'String'), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still', 'wins', 'over'), ('a', 'confusing', 'phenomenon'), ('Singularity', 'is', 'still', 'a'), ('confusing', 'phenomenon'), ('confusing', 'phenomenon', 'in', 'physics'), ('Singularity', 'is', 'still'), ('is', 'still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('wins', 'over'), ('theory', 'still', 'wins', 'over'), ('phenomenon',), ('Quantum', 'theory', 'still', 'wins', 'over', 'String'), ('is', 'still'), ('still', 'wins', 'over'), ('is', 'still', 'a', 'confusing', 'phenomenon'), ('phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still', 'wins'), ('Quantum', 'theory', 'still'), ('a', 'confusing', 'phenomenon', 'in', 'physics'), ('Singularity', 'is', 'still', 'a', 'confusing'), ('still', 'a', 'confusing', 'phenomenon', 'in'), ('still', 'a', 'confusing'), ('is', 'still', 'a', 'confusing'), ('in', 'physics'), ('Quantum', 'theory', 'still', 'wins', 'over', 'String', 'theory'), ('confusing', 'phenomenon', 'in'), ('theory', 'still'), ('Quantum', 'theory'), ('is',), ('String',), ('over', 'String', 'theory'), ('still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('a', 'confusing'), ('still', 'wins'), ('still',), ('over',), ('still', 'a', 'confusing', 'phenomenon'), ('wins', 'over', 'String'), ('Singularity',), ('confusing',), ('theory',), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon', 'in'), ('still', 'wins', 'over', 'String', 'theory'), ('a', 'confusing', 'phenomenon', 'in'), ('Quantum',), ('theory', 'still', 'wins', 'over', 'String')]
Vectorzied: [0.027777777777777776, 0, 0, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0, 0, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0, 0, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0, 0.027777777777777776, 0, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0, 0] [0, 0.03571428571428571, 0.03571428571428571, 0.03571428571428571, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0.03571428571428571, 0, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0, 0, 0, 0, 0.03571428571428571, 0.03571428571428571, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0.03571428571428571, 0.03571428571428571, 0, 0, 0, 0, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0.03571428571428571, 0, 0.03571428571428571, 0.03571428571428571, 0, 0, 0.03571428571428571, 0.03571428571428571, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0.03571428571428571, 0.03571428571428571]
Similarity: 0.000992063492063
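Note that np.dot here is a raw dot product rather than a true cosine similarity; dividing by the two vector norms (a small sketch reusing vec1 and vec2 from the snippet above) keeps the score in [0, 1] regardless of document length:
cos_sim = np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))
print('Cosine similarity:', cos_sim)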
Now let's try counting the ngrams instead of taking 1/len(_vec), i.e. _vec.count(ng) / len(_vec):
import numpy as np
from nltk import ngrams
def everygrams(sequence):
"""
This function returns all possible ngrams for n
ranging from 1 to len(sequence).
>>> list(everygrams('a b c'.split()))
[('a',), ('b',), ('c',), ('a', 'b'), ('b', 'c'), ('a', 'b', 'c')]
"""
for n in range(1, len(sequence)+1):
for ng in ngrams(sequence, n):
yield ng
doc1 = "Singularity is still a confusing phenomenon in physics".split()
doc2 = "Quantum theory still wins over String theory".split()
_vec1 = list(everygrams(doc1))
_vec2 = list(everygrams(doc2))
# Create a full dictionary of all possible ngrams.
vec_dict = list(set(_vec1).union(_vec2))
print('Vector Dict:', vec_dict, '\n')
# Now vectorize the documents
vec1 = [_vec1.count(ng)/float(len(_vec1)) if ng in _vec1 else 0 for ng in vec_dict]
vec2 = [_vec2.count(ng)/float(len(_vec2)) if ng in _vec2 else 0 for ng in vec_dict]
print('Vectorized:', vec1, vec2, '\n')
print('Similarity:', np.dot(vec1, vec2), '\n')
The similarity score stays the same: the only n-gram the two documents share, ('still',), occurs exactly once in each, and the one repeated n-gram, ('theory',), appears only in doc2, so it doesn't affect the dot product:
Vector Dict: [('still', 'a'), ('over', 'String'), ('theory', 'still', 'wins', 'over', 'String', 'theory'), ('String', 'theory'), ('physics',), ('in',), ('wins', 'over', 'String', 'theory'), ('is', 'still', 'a', 'confusing', 'phenomenon', 'in'), ('theory', 'still', 'wins'), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon'), ('a',), ('wins',), ('is', 'still', 'a'), ('Singularity', 'is'), ('phenomenon', 'in'), ('still', 'wins', 'over', 'String'), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still', 'wins', 'over'), ('a', 'confusing', 'phenomenon'), ('Singularity', 'is', 'still', 'a'), ('confusing', 'phenomenon'), ('confusing', 'phenomenon', 'in', 'physics'), ('Singularity', 'is', 'still'), ('is', 'still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('wins', 'over'), ('theory', 'still', 'wins', 'over'), ('phenomenon',), ('Quantum', 'theory', 'still', 'wins', 'over', 'String'), ('is', 'still'), ('still', 'wins', 'over'), ('is', 'still', 'a', 'confusing', 'phenomenon'), ('phenomenon', 'in', 'physics'), ('Quantum', 'theory', 'still', 'wins'), ('Quantum', 'theory', 'still'), ('a', 'confusing', 'phenomenon', 'in', 'physics'), ('Singularity', 'is', 'still', 'a', 'confusing'), ('still', 'a', 'confusing', 'phenomenon', 'in'), ('still', 'a', 'confusing'), ('is', 'still', 'a', 'confusing'), ('in', 'physics'), ('Quantum', 'theory', 'still', 'wins', 'over', 'String', 'theory'), ('confusing', 'phenomenon', 'in'), ('theory', 'still'), ('Quantum', 'theory'), ('is',), ('String',), ('over', 'String', 'theory'), ('still', 'a', 'confusing', 'phenomenon', 'in', 'physics'), ('a', 'confusing'), ('still', 'wins'), ('still',), ('over',), ('still', 'a', 'confusing', 'phenomenon'), ('wins', 'over', 'String'), ('Singularity',), ('confusing',), ('theory',), ('Singularity', 'is', 'still', 'a', 'confusing', 'phenomenon', 'in'), ('still', 'wins', 'over', 'String', 'theory'), ('a', 'confusing', 'phenomenon', 'in'), ('Quantum',), ('theory', 'still', 'wins', 'over', 'String')]
Vectorzied: [0.027777777777777776, 0, 0, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0, 0, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0, 0, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0, 0.027777777777777776, 0, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0.027777777777777776, 0, 0.027777777777777776, 0, 0.027777777777777776, 0, 0] [0, 0.03571428571428571, 0.03571428571428571, 0.03571428571428571, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0.03571428571428571, 0, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0, 0, 0, 0, 0.03571428571428571, 0.03571428571428571, 0, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0.03571428571428571, 0.03571428571428571, 0, 0, 0, 0, 0, 0, 0.03571428571428571, 0, 0.03571428571428571, 0.03571428571428571, 0, 0.03571428571428571, 0.03571428571428571, 0, 0, 0.03571428571428571, 0.03571428571428571, 0.03571428571428571, 0, 0.03571428571428571, 0, 0, 0.07142857142857142, 0, 0.03571428571428571, 0, 0.03571428571428571, 0.03571428571428571]
Similarity: 0.000992063492063
Other than ngrams you could try skipgrams too: How to compute skipgrams in python?
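A quick sketch of that direction (assuming the skipgrams helper in nltk.util, which I believe recent NLTK versions provide; the parameters here are illustrative):
from nltk.util import skipgrams
doc1 = "Singularity is still a confusing phenomenon in physics".split()
# Bigrams that allow up to 2 skipped words between the two tokens.
print(list(skipgrams(doc1, 2, 2)))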
Source: https://stackoverflow.com/questions/31931408/normalize-ranking-score-with-weights