Question
Cosine similarity between two equally-sized vectors of reals is defined as their dot product divided by the product of their norms.

To represent vectors, I have a large table of float arrays, e.g. CREATE TABLE foo(vec float[]). Given a particular float array, I need to quickly (with an index, not a seqscan) find the closest arrays in that table by cosine similarity, e.g. SELECT * FROM foo ORDER BY cos_sim(vec, ARRAY[1.0, 4.5, 2.2]) DESC LIMIT 10;

But what do I use?
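For reference, cos_sim above isn't a built-in; it's the query I wish I could write. Purely to pin down the definition, a naive helper of my own could be declared roughly like this (it still forces a seqscan, so it doesn't solve the indexing problem):

    -- Naive cosine similarity over float arrays; illustration only, not indexable.
    CREATE FUNCTION cos_sim(a float[], b float[]) RETURNS float AS $$
      SELECT sum(x * y) / (sqrt(sum(x * x)) * sqrt(sum(y * y)))
      FROM unnest(a, b) AS t(x, y);
    $$ LANGUAGE SQL IMMUTABLE;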
pg_trgm's cosine similarity support is different: it compares text, and I'm not sure exactly what it does. An extension called smlar (here) also has cosine similarity support for float arrays, but again it's doing something different. What I described is commonly used in data analysis to compare features of documents, so I was thinking there'd be Postgres support for it.
Answer 1:
I gather that no extension does this, so I've found a limited workaround:
If A and B are both normalized (length 1), then cos(A, B) = 1 - 0.5 * ||A - B||^2, where ||A - B|| is the Euclidean distance and cos(A, B) is the cosine similarity. (This follows from ||A - B||^2 = ||A||^2 + ||B||^2 - 2 * A.B = 2 - 2 * cos(A, B) when both norms are 1.) So greater Euclidean distance <=> lesser cosine similarity, which makes sense intuitively if you imagine a unit circle. And since scaling a vector's magnitude without changing its direction doesn't affect its cosine similarity to anything, normalizing is harmless. Great, so I can normalize my vectors and compare their Euclidean distances...
There's a nice answer here about the cube extension, which supports n-dimensional points and GiST-indexed ordering by Euclidean distance. But it only supports 100 or fewer dimensions (the limit can be hacked higher, but I had issues around 135 dimensions and up, so now I'm wary). It also requires Postgres 9.6 or later.
So:
- Make sure I can live with at most 100 dimensions, and upgrade to Postgres 9.6 or later.
- Fill my table with float arrays representing my vectors.
- Normalize the vectors and store them in an extra column of cube points (a rough SQL sketch of this setup follows the example query below).
- Create a GiST index on that column.
- Order by Euclidean distance ascending to get cosine similarity descending:
EXPLAIN SELECT * FROM mytable ORDER BY normalized <-> cube(array[1,2,3,4,5,6,7,8,9,0]) LIMIT 10;
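For concreteness, my setup looks roughly like this (table and column names are placeholders, and the UPDATE assumes every vec has a nonzero norm and at most 100 elements):

    CREATE EXTENSION IF NOT EXISTS cube;

    ALTER TABLE mytable ADD COLUMN normalized cube;

    -- Normalize each float[] to unit length and store it as a cube point.
    UPDATE mytable
    SET normalized = cube(ARRAY(
      SELECT x / sqrt((SELECT sum(y * y) FROM unnest(vec) AS y))
      FROM unnest(vec) WITH ORDINALITY AS t(x, ord)
      ORDER BY ord
    ));

    -- GiST index so the ORDER BY ... <-> ... LIMIT query above can use nearest-neighbor search.
    CREATE INDEX ON mytable USING gist (normalized);

With that in place, EXPLAIN on the query above should show an index scan on the GiST index instead of a sort over the whole table.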
If I need more than 100 dimensions, I might be able to achieve this using multiple indexed columns. Will update the answer in that case.
Update: I'm pretty sure there's nothing to be gained by splitting a >100-dimension vector across multiple cube columns; I end up having to scan the entire table anyway.
Answer 2:
If you're ok with an inexact solution, you can use random projection: https://en.wikipedia.org/wiki/Random_projection.
Randomly generate k vectors of the same length as your data vectors and store them somewhere; you will use these to spatially bin your data. For each vector in your table, take its dot product with each of the random vectors and store the sign of each product.

Vectors that have the same sign for every random vector go in the same bin, and in general vectors with high cosine similarity end up in the same bin. You can pack the signs as bits into an integer and use a normal index to pull the vectors in the same bin as your query, then do a sequential search over just those to find the ones with the highest cosine similarity.
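A rough SQL sketch of what I mean (everything here is hypothetical: the random_planes table, the choice of k = 16, and the final re-ranking, which could use a naive cos_sim function like the one sketched in the question):

    -- k = 16 random vectors of the same length as the data vectors,
    -- generated once (e.g. in application code) and stored with ids 1..16.
    CREATE TABLE random_planes(id int PRIMARY KEY, plane float[]);

    -- One integer bucket per row: bit (id - 1) is set when vec . plane > 0.
    ALTER TABLE foo ADD COLUMN bucket int;

    UPDATE foo
    SET bucket = (
      SELECT sum(CASE WHEN s.dp > 0 THEN 1 << (s.id - 1) ELSE 0 END)::int
      FROM (
        SELECT p.id,
               (SELECT sum(v * w)
                FROM unnest(foo.vec) WITH ORDINALITY AS a(v, i)
                JOIN unnest(p.plane) WITH ORDINALITY AS b(w, j) ON i = j) AS dp
        FROM random_planes AS p
      ) AS s
    );

    CREATE INDEX ON foo (bucket);

    -- At query time, compute the query vector's bucket the same way, then:
    -- SELECT * FROM foo WHERE bucket = :query_bucket
    -- ORDER BY cos_sim(vec, :query_vec) DESC LIMIT 10;

Only the rows whose bucket matches the query's bucket get the exact cosine-similarity comparison.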
Source: https://stackoverflow.com/questions/44792202/postgres-index-on-cosine-similarity-of-float-arrays-for-one-to-many-search