Bit string nearest neighbour searching

I have hundreds of thousands of sparse bit strings of length 32 bits.

I'd like to do a nearest neighbour search on them, and look-up performance is critical.

2 Answers
  • 2021-01-14 00:39

    I just came across a paper that addresses this problem.

    Randomized algorithms and NLP: using locality sensitive hash function for high speed noun clustering (Ravichandran et al., 2005)

    The basic idea is similar to Denis's answer (sort lexicographically by different permutations of the bits) but it includes a number of additional ideas and further references for articles on the topic.

    It is actually implemented in https://github.com/soundcloud/cosine-lsh-join-spark, which is where I found it.
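
    For a flavour of the permute-and-sort idea, here is a rough Python sketch of the scheme as I understand it (this is not the cosine-lsh-join-spark implementation; the number of permutations and the window size are arbitrary choices of mine):

    import random, bisect

    def permute_bits( x, perm ):
        # rearrange the 32 bits of x according to perm, a permutation of 0..31
        return sum( ((x >> old) & 1) << new for new, old in enumerate(perm) )

    def build_tables( X, nperms=8, seed=0 ):
        # one lexicographically sorted copy of X per random bit permutation
        rng = random.Random(seed)
        tables = []
        for _ in range(nperms):
            perm = list(range(32))
            rng.shuffle(perm)
            tables.append( (perm, sorted( (permute_bits(x, perm), i) for i, x in enumerate(X) )) )
        return tables

    def query( q, X, tables, window=50 ):
        # binary-search each table, then scan a small window for the Hamming-nearest word
        best_i, best_d = 0, 33
        for perm, T in tables:
            p = bisect.bisect_left( T, (permute_bits(q, perm), -1) )
            for _, i in T[ max(0, p - window) : p + window ]:
                d = bin( q ^ X[i] ).count("1")
                if d < best_d:
                    best_i, best_d = i, d
        return best_i, best_d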

  • 2021-01-14 00:53

    Here's a fast and easy method, then a variant with better performance at the cost of more memory.

    In: array Uint X[], e.g. 1M 32-bit words
    Wanted: a function near( Uint q ) --> j with small hammingdist( q, X[j] )
    Method: binary search q in sorted X, then linear search a block around that.
    In Python, for example (X must be sorted once up front):

    import bisect

    def near( q, X, Blocksize=100 ):
        # X sorted ascending; bisect_left matches q in its leading bits
        i = bisect.bisect_left( X, q )
        lo, hi = max( 0, i - Blocksize // 2 ), min( len(X), i + Blocksize // 2 )
        # linear-search the Blocksize words around i, return the Hamming-nearest index
        return min( range(lo, hi), key=lambda j: bin( q ^ X[j] ).count("1") )
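
    A minimal way to exercise it (the random 1M-word data here is just to mirror the figures below, not the original benchmark code):

    import random

    random.seed(1)
    X = sorted( random.getrandbits(32) for _ in range(1 << 20) )   # ~1M random 32-bit words
    q = random.getrandbits(32)
    j = near( q, X )
    print( j, bin( q ^ X[j] ).count("1") )   # index found and its Hamming distance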
    

    This is fast: binary search of 1M words + nearest hammingdist in a block of size 100 takes < 10 us on my Mac ppc. (This is highly cache-dependent; your mileage will vary.)

    How close does this come to finding the true nearest X[j]? I can only experiment, not do the math:
    for 1M random queries in 1M random words, the nearest match found is on average 4-5 bits away, vs. 3 bits away for the true nearest (a linear scan of all 1M):

    near32  N 1048576  Nquery 1048576  Blocksize 100 
    binary search, then nearest +- 50
    7 usec
    distance distribution: 0 4481 38137 185212  443211 337321 39979 235  0
    
    near32  N 1048576  Nquery 100  Blocksize 1048576 
    linear scan all 1048576
    38701 usec
    distance distribution: 0 0 7 58  35 0
    

    Run your own data with block sizes of, say, 50 and 100 to see how the match distances drop.


    To get even nearer, at the cost of twice the memory,
    make a copy Xswap of X with upper / lower halfwords swapped, and return the better of

    near( q, X, Blocksize )
    near( swap q, Xswap, Blocksize )
    

    With lots of memory, one can use many more bit-shuffled copies of X, e.g. 32 rotations.
    I have no idea how performance varies with Nshuffle and Blocksize — a question for LSH theorists.
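
    Here is a sketch of the two-copy variant, reusing near() from above (swap16 and near2 are names I made up for illustration):

    def swap16( x ):
        # swap the upper and lower 16-bit halfwords of a 32-bit word
        return ((x >> 16) | (x << 16)) & 0xFFFFFFFF

    # preprocess once: a sorted X and a sorted swapped copy
    Xsorted = sorted( X )
    Xswap   = sorted( swap16(x) for x in X )

    def near2( q, Blocksize=100 ):
        j1 = near( q, Xsorted, Blocksize )
        j2 = near( swap16(q), Xswap, Blocksize )
        d1 = bin( q ^ Xsorted[j1] ).count("1")
        d2 = bin( swap16(q) ^ Xswap[j2] ).count("1")   # same distance as to the un-swapped word
        # swap16 is its own inverse, so swap16( Xswap[j2] ) recovers the original word
        return (Xsorted[j1], d1) if d1 <= d2 else (swap16( Xswap[j2] ), d2)

    Rotated or randomly permuted copies of X slot into the same pattern, at the cost of one more sorted copy and one more near() call each.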


    (Added): To near-match bit strings of, say, 320 bits (10 words), make 10 arrays of pointers, sorted on word 0, word 1 ..., and search blocks with binsearch as above:

    nearest( query word 0, Sortedarray0, 100 ) -> min Hammingdist e.g. 42 of 320
    nearest( query word 1, Sortedarray1, 100 ) -> min Hammingdist 37
    nearest( query word 2, Sortedarray2, 100 ) -> min Hammingdist 50
    ...
    -> e.g. the 37.
    

    This will of course miss near-matches where no single word is close, but it's very simple, and sort and binsearch are blazingly fast. The pointer arrays take exactly as much space as the data bits. 100 words (3200 bits) would work in exactly the same way.
    But: this works only if there are roughly equal numbers of 0 bits and 1 bits, not 99% 0 bits.
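
    A rough Python sketch of that multi-word scheme (each string is a tuple of ten 32-bit words here; build_word_tables and near320 are my own naming, reusing the block-scan idea above):

    import bisect

    def hamming_words( a, b ):
        # Hamming distance between two equal-length tuples of 32-bit words
        return sum( bin(x ^ y).count("1") for x, y in zip(a, b) )

    def build_word_tables( X, nwords=10 ):
        # for each word position k: indices of X sorted by word k, plus the sorted keys
        tables = []
        for k in range(nwords):
            order = sorted( range(len(X)), key=lambda i: X[i][k] )
            tables.append( (order, [ X[i][k] for i in order ]) )
        return tables

    def near320( q, X, tables, Blocksize=100 ):
        # binary-search each per-word table, scan a block, keep the overall Hamming-nearest
        best_i, best_d = 0, 32 * len(q) + 1
        for k, (order, keys) in enumerate(tables):
            p = bisect.bisect_left( keys, q[k] )
            for i in order[ max(0, p - Blocksize // 2) : p + Blocksize // 2 ]:
                d = hamming_words( q, X[i] )
                if d < best_d:
                    best_i, best_d = i, d
        return best_i, best_d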
