Question
I have two numpy arrays of the same length that contain binary values
import numpy as np
a=np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b=np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])
I want to compute the Hamming distance between them as fast as possible, since I have millions of such distance computations to make.
A simple but slow option is this (taken from Wikipedia):
%timeit sum(ch1 != ch2 for ch1, ch2 in zip(a, b))
10000 loops, best of 3: 79 us per loop
I have come up with faster options, inspired by some answers here on Stack Overflow.
%timeit np.sum(np.bitwise_xor(a,b))
100000 loops, best of 3: 6.94 us per loop
%timeit len(np.bitwise_xor(a,b).nonzero()[0])
100000 loops, best of 3: 2.43 us per loop
I'm wondering if there are even faster ways to compute this, possibly using cython?
Answer 1:
There is a ready-made numpy function that beats len((a != b).nonzero()[0]) ;)
np.count_nonzero(a!=b)
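A quick sanity check with the arrays from the question (my own session, not part of the original answer) confirms that the one-liner agrees with the slower variants:

```python
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

# count_nonzero on the boolean mismatch mask is the Hamming distance
d = np.count_nonzero(a != b)
print(d)  # 7, same as len(np.bitwise_xor(a, b).nonzero()[0])
```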
Answer 2:
Compared to 1.07µs for np.count_nonzero(a!=b) on my platform, gmpy2.hamdist gets it down to about 143ns after conversion of each array to an mpz (multiple-precision integer):
import numpy as np
from gmpy2 import mpz, hamdist, pack
a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])
Based on a tip from @casevh, conversion from a 1D array of ones and zeros to a gmpy2 mpz object can be done reasonably efficiently with gmpy2.pack(list(reversed(list(array))),1).
# gmpy2.pack reverses bit order but that does not affect
# hamdist since both its arguments are reversed
ampz = pack(list(a),1) # takes about 4.29µs
bmpz = pack(list(b),1)
hamdist(ampz,bmpz)
Out[8]: 7
%timeit hamdist(ampz,bmpz)
10000000 loops, best of 3: 143 ns per loop
For relative comparison, on my platform:
%timeit np.count_nonzero(a!=b)
1000000 loops, best of 3: 1.07 µs per loop
%timeit len((a != b).nonzero()[0])
1000000 loops, best of 3: 1.55 µs per loop
%timeit len(np.bitwise_xor(a,b).nonzero()[0])
1000000 loops, best of 3: 1.7 µs per loop
%timeit np.sum(np.bitwise_xor(a,b))
100000 loops, best of 3: 5.8 µs per loop
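For readers without gmpy2 installed, the same pack-the-bits-into-one-integer idea can be sketched with plain Python ints (my illustration, not from the original answer); the popcount of the XOR, here via bin(...).count("1"), plays the role of hamdist:

```python
import numpy as np

a = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0])
b = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])

def to_int(bits):
    # pack a 0/1 array into one Python integer; the bit order is
    # irrelevant for the distance as long as both sides use the same order
    n = 0
    for bit in bits:
        n = (n << 1) | int(bit)
    return n

ai, bi = to_int(a), to_int(b)
# popcount of the XOR is the Hamming distance, like gmpy2.hamdist
print(bin(ai ^ bi).count("1"))  # 7
```

This is slower than gmpy2 but makes the trick explicit: one conversion per array, then each distance is a single XOR plus a popcount.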
Answer 3:
Using pythran can bring extra benefit here:
$ cat hamm.py
#pythran export hamm(int[], int[])
from numpy import nonzero
def hamm(a,b):
    return len(nonzero(a != b)[0])
As a reference (without pythran):
$ python -m timeit -s 'import numpy as np; a = np.random.randint(0,2, 100); b = np.random.randint(0,2, 100); from hamm import hamm' 'hamm(a,b)'
100000 loops, best of 3: 4.66 usec per loop
While after pythran compilation:
$ python -m pythran.run hamm.py
$ python -m timeit -s 'import numpy as np; a = np.random.randint(0,2, 100); b = np.random.randint(0,2, 100); from hamm import hamm' 'hamm(a,b)'
1000000 loops, best of 3: 0.745 usec per loop
That's roughly a 6x speedup over the numpy implementation, as pythran skips the creation of an intermediate array when evaluating the element-wise comparison.
I also measured:
from numpy import count_nonzero

def hamm(a,b):
    return count_nonzero(a != b)
And I get 3.11 usec per loop for the Python version and 0.427 usec per loop with the Pythran one.
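Since pythran-annotated code is still valid Python, both variants can be tried without compiling anything (a sketch of my own; hamm2 is just my name for the second variant, and the #pythran export comments are ignored by the plain interpreter):

```python
import numpy as np
from numpy import nonzero, count_nonzero

#pythran export hamm(int[], int[])
def hamm(a, b):
    # length of the index array of positions where a and b differ
    return len(nonzero(a != b)[0])

#pythran export hamm2(int[], int[])
def hamm2(a, b):
    # count the True entries of the mismatch mask directly
    return count_nonzero(a != b)

a = np.random.randint(0, 2, 100)
b = np.random.randint(0, 2, 100)
assert hamm(a, b) == hamm2(a, b)
```

Running `python -m pythran.run` on such a file then produces a native module with the same interface, which is where the reported speedups come from.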
Disclaimer: I'm one of the Pythran developers.
Source: https://stackoverflow.com/questions/32730202/fast-hamming-distance-computation-between-binary-numpy-arrays