Fast(er) numpy fancy indexing and reduction?

伪装坚强ぢ · 2020-12-17 18:23

I'm trying to use and accelerate fancy indexing to "join" two arrays and sum over one of the result's axes.

Something like this:

$ ipython
In [1]: im         
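(The pasted session is truncated. A minimal, self-contained reproduction of the pattern — array shapes are assumptions, chosen to match the answer below — might look like:)

```python
import numpy as np

# Hypothetical shapes: a 12x6 lookup table and 100k rows of column indices.
a = np.random.randn(12, 6).astype(np.float32)
c = np.random.randint(0, 6, size=(100000, 12)).astype(np.uint8)
r = np.arange(12)

# Fancy indexing "joins" the two arrays; r broadcasts against each row of c,
# then the result is reduced over the last axis.
out = a[r, c].sum(-1)
print(out.shape)  # (100000,)
```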


        
1 Answer
  • 2020-12-17 18:42

    numpy.take is much faster than fancy indexing for some reason. The only trick is that it treats the array as flat.

    In [1]: a = np.random.randn(12,6).astype(np.float32)
    
    In [2]: c = np.random.randint(0,6,size=(100000,12)).astype(np.uint8)
    
    In [3]: r = np.arange(12)
    
    In [4]: %timeit a[r,c].sum(-1)
    10 loops, best of 3: 46.7 ms per loop
    
    In [5]: rr, cc = np.broadcast_arrays(r,c)
    
    In [6]: flat_index = rr*a.shape[1] + cc
    
    In [7]: %timeit a.take(flat_index).sum(-1)
    100 loops, best of 3: 5.5 ms per loop
    
    In [8]: (a.take(flat_index).sum(-1) == a[r,c].sum(-1)).all()
    Out[8]: True
    

    I think the only other way you're going to see much of a speed improvement beyond this would be to write a custom kernel for a GPU using something like PyCUDA.
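    As a footnote to the flat-index trick: NumPy's `np.ravel_multi_index` computes the same C-order flat index as the manual `rr*a.shape[1] + cc` arithmetic (with bounds checking on top), which is less error-prone for arrays with more dimensions. A quick sketch verifying the equivalence, using the same shapes as the session above:

    ```python
    import numpy as np

    # Same setup as the answer's session.
    a = np.random.randn(12, 6).astype(np.float32)
    c = np.random.randint(0, 6, size=(100000, 12)).astype(np.uint8)
    r = np.arange(12)

    rr, cc = np.broadcast_arrays(r, c)

    # ravel_multi_index converts (row, col) pairs into C-order flat indices,
    # i.e. row * ncols + col for a 2-D array, and raises on out-of-bounds indices.
    flat_index = np.ravel_multi_index((rr, cc), a.shape)

    assert np.array_equal(flat_index, rr * a.shape[1] + cc)
    assert np.allclose(a.take(flat_index).sum(-1), a[r, c].sum(-1))
    ```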
