Why does `numpy.einsum` work faster with `float32` than `float16` or `uint16`? [duplicate]

Submitted by 我们两清 on 2019-12-01 11:18:50

Question


In my benchmark using numpy 1.12.0, calculating dot products with float32 ndarrays is much faster than with the other data types:

In [3]: f16 = np.random.random((500000, 128)).astype('float16')
In [4]: f32 = np.random.random((500000, 128)).astype('float32')
In [5]: uint = np.random.randint(1, 60000, (500000, 128)).astype('uint16')

In [7]: %timeit np.einsum('ij,ij->i', f16, f16)
1 loop, best of 3: 320 ms per loop

In [8]: %timeit np.einsum('ij,ij->i', f32, f32)
The slowest run took 4.88 times longer than the fastest. This could mean that an intermediate result is being cached.
10 loops, best of 3: 19 ms per loop

In [9]: %timeit np.einsum('ij,ij->i', uint, uint)
10 loops, best of 3: 43.5 ms per loop

I've tried profiling einsum, but it just delegates all the computation to a C function, so I can't tell what the main reason for this performance difference is.
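One way to check whether the dtype of the arithmetic (rather than of the storage) is the bottleneck, without profiling the C internals, is `einsum`'s `dtype` argument, which forces the internal computation type. This is a sketch using the arrays from the question:

```python
import numpy as np

f16 = np.random.random((500000, 128)).astype('float16')

# Same contraction, but ask einsum to do the arithmetic in float32.
# The operands stay float16 in memory; only the computation dtype changes.
r32 = np.einsum('ij,ij->i', f16, f16, dtype='float32')

# Plain float16 einsum for comparison (the slow case from the benchmark).
r16 = np.einsum('ij,ij->i', f16, f16)

print(r32.dtype, r16.dtype)  # float32 float16
```

If the `dtype='float32'` variant runs at roughly the float32 speed from the benchmark, the cost is in the per-element half-float arithmetic, not in reading float16 data from memory.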


Answer 1:


My tests with your f16 and f32 arrays show that f16 is 5-10x slower for all calculations. Only when doing byte-level operations like array copies does the more compact nature of float16 show any speed advantage.
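If float16 is only needed to keep the data compact in memory, a practical workaround (my own sketch, not something from the answer) is to cast to float32 once and run the contraction on the natively supported type. The `bench` helper below is a hypothetical stand-in for `%timeit` that takes the best wall-clock time of a few runs:

```python
import time
import numpy as np

f16 = np.random.random((500000, 128)).astype('float16')

def bench(fn, repeats=3):
    """Return the best wall-clock time of `fn` over `repeats` runs."""
    best = float('inf')
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

# Software-emulated half-precision arithmetic (the slow case).
t_half = bench(lambda: np.einsum('ij,ij->i', f16, f16))

# Upcast then contract; the cast is deliberately timed inside the lambda
# so its one-time cost counts against the workaround.
t_single = bench(lambda: np.einsum('ij,ij->i', f16.astype('float32'),
                                   f16.astype('float32')))

print(f'float16: {t_half:.3f}s  upcast-to-float32: {t_single:.3f}s')
```

On hardware without native half-float arithmetic, the upcast version should land close to the pure float32 timings from the question, even with the cast included.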

https://gcc.gnu.org/onlinedocs/gcc/Half-Precision.html

That is the section of the gcc docs about half-precision floats (fp16). With the right processor and the right compiler switches, it may be possible to build numpy in a way that speeds up these calculations. We'd also have to check whether numpy's .h files have any provision for special handling of half floats.

Earlier questions that may be good enough to serve as duplicate references:

Python Numpy Data Types Performance

Python numpy float16 datatype operations, and float8?



Source: https://stackoverflow.com/questions/44103815/why-does-numpy-einsum-work-faster-with-float32-than-float16-or-uint16
