Why is half-precision complex float arithmetic not supported in Python and CUDA?


Question


NumPy has complex64, corresponding to a pair of float32 values.

It also has float16, but no corresponding complex32.
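To make the asymmetry concrete, here is a minimal check in plain NumPy (the last line fails as of current NumPy releases):

    import numpy as np

    np.dtype("complex64")   # exists: two float32 components
    np.dtype("float16")     # exists: IEEE 754 half precision
    np.dtype("complex32")   # raises TypeError: data type 'complex32' not understood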

How come? I have signal processing calculations involving FFTs where I think complex32 would be sufficient, but I don't see how to get there. In particular, I was hoping for a speedup on an NVIDIA GPU with CuPy.
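For what it's worth, the closest thing I can sketch today is a storage-only workaround (half-precision storage, single-precision math), not true complex32 arithmetic: keep the real and imaginary parts as float16 and upcast to complex64 just for the FFT. Note that numpy.fft always computes in double precision, while CuPy's FFT keeps complex64 inputs in single precision:

    import cupy as cp

    n = 1 << 20

    # Half-precision storage for the two components (CuPy's random
    # generators don't produce float16 directly, so cast from float32).
    re = cp.random.rand(n, dtype=cp.float32).astype(cp.float16)
    im = cp.random.rand(n, dtype=cp.float32).astype(cp.float16)

    # Upcast to complex64 only for the transform itself.
    x = re.astype(cp.float32) + 1j * im.astype(cp.float32)  # complex64
    X = cp.fft.fft(x)                                       # stays complex64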

However, it seems that float16 is slower on the GPU rather than faster.

Why is half-precision unsupported and/or overlooked?

A related question is why we don't have complex integers either, as these might also present an opportunity for speedup.


Answer 1:


This has been an open issue in the CuPy repo for some time:

https://github.com/cupy/cupy/issues/3370

But there's no concrete work plan yet; most of the effort is still exploratory.

One reason it's not trivial to work out is that there is no numpy.complex32 dtype that we can directly import (note that all of CuPy's dtypes are just aliases of NumPy's), so there would be problems whenever a device-host transfer is requested. The other is that no native mathematical functions exist for complex32 on either CPU or GPU, so we would need to write them all ourselves to handle casting, ufuncs, and so on. The linked issue points to a NumPy discussion, and my impression is that it's currently not being considered...
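For completeness, the closest stand-in available today is a storage-only convention, not a real dtype. This is a sketch under that assumption (a structured dtype of two float16 fields); no arithmetic or ufuncs work on it, which is exactly the gap described above:

    import numpy as np

    # Not a real dtype: a structured "pseudo complex32" usable only for
    # storage and host-device transfer; arithmetic does not work on it.
    complex32 = np.dtype([("re", np.float16), ("im", np.float16)])

    buf = np.zeros(4, dtype=complex32)
    buf["re"] = [1.0, 2.0, 3.0, 4.0]
    buf["im"] = 0.5

    # Upcast to a supported complex dtype before doing any math.
    z = buf["re"].astype(np.float32) + 1j * buf["im"].astype(np.float32)
    print(z.dtype)  # complex64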



Source: https://stackoverflow.com/questions/56777807/why-is-half-precision-complex-float-arithmetic-not-supported-in-python-and-cuda
