Difference between @cuda.jit and @jit(target='gpu')

Submitted by 百般思念 on 2021-02-07 09:13:00

Question


I have a question about working with the Python CUDA libraries from Continuum's Accelerate and numba packages. Is using the decorator @jit with target='gpu' the same as @cuda.jit?


Answer 1:


No, they are not the same, although the eventual compilation path (through PTX and into assembler) is. The @jit decorator is the general compiler path, which can optionally be steered onto a CUDA device. The @cuda.jit decorator is, in effect, the low-level Python CUDA kernel dialect that Continuum Analytics developed. With @cuda.jit you therefore get support for CUDA built-in variables such as threadIdx and memory-space specifiers such as __shared__.
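As a minimal sketch of that kernel dialect (assuming a CUDA-capable GPU and a numba build with CUDA support; the kernel name and block size are illustrative, not from the answer), here is a @cuda.jit kernel that reads the threadIdx built-in and uses a shared-memory array, numba's counterpart of a CUDA C __shared__ declaration:

    import numpy as np
    from numba import cuda, float32

    @cuda.jit
    def reverse_block(arr):
        # CUDA built-in variables are exposed as attributes of the cuda module
        tid = cuda.threadIdx.x
        # numba's counterpart of a CUDA C __shared__ array; the shape
        # must be a compile-time constant
        tmp = cuda.shared.array(shape=32, dtype=float32)
        tmp[tid] = arr[tid]
        cuda.syncthreads()
        arr[tid] = tmp[31 - tid]

    data = np.arange(32, dtype=np.float32)
    reverse_block[1, 32](data)  # launch one block of 32 threads
    print(data)                 # 31.0, 30.0, ..., 0.0

The kernel[blocks, threads] launch syntax and the implicit host-to-device (and back) copy of the numpy array are part of the same @cuda.jit programming model.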

If you want to write a CUDA kernel in Python and compile and run it, use @cuda.jit. Otherwise, if you want to accelerate an existing piece of Python, use @jit with a CUDA target.
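For that second path, note that @jit(target='gpu') dates from the Accelerate era and the target keyword has since been deprecated in numba's @jit. A minimal sketch of the surviving way to steer an ordinary elementwise Python function onto a CUDA device is @vectorize with target='cuda' (the function name and signature here are illustrative, not from the answer):

    import numpy as np
    from numba import vectorize

    # An ordinary scalar function, compiled into a CUDA ufunc;
    # no thread indexing or launch configuration is written by hand.
    @vectorize(['float32(float32, float32)'], target='cuda')
    def scaled_add(a, b):
        return 2.0 * a + b

    x = np.ones(1024, dtype=np.float32)
    y = np.arange(1024, dtype=np.float32)
    print(scaled_add(x, y)[:5])

This illustrates the trade-off the answer describes: the general compiler path keeps the code device-agnostic, while @cuda.jit gives explicit control over threads and shared memory at the cost of writing CUDA-style code.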



Source: https://stackoverflow.com/questions/35890045/difference-between-cuda-jit-and-jittarget-gpu
