Question:
I have a question on working with the Python CUDA libraries from Continuum's Accelerate and Numba packages. Is using the decorator `@jit` with `target='gpu'` the same as `@cuda.jit`?
Answer 1:
No, they are not the same, although the eventual compilation path (down through PTX into assembler) is. The `@jit` decorator is the general compiler path, which can optionally be steered onto a CUDA device. The `@cuda.jit` decorator is effectively the low-level Python CUDA kernel dialect which Continuum Analytics have developed. So you get support for CUDA built-in variables like `threadIdx` and memory-space specifiers like `__shared__` in `@cuda.jit`.
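
For illustration, here is a minimal `@cuda.jit` kernel sketch (the kernel name and launch configuration are made up for the example). Note that in Numba the CUDA built-ins are reached through the `cuda` module (`cuda.threadIdx`, `cuda.blockDim`, and so on), and shared memory is declared with `cuda.shared.array` rather than a C-style `__shared__` keyword:

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_kernel(arr, factor):
    # CUDA built-in variables are exposed on the cuda module
    i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    if i < arr.size:  # guard against threads past the end of the array
        arr[i] *= factor

data = np.arange(1024, dtype=np.float32)
# launch configuration: [blocks per grid, threads per block]
scale_kernel[4, 256](data, 2.0)
```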
If you want to write a CUDA kernel in Python and compile and run it, use `@cuda.jit`. Otherwise, if you want to accelerate an existing piece of Python, use `@jit` with a CUDA target.
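
By contrast, the "accelerate existing Python" route needs no thread indexing at all. The `target='gpu'` spelling comes from the old Accelerate-era API; a rough modern equivalent in Numba is `@vectorize` with `target='cuda'`, sketched here on a hypothetical elementwise function:

```python
import numpy as np
from numba import vectorize

# plain elementwise Python function; Numba generates the
# grid/block launch logic itself, no kernel dialect needed
@vectorize(['float32(float32, float32)'], target='cuda')
def add(a, b):
    return a + b

x = np.arange(1024, dtype=np.float32)
y = np.ones_like(x)
result = add(x, y)  # executes on the GPU, returns a NumPy array
```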
Source: https://stackoverflow.com/questions/35890045/difference-between-cuda-jit-and-jittarget-gpu