Using maximum shared memory in CUDA

轻奢々 asked 2020-12-20 09:20 · 1 answer · 746 views

I am unable to use more than 48 KB of shared memory per block (on a V100, CUDA 10.2).

I call

    cudaFuncSetAttribute(my_kernel,
                         cudaFuncAttributePreferredSharedMemoryCarveout,
                         cudaSharedmemCarveoutMaxShared);

but kernel launches requesting more than 48 KB of dynamic shared memory still fail with an invalid value error.
1 Answer
  • 2020-12-20 10:04

From the CUDA C++ Programming Guide:

    Compute capability 7.x devices allow a single thread block to address the full capacity of shared memory: 96 KB on Volta, 64 KB on Turing. Kernels relying on shared memory allocations over 48 KB per block are architecture-specific, as such they must use dynamic shared memory (rather than statically sized arrays) and require an explicit opt-in using cudaFuncSetAttribute() as follows:

    cudaFuncSetAttribute(my_kernel, cudaFuncAttributeMaxDynamicSharedMemorySize, 98304);
    

When I add that line to the code you have shown, the invalid value error goes away. For a Turing device, you would want to change that number from 98304 to 65536. Of course, 65536 would be sufficient for your example as well, although not sufficient to use the maximum available on Volta, as stated in the question title.
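Putting the pieces together, a minimal sketch of the opt-in pattern might look like this (the kernel body and launch configuration are illustrative, not from the original question; compile with e.g. `nvcc -arch=sm_70`):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A kernel that relies on dynamically sized shared memory. Allocations
// over 48 KB per block must be dynamic (extern __shared__), not static.
__global__ void my_kernel(float *out)
{
    extern __shared__ float smem[];   // size supplied at launch time
    int t = threadIdx.x;
    smem[t] = (float)t;
    __syncthreads();
    out[t] = smem[t];
}

int main()
{
    const int shmem_bytes = 98304;    // 96 KB: the per-block maximum on Volta

    // Explicit opt-in, required BEFORE the launch. Without this call,
    // any launch requesting more than 48 KB of dynamic shared memory
    // fails with an invalid value error.
    cudaFuncSetAttribute(my_kernel,
                         cudaFuncAttributeMaxDynamicSharedMemorySize,
                         shmem_bytes);

    float *out;
    cudaMalloc(&out, 256 * sizeof(float));
    // Third launch-configuration argument is the dynamic shared memory size.
    my_kernel<<<1, 256, shmem_bytes>>>(out);
    printf("launch: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(out);
    return 0;
}
```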

In a similar fashion, kernels on Ampere devices should be able to use up to 160 KB of shared memory, dynamically allocated, using the above opt-in mechanism, with the number 98304 changed to 163840.

Note that the above covers the Volta (7.0), Turing (7.5), and Ampere (8.x) cases. GPUs with compute capability prior to 7.x have no ability to address more than 48 KB per threadblock. In some cases these GPUs may have more shared memory per multiprocessor, but that is provided to allow for greater occupancy in certain threadblock configurations; the programmer has no ability to use more than 48 KB per threadblock.
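Rather than hardcoding 98304, 65536, or 163840 per architecture, the opt-in limit can be queried at runtime; a short sketch (assuming device 0):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int max_optin = 0;
    // Maximum dynamic shared memory per block reachable via the opt-in,
    // e.g. 98304 on Volta or 65536 on Turing; pre-7.x devices report 49152.
    cudaDeviceGetAttribute(&max_optin,
                           cudaDevAttrMaxSharedMemoryPerBlockOptin, 0);
    printf("opt-in max shared memory per block: %d bytes\n", max_optin);
    return 0;
}
```

The value returned can then be passed directly to `cudaFuncSetAttribute()` and used as the dynamic shared memory size at launch.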
