When is CUDA's __shared__ memory useful?

眼角桃花 · 2020-12-29 03:13

Can someone please help me with a very simple example on how to use shared memory? The example included in the Cuda C programming guide seems cluttered by irrelevant details

2 Answers
  • 2020-12-29 03:32

    Think of shared memory as an explicitly managed cache - it's only useful if you need to access data more than once, either within the same thread or from different threads within the same block. If you're only accessing data once then shared memory isn't going to help you.
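    To illustrate that reuse, here is a minimal sketch of a block-level sum reduction (the names and the block size of 256 are my own choices, not from the question): each element is read from global memory exactly once, but then read back from shared memory at several levels of the reduction tree, by different threads.

    ```cuda
    __global__ void block_sum(const float *in, float *out)
    {
        __shared__ float buf[256];   // one element per thread; assumes blockDim.x == 256
        int tid = threadIdx.x;

        // one global read per element; everything after this is shared-memory traffic
        buf[tid] = in[blockIdx.x * blockDim.x + tid];
        __syncthreads();

        // tree reduction: each surviving thread re-reads values other threads wrote
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                buf[tid] += buf[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            out[blockIdx.x] = buf[0];   // one partial sum per block
    }
    ```

    Launched as, say, `block_sum<<<numBlocks, 256>>>(in, out);`. Without shared memory, every level of the tree would go back to global memory, so each input element would effectively be read many times instead of once.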

  • 2020-12-29 03:47

    In the specific case you mention, shared memory is not useful, for the following reason: each data element is used only once. For shared memory to help, you must reuse the data you transfer into it several times, with good access patterns. The reason is simple: reading straight from global memory costs 1 global memory read and zero shared memory accesses; staging the data in shared memory first costs 1 global memory read plus a shared memory write and read, which takes longer.

    Here's a simple example, where each thread in the block computes the corresponding value, squared, plus the average of both its left and right neighbors, squared:

      __global__ void compute_it(float *data)
      {
         int tid = threadIdx.x;
         __shared__ float myblock[1024];
         float tmp;
    
         // load the thread's data element into shared memory
         myblock[tid] = data[tid];
    
         // ensure that all threads have loaded their values into
         // shared memory; otherwise, one thread might be computing
     // on uninitialized data.
         __syncthreads();
    
         // compute the average of this thread's left and right neighbors
         tmp = (myblock[tid > 0 ? tid - 1 : 1023] + myblock[tid < 1023 ? tid + 1 : 0]) * 0.5f;
     // square the previous result and add this thread's value, squared
         tmp = tmp*tmp + myblock[tid] * myblock[tid];
    
         // write the result back to global memory
         data[tid] = tmp;
      }
    

    Note that this is written to work with a single block: it assumes block dimension (1024, 1, 1) and grid dimension (1, 1, 1), and the neighbor indices wrap around at the ends of the block. The extension to more blocks should be straightforward.
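    One hedged sketch of that multi-block extension (the halo cells and the separate output array are my additions, not part of the answer above): each block stages its own tile plus one neighbor on each side, and writing to a separate output array avoids a race where one block overwrites data another block has yet to read.

    ```cuda
    __global__ void compute_it_multi(const float *in, float *out, int n)
    {
        __shared__ float tile[1024 + 2];   // tile plus one halo cell per side; assumes blockDim.x == 1024
        int tid = threadIdx.x;
        int gid = blockIdx.x * blockDim.x + tid;

        // interior element, offset by 1 to leave room for the left halo
        tile[tid + 1] = in[gid];

        // first and last threads also fetch the halo cells, wrapping at the array ends
        if (tid == 0)
            tile[0] = in[(gid + n - 1) % n];
        if (tid == blockDim.x - 1)
            tile[blockDim.x + 1] = in[(gid + 1) % n];
        __syncthreads();

        // same computation as before: average of neighbors, squared, plus own value squared
        float tmp = (tile[tid] + tile[tid + 2]) * 0.5f;
        out[gid] = tmp * tmp + tile[tid + 1] * tile[tid + 1];
    }
    ```

    This assumes n is a multiple of the block size; a bounds check on gid would be needed otherwise.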
