Persistent threads in OpenCL and CUDA

臣服心动 2020-12-28 22:55

I have read some papers talking about "persistent threads" for GPGPU, but I don't really understand it. Can anyone give me an example or show me the use of this programming model?

2 Answers
  • 2020-12-28 23:31

    CUDA exploits the Single Instruction Multiple Data (SIMD) programming model. The computational threads are organized in blocks, and each thread block is assigned to a Streaming Multiprocessor (SM). On an SM, a thread block is executed by arranging its threads in warps of 32 threads: each warp operates in lock-step and executes exactly the same instruction on different data.

    Generally, to fill up the GPU, the kernel is launched with many more blocks than can actually be hosted on the SMs. Since not all the blocks can be resident on an SM at once, a work scheduler performs a context switch when a block has finished computing. Note that the switching of blocks is managed entirely in hardware by the scheduler, and the programmer has no means of influencing how blocks are scheduled onto the SMs. This is a limitation for all those algorithms that do not fit the SIMD programming model well and that suffer from work imbalance. Indeed, a block A will not be replaced by another block B on the same SM until the last thread of block A has finished executing.

    Although CUDA does not expose the hardware scheduler to the programmer, the persistent threads style bypasses the hardware scheduler by relying on a work queue. When a block finishes, it checks the queue for more work and continues doing so until no work is left, at which point the block retires. In this way, the kernel is launched with as many blocks as the number of available SMs.

    The persistent threads technique is better illustrated by the following example, which has been taken from the presentation

    “GPGPU” computing and the CUDA/OpenCL Programming Model

    Another more detailed example is available in the paper

    Understanding the efficiency of ray traversal on GPUs

    // Persistent threads: run until all the work is done, with each thread
    // processing multiple work items rather than just one. A thread terminates
    // when no more work is available.

    // count is the number of data items to be processed.
    // read_and_increment() stands for an atomic fetch-and-add on a global
    // queue-head counter (e.g. atomicAdd in CUDA).

    __global__ void persistent(int* ahead, int* bhead, int count, float* a, float* b)
    {
        int local_input_data_index, local_output_data_index;
        while ((local_input_data_index = read_and_increment(ahead)) < count)
        {
            load_locally(a[local_input_data_index]);

            do_work_with_locally_loaded_data();

            local_output_data_index = read_and_increment(bhead);

            write_result(b[local_output_data_index]);
        }
    }

    // Launch exactly enough threads to fill up the machine (to achieve
    // sufficient parallelism and latency hiding)
    persistent<<<numBlocks, blockSize>>>(ahead_addr, bhead_addr, total_count, A, B);
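
    For reference, below is a self-contained sketch of the same idea in plain CUDA. It is only a sketch under some assumptions: the queue head is a device-side counter advanced with atomicAdd (read_and_increment above is presentation pseudocode, not a CUDA primitive), and process() is a placeholder standing in for the actual per-item work.

    #include <cuda_runtime.h>

    // Placeholder for the real per-item computation (assumption).
    __device__ float process(float x) { return x * x; }

    // Persistent kernel: each thread repeatedly claims the next work item
    // from a global counter until the queue is exhausted, then retires.
    __global__ void persistent_sketch(int* head, int count,
                                      const float* in, float* out)
    {
        while (true) {
            int idx = atomicAdd(head, 1);   // atomically claim the next index
            if (idx >= count) break;        // no work left: retire
            out[idx] = process(in[idx]);
        }
    }

    int main()
    {
        const int count = 1 << 20;
        float *d_in, *d_out;
        int *d_head;
        cudaMalloc(&d_in,  count * sizeof(float));
        cudaMalloc(&d_out, count * sizeof(float));
        cudaMalloc(&d_head, sizeof(int));
        cudaMemset(d_head, 0, sizeof(int));
        // ... fill d_in with input data ...

        // Launch just enough blocks to fill the machine (e.g. a couple of
        // blocks per SM) instead of one block per work item.
        int numSMs;
        cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, 0);
        persistent_sketch<<<2 * numSMs, 128>>>(d_head, count, d_in, d_out);
        cudaDeviceSynchronize();

        cudaFree(d_in); cudaFree(d_out); cudaFree(d_head);
        return 0;
    }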
    
  • 2020-12-28 23:45

    Quite easy to understand. Usually each work item processes only a small amount of work. If you want to save work-group switch time, you can let one work item process a lot of work using a loop. For instance, for a 1920x1080 image you can launch 1920 work items, each of which processes one column of 1080 pixels in a loop, as sketched below.
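
    As a minimal sketch of that idea, here is a CUDA version (CUDA rather than OpenCL, to match the example above; the row-major layout and the per-pixel operation are assumptions):

    // One thread per column: each thread loops over the 1080 rows of its
    // column instead of handling a single pixel.
    __global__ void per_column(const float* in, float* out,
                               int width, int height)
    {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (col >= width) return;
        for (int row = 0; row < height; ++row) {
            int i = row * width + col;   // row-major pixel index
            out[i] = in[i] * 0.5f;       // placeholder per-pixel work
        }
    }

    // For a 1920x1080 image, launch one thread per column:
    //   per_column<<<(1920 + 255) / 256, 256>>>(d_in, d_out, 1920, 1080);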
