How can a __global__ function RETURN a value or BREAK out like C/C++ does?

夕颜 2020-12-28 20:37

Recently I've been doing string comparison jobs in CUDA, and I wonder how a __global__ function can return a value when it finds the exact string that I'm looking for.

3 Answers
  • 2020-12-28 21:19

    A __global__ function doesn't really contain a great number of threads the way you might think it does. It is simply a kernel, a function that runs on the device, that is called with launch parameters specifying the thread model. The model CUDA employs is a grid of blocks (2D in the classic model, up to 3D on newer GPUs) with a 3D thread model inside each block on the grid.

    With the type of problem you have, it is not really necessary to use anything besides a 1D grid with a 1D arrangement of threads in each block, because a string pool doesn't really make sense to split into 2D like other problems do (e.g. matrix multiplication).

    I'll walk through a simple example: say there are 100 strings in the string pool and you want them all checked in a parallelized fashion instead of sequentially.

    //main
    //Should cudaMalloc and cudaMemcpy to device before this code
    dim3 dimGrid(10, 1);   // 1D grid with 10 blocks
    dim3 dimBlocks(10, 1); // 1D blocks with 10 threads each
    fun<<<dimGrid, dimBlocks>>>(d_strings, d_stringToMatch, d_answerIdx);
    //cudaMemcpy answerIdx back to an integer on the host
    
    //kernel (not positive on these types, as my CUDA is very rusty)
    __global__ void fun(char *strings[], char *stringToMatch, int *answerIdx)
    {
        int idx = blockIdx.x * 10 + threadIdx.x;
    
        //Obviously use whatever function you've been using for string comparison;
        //== is just for example's sake (it compares pointers, not characters)
        if(strings[idx] == stringToMatch)
        { 
           *answerIdx = idx;
        }
    } 
    

    This is obviously not the most efficient approach, and is most likely not the exact way to pass parameters and work with memory in CUDA, but I hope it gets the point across of splitting the workload: __global__ functions get executed on many different cores, so you can't really tell them all to stop. There may be a way I'm not familiar with, but the speed-up you will get by just dividing the workload onto the device (in a sensible fashion, of course) will already give you tremendous performance improvements. To get a sense of the thread model, I highly recommend reading the CUDA documentation on NVIDIA's site. It will help tremendously and teach you the best way to set up the grid and blocks for optimal performance.
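    To make the host-side setup concrete, here is a minimal sketch of the allocation and copy steps glossed over above. Everything here is an assumption for illustration: the buffer names, the sentinel value, and the fixed-length layout (one flat buffer instead of an array of device pointers, which is simpler to copy):

    ```cuda
    // Hypothetical host setup: 100 fixed-length strings of MAX_LEN bytes,
    // flattened into one device buffer. Error checking omitted for brevity.
    #define NUM_STRINGS 100
    #define MAX_LEN 64

    char *d_pool;       // all strings, back to back
    char *d_target;     // the string to match
    int  *d_answerIdx;  // index of the match, or -1

    cudaMalloc(&d_pool, NUM_STRINGS * MAX_LEN);
    cudaMalloc(&d_target, MAX_LEN);
    cudaMalloc(&d_answerIdx, sizeof(int));

    cudaMemcpy(d_pool, h_pool, NUM_STRINGS * MAX_LEN, cudaMemcpyHostToDevice);
    cudaMemcpy(d_target, h_target, MAX_LEN, cudaMemcpyHostToDevice);
    int noMatch = -1;   // sentinel so the host can tell "no match" from index 0
    cudaMemcpy(d_answerIdx, &noMatch, sizeof(int), cudaMemcpyHostToDevice);

    fun<<<dim3(10, 1), dim3(10, 1)>>>(d_pool, d_target, d_answerIdx);

    int answerIdx;
    cudaMemcpy(&answerIdx, d_answerIdx, sizeof(int), cudaMemcpyDeviceToHost);
    ```

    With a flat buffer the kernel would compare `d_pool + idx * MAX_LEN` against `d_target` character by character, rather than comparing pointers.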

  • 2020-12-28 21:28

    There is no way in CUDA (or on NVIDIA GPUs) for one thread to interrupt execution of all running threads. You can't have the kernel exit immediately as soon as a result is found; it's just not possible today.

    But you can have all threads exit as soon as possible after one thread finds a result. Here's a model of how you would do that.

    __global__ void kernel(volatile bool *found, ...) 
    {
        while (!(*found) && workLeftToDo()) {
    
           bool iFoundIt = do_some_work(...); // see notes below
    
           if (iFoundIt) *found = true;
        }
    }
    

    Some notes on this.

    1. Note the use of volatile. This is important.
    2. Make sure you initialize found (which must be a device pointer) to false before launching the kernel!
    3. Threads will not exit instantly when another thread updates found. They will exit only the next time they return to the top of the while loop.
    4. How you implement do_some_work matters. If it is too much work (or too variable), then the delay to exit after a result is found will be long (or variable). If it is too little work, then your threads will be spending most of their time checking found rather than doing useful work.
    5. do_some_work is also responsible for allocating tasks (i.e. computing/incrementing indices), and how you do that is problem specific.
    6. If the number of blocks you launch is much larger than the maximum occupancy of the kernel on the present GPU, and a match is not found in the first running "wave" of thread blocks, then this kernel (and the one below) can deadlock. If a match is found in the first wave, then later blocks will only run after found == true, which means they will launch, then exit immediately. The solution is to launch only as many blocks as can be resident simultaneously (aka "maximal launch"), and update your task allocation accordingly.
    7. If the number of tasks is relatively small, you can replace the while with an if and run just enough threads to cover the number of tasks. Then there is no chance for deadlock (but the first part of the previous point applies).
    8. workLeftToDo() is problem-specific, but it would return false when there is no work left to do, so that we don't deadlock in the case that no match is found.
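    Notes 5, 6, and 8 can be combined into one sketch. This is an assumption-laden illustration, not code from the answer: the global counter `nextTask` and the `do_some_work` signature are hypothetical, but the pattern (threads claim tasks from an atomic counter, so you can launch only as many blocks as can be resident at once) is the "maximal launch" idea note 6 describes.

    ```cuda
    // Sketch: per-thread task allocation via a global counter, so the launch
    // can be sized to what fits on the GPU without risking deadlock.
    __device__ unsigned int nextTask;   // cudaMemset to 0 before launching

    __global__ void kernel(volatile bool *found, unsigned int numTasks)
    {
        while (!(*found)) {
            // atomicAdd returns the previous value: a unique task index
            unsigned int task = atomicAdd(&nextTask, 1);
            if (task >= numTasks) break;        // plays the role of workLeftToDo()

            bool iFoundIt = do_some_work(task); // problem-specific, hypothetical
            if (iFoundIt) *found = true;
        }
    }
    ```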

    Now, the above may result in excessive partition camping (all threads banging on the same memory), especially on older architectures without L1 cache. So you might want to write a slightly more complicated version, using a shared status per block.

    __global__ void kernel(volatile bool *found, ...) 
    {
        volatile __shared__ bool someoneFoundIt;
    
        // initialize shared status
        if (threadIdx.x == 0) someoneFoundIt = *found;
        __syncthreads();
    
        while(!someoneFoundIt && workLeftToDo()) {
    
           bool iFoundIt = do_some_work(...); 
    
           // if I found it, tell everyone they can exit
           if (iFoundIt) { someoneFoundIt = true; *found = true; }
    
           // if someone in another block found it, tell 
           // everyone in my block they can exit
           if (threadIdx.x == 0 && *found) someoneFoundIt = true;
    
           __syncthreads();
        }
    }
    

    This way, one thread per block polls the global variable, and only threads that find a match ever write to it, so global memory traffic is minimized.

    Aside: __global__ functions are void because it's difficult to define how to return values from 1000s of threads into a single CPU thread. It is trivial for the user to contrive a return array in device or zero-copy memory which suits his purpose, but difficult to make a generic mechanism.
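    As one concrete way to contrive such a return slot, a single zero-copy result can be set up with the standard mapped-memory API. The variable names here are made up for illustration:

    ```cuda
    // Sketch: a host-mapped ("zero-copy") result the kernel writes directly,
    // so no explicit cudaMemcpy is needed to read it back on the host.
    int *h_answerIdx, *d_answerIdx;

    cudaSetDeviceFlags(cudaDeviceMapHost);   // must precede other CUDA calls
    cudaHostAlloc(&h_answerIdx, sizeof(int), cudaHostAllocMapped);
    cudaHostGetDevicePointer(&d_answerIdx, h_answerIdx, 0);

    *h_answerIdx = -1;                 // sentinel: no match yet
    kernel<<<grid, block>>>(d_answerIdx /* , ... */);
    cudaDeviceSynchronize();           // make the kernel's writes visible
    // *h_answerIdx now holds the matching index, or -1
    ```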

    Disclaimer: Code written in browser, untested, unverified.

  • 2020-12-28 21:33

    If you feel adventurous, an alternative approach to stopping kernel execution would be to just execute

    // (write result to memory here)
    __threadfence();
    asm("trap;");
    

    if an answer is found.

    This doesn't require polling memory, but it is inferior to the solution that Mark Harris suggested in that it makes the kernel exit with an error condition. This may mask actual errors (so be sure to write out your results in a way that clearly allows you to tell a successful execution from an error), and it may cause other hiccups or decrease overall performance, as the driver treats this as an exception.

    If you look for a safe and simple solution, go with Mark Harris' suggestion instead.
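    On the host side, the trap approach means the launch itself reports a failure, so success has to be judged from the result written out before the trap. A rough sketch (all names assumed; the result is read from host-mapped memory because the device context is generally unusable after a trap):

    ```cuda
    // Sketch: host handling of the intentional trap. Assumes *h_result is
    // zero-copy memory the kernel wrote (after __threadfence()) before trapping.
    kernel<<<grid, block>>>(d_result);
    cudaError_t err = cudaDeviceSynchronize(); // the trap surfaces as a launch failure

    if (*h_result >= 0) {
        // a match was found; the "error" is the expected, intentional trap
    } else if (err != cudaSuccess) {
        // a genuine launch failure, not our trap
    }
    cudaDeviceReset(); // the context is corrupted after a trap; reset before further CUDA work
    ```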
