Access/synchronization to local memory

Submitted by ☆樱花仙子☆ on 2020-01-17 06:22:29

Question


I'm pretty new to GPGPU programming. I'm trying to implement an algorithm that needs a lot of synchronization, so it uses only one work-group (global and local size have the same value).

I have the following problem: my program works correctly until the problem size exceeds 32.

__kernel void assort(
    __global float *array,
    __local float *currentOutput,
    __local float *stimulations,
    __local int *noOfValuesAdded,
    __local float *addedValue,
    __local float *positionToInsert,
    __local int *activatedIdx,
    __local float *range,
    int size,
    __global float *stimulationsOut)
{
    int id = get_local_id(0);
    if (id == 0) {...}

    barrier(CLK_LOCAL_MEM_FENCE);

    for (int i = 2; i < size; i++)
    {
        int maxIdx;
        if (id == 0)
        {
            addedValue[0] = array[i];
            {...}
        }
        barrier(CLK_LOCAL_MEM_FENCE);

        if (id < noOfValuesAdded[0]) {...}
        else
            barrier(CLK_LOCAL_MEM_FENCE);
        barrier(CLK_LOCAL_MEM_FENCE);

        if (activatedIdx[0] == -2) {...}
        else {...}

        barrier(CLK_LOCAL_MEM_FENCE);
        if (positionToInsert[0] != -1.0f) {...}

        barrier(CLK_LOCAL_MEM_FENCE);
    }
    stimulationsOut[id] = addedValue[0];
    return;
}

After some investigation (by inspecting stimulationsOut), I realized that addedValue[0] has a different value from the 33rd work-item onward, and yet another value from the 65th (so it looks like [123 123 123 ... 123 (33rd element) 66 66 66 ... 66 (65th element) 127 ... 127 ...]).

__global float *array is READ_ONLY, and I do not change addedValue[0] outside the first if in the for loop. What could cause this issue?

My GPU specs: https://devtalk.nvidia.com/default/topic/521502/gt650m-a-kepler-part-/

After commenting out the bodies of these two ifs, the problem no longer occurs:

/*if (activatedIdx[0] == -2)
{
    if (noOfValuesAdded[0] == 2)
    {
        positionToInsert[0] = 0.99f;
    }
    else if (id != 0 && id != maxIdx
             && stimulations[id] >= stimulations[(id - 1)]
             && stimulations[id] >= stimulations[(id + 1)])
    {
        if ((1.0f - (fabs((currentOutput[(id - 1)] - currentOutput[id])) / range[0])) < stimulations[(id - 1)])
            positionToInsert[0] = (float)id - 0.01f;
        else
            positionToInsert[0] = (float)id + 0.99f;
    }
}*/

and

    if (positionToInsert[0] != -1.0f) 
    {
        float temp = 0.0f;
        /*if ((float)id>positionToInsert[0]) 
        {
            temp = currentOutput[id];
            barrier(CLK_LOCAL_MEM_FENCE);
            currentOutput[id + 1] = temp;
        }
        else 
        {
            barrier(CLK_LOCAL_MEM_FENCE);
        }*/
        barrier(CLK_LOCAL_MEM_FENCE);

        if (id == round(positionToInsert[0])) 
        {
            currentOutput[id] = addedValue[0];
            noOfValuesAdded[0] = noOfValuesAdded[0] + 1;
        }
    }

Update: After fixing the barriers, the algorithm works properly until the size exceeds 768 (which, oddly, is twice the number of cores on my GPU). I was expecting it to work for up to 1024 elements, which is the maximum work-group size. Am I missing something?


Answer 1:


All work-items in a warp execute the same instruction in lock-step, and the warp size on Nvidia is 32 work-items. If the kernel works correctly only up to 32 work-items, this suggests there is something wrong with the barriers.

The docs for barrier say:

All work-items in a work-group executing the kernel on a processor must execute this function before any are allowed to continue execution beyond the barrier.

I can see this being the issue in your kernel. For example here:

if ((float)id>positionToInsert[0]) 
{
    temp = currentOutput[id];
    barrier(CLK_LOCAL_MEM_FENCE); // <---- some work items may meet here
    currentOutput[id + 1] = temp;
}
else 
{
    barrier(CLK_LOCAL_MEM_FENCE); // <---- other work items may meet here
}

You could probably fix this by:

if ((float)id>positionToInsert[0]) 
    temp = currentOutput[id];
barrier(CLK_LOCAL_MEM_FENCE); // <---- here all work items meet at the same barrier
if ((float)id>positionToInsert[0]) 
    currentOutput[id + 1] = temp;
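
Applying the same pattern to the commented-out shift block from the question, a possible restructuring (a sketch only, reusing the kernel's own variable names) would be:

```c
// Sketch: hoist the barriers out of the divergent branch so every
// work-item in the group reaches the same barrier call sites.
// positionToInsert[0] is in local memory, so the outer condition is
// uniform across the work-group and may safely contain barriers.
if (positionToInsert[0] != -1.0f)
{
    float temp = 0.0f;
    bool shifting = (float)id > positionToInsert[0];

    if (shifting)
        temp = currentOutput[id];     // read the old value first

    barrier(CLK_LOCAL_MEM_FENCE);     // all work-items meet here

    if (shifting)
        currentOutput[id + 1] = temp; // then write the shifted value

    barrier(CLK_LOCAL_MEM_FENCE);

    if (id == (int)round(positionToInsert[0]))
    {
        currentOutput[id] = addedValue[0];
        noOfValuesAdded[0] = noOfValuesAdded[0] + 1;
    }
}
```

The key idea is to split each divergent read-barrier-write sequence into read, a single shared barrier, then write, so no work-item waits at a barrier call site that other work-items skip.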



Answer 2:


After fixing the barriers, the algorithm works properly until the size exceeds 768 (which, oddly, is twice the number of cores on my GPU). I was expecting it to work for up to 1024 elements, which is the maximum work-group size. Am I missing something?
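
One thing worth checking: the device-wide maximum (CL_DEVICE_MAX_WORK_GROUP_SIZE, 1024 here) is an upper bound, but the effective limit for a *particular* kernel can be lower when the kernel uses many registers or a lot of local memory. The per-kernel limit can be queried with clGetKernelWorkGroupInfo. A minimal host-side sketch (assuming a `cl_kernel` and `cl_device_id` have already been created elsewhere):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Sketch: compare the device-wide work-group limit with the
 * per-kernel limit, which may be smaller when the kernel's
 * register or local-memory usage is high. */
void print_work_group_limits(cl_kernel kernel, cl_device_id device)
{
    size_t device_max = 0, kernel_max = 0;

    clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(device_max), &device_max, NULL);
    clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                             sizeof(kernel_max), &kernel_max, NULL);

    /* If kernel_max reports e.g. 768 while device_max is 1024,
     * the kernel's resource usage is the limiting factor. */
    printf("device max: %zu, this kernel's max: %zu\n",
           device_max, kernel_max);
}
```

If the kernel's limit really is 768 on this GT 650M, enqueueing a local size above that would fail or misbehave even though the device advertises 1024.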



Source: https://stackoverflow.com/questions/41027783/access-synchronization-to-local-memory
