How to pack bits (efficiently) in CUDA?

Submitted by 痴心易碎 on 2019-11-29 12:17:10

The __ballot() warp-voting function comes in quite handy for this. Assuming that you can redefine pOutput to be of type uint32_t *, and that your block size is a multiple of the warp size (32):

// Each bit of the ballot result holds one lane's predicate.
unsigned int target = __ballot(packing[tid]);
// One thread per warp writes the packed 32 bits.
if (tid % warpSize == 0) {
    pOutput[(tid + blockDim.x*blockIdx.x) / warpSize] = target;
}
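
On CUDA 9 and later the plain __ballot() intrinsic is deprecated in favour of __ballot_sync(), which takes an explicit mask of participating lanes. A minimal sketch of the same idea with the sync intrinsic, assuming the whole warp is converged at this point:

// CUDA 9+ spelling: pass an explicit member mask.
// 0xffffffff assumes all 32 lanes of the warp reach this call together.
unsigned int target = __ballot_sync(0xffffffffu, packing[tid]);
if (tid % warpSize == 0) {
    pOutput[(tid + blockDim.x*blockIdx.x) / warpSize] = target;
}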

Strictly speaking, the if conditional isn't even necessary, as all threads of the warp will write the same data to the same address. So a highly optimized version would just be

pOutput[(tid + blockDim.x*blockIdx.x) / warpSize] = __ballot(packing[tid]);

For two bits per thread, using uint2 *pOutput:

int lane = tid % warpSize;
uint2 target;
// Each thread fetches the 2-bit value of the lane it represents (lane / 2 for
// target.x, lane / 2 + 16 for target.y) and isolates bit 0 (even lanes) or
// bit 1 (odd lanes) before voting.
target.x = __ballot(__shfl(packing[tid], lane / 2)                & ((lane & 1) + 1));
target.y = __ballot(__shfl(packing[tid], lane / 2 + warpSize / 2) & ((lane & 1) + 1));
pOutput[(tid + blockDim.x*blockIdx.x) / warpSize] = target;
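
As a usage sketch, a complete kernel built around the two-bit variant might look like the following. The kernel name, the assumption that packing holds one 2-bit value per thread indexed by the global thread index, and the use of the CUDA 9+ sync-suffixed intrinsics with a full-warp mask are all illustrative choices, not part of the original answer:

// Sketch: pack two bits per thread into one uint2 per warp.
__global__ void packTwoBits(const unsigned int *packing, uint2 *pOutput)
{
    const unsigned int mask = 0xffffffffu;            // assumes a full, converged warp
    int gtid = threadIdx.x + blockDim.x * blockIdx.x; // global thread index (assumed layout)
    int lane = threadIdx.x % warpSize;

    unsigned int myVal = packing[gtid];               // this thread's 2-bit value

    // Each thread reads the value of the lane whose bits it will contribute,
    // masks out bit 0 (even lanes) or bit 1 (odd lanes), and votes; bit 'lane'
    // of each ballot is that thread's contribution.
    uint2 target;
    target.x = __ballot_sync(mask, __shfl_sync(mask, myVal, lane / 2)                & ((lane & 1) + 1));
    target.y = __ballot_sync(mask, __shfl_sync(mask, myVal, lane / 2 + warpSize / 2) & ((lane & 1) + 1));

    // Every lane computes the same result; the redundant store per warp is harmless.
    pOutput[gtid / warpSize] = target;
}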

You'll have to benchmark whether this is still faster than your conventional solution.
