Effect of using page-able memory for asynchronous memory copy?

Asked by 鱼传尺愫 on 2021-02-04 11:53

In CUDA C Best Practices Guide Version 5.0, Section 6.1.2, it is written that:

In contrast with cudaMemcpy(), the asynchronous transfer version requires

1 Answer
  • 2021-02-04 12:00

    cudaMemcpyAsync is fundamentally an asynchronous version of cudaMemcpy. This means that it doesn't block the calling host thread when the copy call is issued. That is the basic behaviour of the call.

    Optionally, if the call is launched into a non-default stream, and if the host memory is a pinned allocation, and the device has a free DMA copy engine, the copy operation can happen while the GPU simultaneously performs another operation: either kernel execution or another copy (in the case of a GPU with two DMA copy engines). If any of these conditions is not satisfied, the operation on the GPU is functionally identical to a standard cudaMemcpy call, i.e. it serialises operations on the GPU, and no simultaneous copy/kernel execution or simultaneous multiple copies can occur. The only difference is that the operation doesn't block the calling host thread.
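    As a hedged illustration of those conditions (not code from the question): the sketch below assumes pinned host buffers allocated with cudaHostAlloc and two non-default streams, so one stream's copy can overlap the other stream's kernel on devices with a free copy engine. The kernel_increment signature, grid, block, bytes and n are assumed from the question's context; the in-place (dA, dA, n) argument usage is hypothetical.

    // Sketch: pinned memory + non-default streams enable copy/kernel overlap
    float *hA, *hB, *dA, *dB;
    cudaHostAlloc(&hA, bytes, cudaHostAllocDefault);   // pinned: eligible for async DMA
    cudaHostAlloc(&hB, bytes, cudaHostAllocDefault);
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);                             // non-default streams
    cudaStreamCreate(&s2);

    cudaMemcpyAsync(dA, hA, bytes, cudaMemcpyHostToDevice, s1);
    cudaMemcpyAsync(dB, hB, bytes, cudaMemcpyHostToDevice, s2);
    kernel_increment<<<grid, block, 0, s1>>>(dA, dA, n);  // may overlap s2's copy
    kernel_increment<<<grid, block, 0, s2>>>(dB, dB, n);  // may overlap s1's D2H below
    cudaMemcpyAsync(hA, dA, bytes, cudaMemcpyDeviceToHost, s1);
    cudaMemcpyAsync(hB, dB, bytes, cudaMemcpyDeviceToHost, s2);
    cudaDeviceSynchronize();

    If the host buffers were pageable instead, the same calls would still return immediately on the host, but the transfers would serialise with the kernels on the GPU.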

    In your example code, the host source and destination memory are not pinned. So the memory transfers cannot overlap with kernel execution (i.e. they serialise operations on the GPU). The calls are still asynchronous on the host. So what you have is functionally equivalent to:

    cudaMemcpy(dPtr1,hPtr1,bytes,cudaMemcpyHostToDevice);
    kernel_increment<<<grid,block>>>(dPtr1,dPtr2,n);
    cudaMemcpy(hPtr2,dPtr2,bytes,cudaMemcpyDeviceToHost);
    

    with the exception that all the calls are asynchronous on the host, so the host thread blocks at the cudaDeviceSynchronize() call rather than at each of the memory transfer calls.

    This is absolutely expected behaviour.
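    To make the blocking point concrete, a minimal sketch (consistent with the answer's description; do_cpu_work is a hypothetical placeholder for host-side work, not from the question):

    cudaMemcpyAsync(dPtr1, hPtr1, bytes, cudaMemcpyHostToDevice, 0);
    kernel_increment<<<grid,block>>>(dPtr1, dPtr2, n);
    cudaMemcpyAsync(hPtr2, dPtr2, bytes, cudaMemcpyDeviceToHost, 0);
    // The host thread reaches this point without waiting for the GPU,
    // so CPU work here can run while the GPU copies and computes...
    do_cpu_work();
    // ...and the host only blocks here, once, rather than at each transfer:
    cudaDeviceSynchronize();

    On the GPU itself, because the host memory is pageable, the three operations above still execute strictly one after another.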
