Access Path in Zero-Copy in OpenCL

广开言路 2020-12-30 14:55

I am a little bit confused about how exactly zero-copy works.

1. I want to confirm that the following corresponds to zero-copy in OpenCL.

 ...........         


        
1 Answer
  • 2020-12-30 15:53

    You are correct in your understanding of how zero-copy works. The basic premise is that you can access either the host memory from the device, or the device memory from the host, without an intermediate buffering step in between.

    You can perform zero-copy by creating buffers with the following flags:

    CL_MEM_USE_PERSISTENT_MEM_AMD // device-resident memory (AMD extension)
    CL_MEM_ALLOC_HOST_PTR         // host-resident memory
    
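    For example, a host-resident zero-copy buffer might be created like this (a sketch; `context` and `size` are assumed to exist already, and error handling is abbreviated):

        #include <CL/cl.h>

        // Sketch: create a zero-copy buffer backed by host memory.
        // CL_MEM_ALLOC_HOST_PTR asks the runtime to allocate host-visible
        // memory that the device can access without an intermediate copy.
        cl_mem create_zero_copy_buffer(cl_context context, size_t size)
        {
            cl_int err;
            cl_mem buffer = clCreateBuffer(context,
                                           CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                           size,  // bytes to allocate
                                           NULL,  // let the runtime allocate
                                           &err);
            if (err != CL_SUCCESS) { /* handle error */ }
            return buffer;
        }

    Note that only `CL_MEM_ALLOC_HOST_PTR` is part of the core OpenCL specification; the persistent-memory flag is an AMD vendor extension.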

    Then, the buffers can be accessed using memory mapping semantics:

    void* p = clEnqueueMapBuffer(queue, buffer, CL_TRUE, CL_MAP_WRITE, 0, size, 0, NULL, NULL, &err);
    //Perform writes to the buffer p
    err = clEnqueueUnmapMemObject(queue, buffer, p, 0, NULL, NULL);
    

    Using zero-copy, you may be able to out-perform an implementation that does the following:

    1. Copy a file to a host buffer
    2. Copy buffer to the device
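    The two-step approach sketched in host code (hypothetical function name; the file is staged in a host buffer, then transferred with `clEnqueueWriteBuffer`):

        #include <CL/cl.h>
        #include <stdio.h>
        #include <stdlib.h>

        // Sketch of the two-step approach: stage the file in a host
        // buffer, then issue a separate transfer to the device.
        void two_step_copy(cl_command_queue queue, cl_mem buffer,
                           size_t size, FILE *f)
        {
            // 1. Copy the file into an intermediate host buffer.
            void *staging = malloc(size);
            fread(staging, 1, size, f);

            // 2. Copy the host buffer to the device (blocking write).
            cl_int err = clEnqueueWriteBuffer(queue, buffer, CL_TRUE,
                                              0, size, staging,
                                              0, NULL, NULL);
            if (err != CL_SUCCESS) { /* handle error */ }
            free(staging);
        }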

    Instead you could do it all in one step

    1. Memory Map device side buffer
    2. Copy file from host to device
    3. Unmap memory
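    The three steps above might look like this in host code (a sketch; `queue`, `buffer`, `size`, and an open `FILE *` are assumed to exist already):

        #include <CL/cl.h>
        #include <stdio.h>

        // Sketch of the map / copy / unmap flow described above.
        void copy_file_to_device(cl_command_queue queue, cl_mem buffer,
                                 size_t size, FILE *f)
        {
            cl_int err;

            // 1. Map the device-side buffer into the host address space.
            void *p = clEnqueueMapBuffer(queue, buffer, CL_TRUE, CL_MAP_WRITE,
                                         0, size, 0, NULL, NULL, &err);
            if (err != CL_SUCCESS || p == NULL) return;

            // 2. Copy the file contents straight into the mapped region.
            fread(p, 1, size, f);

            // 3. Unmap. On a true zero-copy path this is (nearly) free;
            //    otherwise the runtime performs the actual transfer here.
            err = clEnqueueUnmapMemObject(queue, buffer, p, 0, NULL, NULL);
            if (err != CL_SUCCESS) { /* handle error */ }
        }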

    On some implementations, the map and unmap calls merely hide the cost of the data transfer. In our example:

    1. Memory Map device side buffer [Actually creates a host-side buffer of the same size]
    2. Copy file from host to device [Actually writes to the host-side buffer]
    3. Unmap memory [Actually copies data from host-buffer to device-buffer via clEnqueueWriteBuffer]

    If the implementation behaves this way, there is no benefit to the mapping approach. However, AMD's newer OpenCL drivers allow the data to be written directly, making the cost of mapping and unmapping almost zero. For discrete graphics cards, the requests still travel over the PCIe bus, so data transfers can be slow.

    On an APU, however, zero-copy semantics can greatly increase transfer speeds thanks to the APU's unique architecture (pictured below), in which the PCIe bus is replaced by the Unified North Bridge (UNB), allowing faster transfers.

    BE AWARE that when using zero-copy semantics with memory mapping, you will see absolutely horrendous bandwidths when reading a device-side buffer from the host. These bandwidths are on the order of 0.01 Gb/s and can easily become a new bottleneck in your code.
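    If you do need to read a device-resident buffer back on the host, an explicit blocking read is usually a safer pattern than mapping it (a sketch; function name is hypothetical):

        #include <CL/cl.h>
        #include <stdlib.h>

        // Sketch: read a device-side buffer back with an explicit,
        // blocking read instead of mapping it from the host, avoiding
        // the very low mapped-read bandwidth mentioned above.
        void read_back(cl_command_queue queue, cl_mem buffer, size_t size)
        {
            void *host_copy = malloc(size);
            cl_int err = clEnqueueReadBuffer(queue, buffer, CL_TRUE,
                                             0, size, host_copy,
                                             0, NULL, NULL);
            if (err != CL_SUCCESS) { /* handle error */ }
            /* ... use host_copy ... */
            free(host_copy);
        }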

    Sorry if this is too much information. This was my thesis topic.

    [Image: APU Architecture]
