OpenCL - Multiple GPU Buffer Synchronization

梦毁少年i · 2021-01-06 13:48

I have an OpenCL kernel that calculates the total force on a particle exerted by the other particles in the system, and another kernel that integrates the particle positions/velocities. I would like to run these across multiple GPUs, but each GPU ends up holding its own buffer of particle data. What is the best way to synchronize buffers across GPUs, given that each GPU has a different buffer?

1 Answer
  • 2021-01-06 14:08

    It sounds like you are having implementation trouble.

    There's a great presentation from SIGGRAPH that shows a few different ways to utilize multiple GPUs with shared memory. The slides are here.

    I imagine that, in your current setup, you have a single context containing multiple devices with multiple command queues. This is probably the right way to go, for what you're doing.
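
    A minimal sketch of that setup in host-side C (error checking omitted; it assumes the first platform exposes at least two GPUs, and buf_size is a placeholder for your particle buffer's size):

        #include <CL/cl.h>

        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, NULL);

        cl_device_id devices[2];
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 2, devices, NULL);

        /* One context containing both devices... */
        cl_context ctx = clCreateContext(NULL, 2, devices, NULL, NULL, NULL);

        /* ...and one command queue per device (OpenCL 1.2 API). */
        cl_command_queue q0 = clCreateCommandQueue(ctx, devices[0], 0, NULL);
        cl_command_queue q1 = clCreateCommandQueue(ctx, devices[1], 0, NULL);

        /* A single buffer created from this context can be used by kernels
           enqueued on either queue. */
        cl_mem particles_buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                              buf_size, NULL, NULL);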

    Appendix A of the OpenCL 1.2 specification says that:

    OpenCL memory objects, [...] are created using a context and can be shared across multiple command-queues created using the same context.

    Further:

    The application needs to implement appropriate synchronization across threads on the host processor to ensure that the changes to the state of a shared object [...] happen in the correct order [...] when multiple command-queues in multiple threads are making changes to the state of a shared object.

    So it would seem to me that your kernel that calculates particle position and velocity needs to depend on your kernel that calculates the inter-particle forces. It sounds like you already know that.
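
    One way to express that dependency across the two queues is with an OpenCL event (a sketch; force_kernel, integrate_kernel, and num_particles stand in for your own kernels and problem size):

        cl_event force_done;
        size_t global = num_particles;

        /* Force calculation on device 0's queue, producing an event. */
        clEnqueueNDRangeKernel(q0, force_kernel, 1, NULL, &global, NULL,
                               0, NULL, &force_done);
        clFlush(q0); /* submit the work so the cross-queue wait can complete */

        /* Integration on device 1's queue waits for the force kernel. */
        clEnqueueNDRangeKernel(q1, integrate_kernel, 1, NULL, &global, NULL,
                               1, &force_done, NULL);
        clReleaseEvent(force_done);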

    To put things more in terms of your question:

    What is the best way to synchronize buffers across GPUs, given that each GPU has a different buffer?

    ... I think the answer is "don't have the buffers be separate." Use the same cl_mem object between two devices by having that cl_mem object come from the same context.

    As for where the data actually lives... as you pointed out, that's implementation-defined (at least as far as I can tell from the spec). You probably shouldn't worry about where the data is living, and just access the data from both command queues.

    I realize this could create some serious performance concerns. Implementations will likely evolve and get better, so if you write your code according to the spec now, it'll probably run better in the future.

    Another thing you could try, in order to get better (or at least different) buffer-sharing behavior, would be to map the particle buffer into host memory (clEnqueueMapBuffer) instead of explicitly reading and writing it.
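
    For instance, instead of copying the buffer back with clEnqueueReadBuffer, you could map it (a sketch, reusing the placeholders particles_buf and buf_size from above):

        cl_int err;
        /* Blocking map: ptr is host-accessible memory backed by the buffer. */
        float *ptr = (float *)clEnqueueMapBuffer(q0, particles_buf, CL_TRUE,
                                                 CL_MAP_READ | CL_MAP_WRITE,
                                                 0, buf_size,
                                                 0, NULL, NULL, &err);

        /* ... inspect or update particle data through ptr ... */

        /* Unmap to hand the region back to the device(s). */
        clEnqueueUnmapMemObject(q0, particles_buf, ptr, 0, NULL, NULL);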

    If it's any help, our setup (a bunch of nodes with dual C2070s) seems to share buffers fairly optimally. Sometimes the data is kept on only one device; other times it exists on both.

    All in all, I think the answer here is to do it in the best way the spec provides and hope for the best in terms of implementation.

    I hope I was helpful,

    Ryan
