Can multiple processes share one CUDA context?

Submitted by 无人久伴 on 2020-05-15 09:26:21

Question


This question is a follow-up to Jason R's comment on Robert Crovella's answer to this original question ("Multiple CUDA contexts for one device - any sense?"):

When you say that multiple contexts cannot run concurrently, is this limited to kernel launches only, or does it refer to memory transfers as well? I have been considering a multiprocess design all on the same GPU that uses the IPC API to transfer buffers from process to process. Does this mean that effectively, only one process at a time has exclusive access to the entire GPU (not just particular SMs)? [...] How does that interplay with asynchronously-queued kernels/copies on streams in each process as far as scheduling goes?

Robert Crovella suggested asking this in a new question, but that never happened, so let me do it here.
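
For context, the multi-process design described in the quoted comment relies on CUDA's IPC API (cudaIpcGetMemHandle / cudaIpcOpenMemHandle). A minimal sketch of that pattern is below; the buffer size, file-based handle exchange, and the producer/consumer split are illustrative assumptions, not part of the original question.

```cuda
// ipc_sketch.cu -- hypothetical sketch of sharing a device buffer between processes.
// The producer and consumer would normally be two separate programs.
#include <cstdio>
#include <cuda_runtime.h>

// Producer: allocate a device buffer and export it as an IPC handle.
void producer() {
    float *d_buf = nullptr;
    cudaMalloc(&d_buf, 1 << 20);

    cudaIpcMemHandle_t handle;
    cudaIpcGetMemHandle(&handle, d_buf);

    // The handle is a small POD struct; it must reach the other process via
    // ordinary host IPC (pipe, socket, shared memory). A file is used here
    // purely for illustration.
    FILE *f = fopen("ipc_handle.bin", "wb");
    fwrite(&handle, sizeof(handle), 1, f);
    fclose(f);
    // ... keep d_buf allocated while the consumer uses it ...
}

// Consumer: map the producer's allocation into this process's address space.
void consumer() {
    cudaIpcMemHandle_t handle;
    FILE *f = fopen("ipc_handle.bin", "rb");
    fread(&handle, sizeof(handle), 1, f);
    fclose(f);

    float *d_buf = nullptr;
    cudaIpcOpenMemHandle((void **)&d_buf, handle, cudaIpcMemLazyEnablePeerAccess);

    // d_buf can now be used as a source or destination for kernels and copies
    // issued from this process's own streams.
    cudaIpcCloseMemHandle(d_buf);
}
```
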


Answer 1:


The Multi-Process Service (MPS) is an alternative, binary-compatible implementation of the CUDA API provided by NVIDIA in which work submitted from multiple processes is funneled through a single shared GPU context. This allows, for example, kernels from different processes to run concurrently, as long as no single process's kernels fill the entire GPU by themselves.
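
As a rough illustration of the "does not fill the entire GPU" condition: with the MPS control daemon running (typically started with `nvidia-cuda-mps-control -d`), each process that initializes CUDA becomes an MPS client, and kernels like the one sketched below, launched from two separate processes, can overlap on the device because each launch occupies only a few SMs. The kernel body and launch configuration are assumptions for illustration, not taken from the answer.

```cuda
// small_kernel.cu -- hypothetical sketch; run one copy of this program per process.
#include <cuda_runtime.h>

__global__ void busy(float *out, int iters) {
    // Burn some time so the kernel is long enough to observe overlap.
    float v = (float)threadIdx.x;
    for (int i = 0; i < iters; ++i)
        v = v * 1.000001f + 1.0f;
    out[blockIdx.x * blockDim.x + threadIdx.x] = v;
}

int main() {
    float *d_out = nullptr;
    cudaMalloc(&d_out, 4 * 128 * sizeof(float));

    // Only 4 blocks: the kernel occupies a small fraction of the GPU's SMs,
    // leaving room for kernels submitted by other MPS client processes.
    busy<<<4, 128>>>(d_out, 1 << 24);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    return 0;
}
```
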



Source: https://stackoverflow.com/questions/58747321/can-multiple-processes-share-one-cuda-context
