Multiple processes launching CUDA kernels in parallel

逝去的感伤 2021-02-04 07:02

I know that NVIDIA GPUs with compute capability 2.x or greater can execute up to 16 kernels concurrently. However, my application spawns 7 "processes" and each of these 7 proc

3 Answers
  • 2021-02-04 07:28

    Do you really need separate threads and contexts? I believe best practice is to use one context per GPU, because multiple contexts on a single GPU add significant overhead.

    To execute many kernels concurrently, you should create several CUDA streams in one CUDA context and queue each kernel into its own stream - they will then be executed concurrently, if there are enough resources for it.

    If you need to make the context accessible from several CPU threads, you can use cuCtxPopCurrent() and cuCtxPushCurrent() to pass it around, but only one thread will be able to work with the context at any time.
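    A minimal sketch of the streams approach described above, using the CUDA runtime API (which manages the context implicitly). The kernel, buffer sizes, and launch configuration are placeholders:

```cuda
#include <cuda_runtime.h>

// Placeholder kernel standing in for real per-stream work.
__global__ void myKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int nStreams = 7;       // one stream per logical "process"
    const int n = 1 << 20;
    cudaStream_t streams[nStreams];
    float *buf[nStreams];

    for (int s = 0; s < nStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buf[s], n * sizeof(float));
    }

    // Queue one kernel per stream; launches in different streams
    // may overlap on the device if resources allow.
    for (int s = 0; s < nStreams; ++s)
        myKernel<<<(n + 255) / 256, 256, 0, streams[s]>>>(buf[s], n);

    cudaDeviceSynchronize();      // wait for all streams to finish

    for (int s = 0; s < nStreams; ++s) {
        cudaFree(buf[s]);
        cudaStreamDestroy(streams[s]);
    }
    return 0;
}
```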

  • 2021-02-04 07:34

    A CUDA context is a virtual execution space that holds the code and data owned by a host thread or process. Only one context can ever be active on a GPU with all current hardware.

    So to answer your first question, if you have seven separate threads or processes all trying to establish a context and run on the same GPU simultaneously, they will be serialised and any process waiting for access to the GPU will be blocked until the owner of the running context yields. There is, to the best of my knowledge, no time slicing and the scheduling heuristics are not documented and (I would suspect) not uniform from operating system to operating system.

    You would be better off launching a single worker thread that holds the GPU context and using messaging from the other threads to push work onto the GPU. Alternatively, there is a context migration facility in the CUDA driver API, but that only works with threads from the same process, and the migration mechanism has latency and host CPU overhead.

  • 2021-02-04 07:46

    To add to @talonmies' answer:

    On newer architectures, multiple processes can launch kernels concurrently on the same GPU through the Multi-Process Service (MPS). So this is now definitely possible, which it was not some time ago. For a detailed understanding, read this document:

    https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf
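    As a rough sketch, on Linux the MPS control daemon is typically started like this (the device index is illustrative; see the linked overview for the authoritative setup steps):

```
# Restrict MPS to one GPU and put it in exclusive-process mode (optional)
export CUDA_VISIBLE_DEVICES=0
nvidia-smi -i 0 -c EXCLUSIVE_PROCESS

# Start the MPS control daemon; client processes launched afterwards
# funnel their work through a shared server process on the GPU
nvidia-cuda-mps-control -d

# ... run your 7 processes as usual ...

# Shut the daemon down when finished
echo quit | nvidia-cuda-mps-control
```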

    Additionally, you can look up the maximum number of concurrent kernels supported per CUDA compute capability for different GPUs. Here is a link to that:

    https://en.wikipedia.org/wiki/CUDA#Version_features_and_specifications

    For example, a GPU with CUDA compute capability 7.5 supports a maximum of 128 concurrently resident kernels.
