GPUDirect

How to use GPUDirect RDMA with InfiniBand

Submitted by 若如初见 on 2019-12-18 06:55:16
Question: I have two machines, each with multiple Tesla cards and an InfiniBand card. I want to communicate between GPUs on different machines over InfiniBand; point-to-point unicast would be fine. I certainly want to use GPUDirect RDMA so I can spare myself extra copy operations. I am aware that Mellanox now provides a driver for its InfiniBand cards, but it doesn't come with a detailed development guide. I am also aware that OpenMPI supports the feature I am asking about, but OpenMPI is too heavyweight for this simple task, and it…
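One common route, sketched below under stated assumptions, is to use the ibverbs API directly: with Mellanox OFED and the nv_peer_mem (now nvidia-peermem) kernel module loaded, a buffer allocated with cudaMalloc can be registered with ibv_reg_mr just like host memory, and the HCA then DMAs straight to and from GPU memory. This is only a minimal sketch, not a complete RDMA program; queue-pair setup, rkey exchange, and error handling are omitted.

/* Minimal sketch (not a complete RDMA program): register GPU memory
 * directly with ibverbs so the HCA can DMA to/from it (GPUDirect RDMA).
 * Assumes Mellanox OFED and the nv_peer_mem / nvidia-peermem kernel
 * module are installed; error handling is abbreviated. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate the buffer in GPU memory, not host memory. */
    void *gpu_buf = NULL;
    size_t len = 1 << 20;                 /* 1 MiB */
    cudaSetDevice(0);
    cudaMalloc(&gpu_buf, len);

    /* With GPUDirect RDMA, the device pointer itself is registered;
     * no staging copy through host memory is needed. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { fprintf(stderr, "ibv_reg_mr on GPU memory failed\n"); return 1; }

    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    /* ... create a QP, exchange rkey/address with the peer, then post
     *     IBV_WR_RDMA_WRITE / IBV_WR_RDMA_READ work requests as usual ... */

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}

If ibv_reg_mr fails on a device pointer, the usual culprit is that the peer-memory kernel module is not loaded; registration of ordinary host memory would still succeed in that case.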

Can I use GPUDirect v2 Peer-to-Peer communication between two Quadro K1100M or two GeForce GT 745M?

Submitted by 依然范特西╮ on 2019-12-11 08:46:04
Question: Can I use GPUDirect v2 peer-to-peer communication on a single PCIe bus between two mobile NVIDIA Quadro K1100M cards, or between two mobile NVIDIA GeForce GT 745M cards?

Answer 1: In general, if you want to find out whether GPUDirect peer-to-peer is supported between two GPUs, you can run the simple P2P CUDA sample code, or, in your own code, test availability with the cudaDeviceCanAccessPeer runtime API call. Note that P2P support may vary by GPU or GPU family. The ability to run P2P on one GPU…
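As a minimal illustration of that runtime check, the sketch below (device ordinals 0 and 1 are placeholders) queries cudaDeviceCanAccessPeer in both directions and, if both report support, enables peer access and performs a direct device-to-device copy.

/* Sketch: query and enable CUDA peer-to-peer access between devices 0 and 1.
 * Device ordinals are placeholders; adjust for your system. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);   /* can device 0 access device 1? */
    cudaDeviceCanAccessPeer(&can10, 1, 0);   /* and the reverse direction */
    printf("P2P 0->1: %s, 1->0: %s\n", can01 ? "yes" : "no", can10 ? "yes" : "no");

    if (can01 && can10) {
        void *buf0, *buf1;
        size_t bytes = 1 << 20;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);    /* flags must be 0 */
        cudaMalloc(&buf0, bytes);

        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        cudaMalloc(&buf1, bytes);

        /* With peer access enabled, this copy goes directly over PCIe
         * between the two devices, without staging through host memory. */
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
    }
    return 0;
}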

CUDA: GPUDirect on GeForce GTX 690

Submitted by 匆匆过客 on 2019-12-11 04:52:52
Question: The GeForce GTX 690 (from vendors like Zotac and EVGA) can be used for CUDA programming, much like a Tesla K10. Does the GeForce GTX 690 support GPUDirect? Specifically: if I were to use two GTX 690 cards, I would have 4 GPUs (two GPUs within each card). If I connect both GTX 690 cards to the same PCIe switch, will GPUDirect work for communication between any pair of the 4 GPUs? Thanks.

Answer 1: According to the requirements stated here, it is necessary to have Tesla-series GPUs. So…
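For a multi-GPU box like the one described (4 GPUs across two GTX 690 cards), a quick way to see what the driver actually reports is to print the peer-access matrix; a minimal sketch, with no assumptions beyond the standard CUDA runtime API:

/* Sketch: print the peer-access matrix for all visible CUDA devices,
 * e.g. the 4 GPUs exposed by two GTX 690 cards. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("found %d CUDA devices\n", n);

    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, i, j);
            printf("peer access %d -> %d : %s\n", i, j, ok ? "yes" : "no");
        }
    }
    return 0;
}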

Does AMD's OpenCL offer something similar to CUDA's GPUDirect?

Submitted by 点点圈 on 2019-11-30 07:06:20
NVIDIA offers GPUDirect to reduce memory-transfer overheads. I'm wondering if there is a similar concept for AMD/ATI. Specifically: 1) Do AMD GPUs avoid the second memory transfer when interfacing with network cards, as described here? In case the graphic is lost at some point, here is a description of GPUDirect's impact on getting data from a GPU on one machine transferred across a network interface: with GPUDirect, GPU memory goes to host memory and then straight to the network interface card; without GPUDirect, GPU memory goes to host memory in one address space, then the CPU has to…
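To make that "second memory transfer" concrete in CUDA terms (the question itself is about the AMD/OpenCL side, which is not shown here), the sketch below contrasts the two host-side paths described above; nic_buf is a hypothetical stand-in for a buffer the network stack has registered for DMA.

/* Sketch of the two host-side paths described above. nic_buf is a
 * hypothetical stand-in for a buffer the network stack has registered
 * for DMA; in a real program it would come from the network API. */
#include <stdlib.h>
#include <string.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t len = 1 << 20;
    void *gpu_buf;
    cudaMalloc(&gpu_buf, len);

    void *nic_buf;                            /* pretend the NIC DMAs from here */
    posix_memalign(&nic_buf, 4096, len);

    /* Path 1 - without GPUDirect: GPU -> CUDA staging buffer, then the
     * CPU copies again into the NIC's buffer (the "second transfer"). */
    void *staging;
    cudaMallocHost(&staging, len);            /* pinned buffer owned by CUDA */
    cudaMemcpy(staging, gpu_buf, len, cudaMemcpyDeviceToHost);
    memcpy(nic_buf, staging, len);            /* extra CPU copy */

    /* Path 2 - with GPUDirect (shared pinned buffer): pin the NIC's own
     * buffer for CUDA and copy into it directly, no extra CPU memcpy. */
    cudaHostRegister(nic_buf, len, cudaHostRegisterDefault);
    cudaMemcpy(nic_buf, gpu_buf, len, cudaMemcpyDeviceToHost);
    cudaHostUnregister(nic_buf);

    cudaFreeHost(staging);
    cudaFree(gpu_buf);
    free(nic_buf);
    return 0;
}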
