Why do we not have access to device memory on the host side?

血红的双手 · Submitted on 2019-12-13 06:21:41

Question


I asked a question, Memory allocated using cudaMalloc() is accessable by host or not?, and things are much clearer to me now, but I am still wondering why it is not possible to access a device pointer from the host. My understanding is that the CUDA driver takes care of memory allocation inside GPU DRAM, so this information (the starting address of the allocated memory on the device) could be conveyed to the OS running on the host. It should then be possible to access this device pointer, i.e. the first address of the allocated device memory. What is wrong with my understanding? Kindly help me understand this. Thank you.


Answer 1:


The GPU memory lives on the other side of the PCIE bus. The memory controller for the host memory in modern PC architectures is directly attached to the CPU.

Therefore the access methods are quite a bit different. When accessing memory that is on the GPU, the transaction must be framed as a sequence of PCIE cycles. The activity of setting up the PCIE bus to effect this transaction is not built into an ordinary memory fetch cycle in a modern CPU.

Therefore software interaction is required (handled by cudaMemcpy) to program the cycles on the PCIE bus that send or fetch data residing on the other side of the bus.
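A minimal sketch of this point: the value returned by cudaMalloc is an address in GPU DRAM, so dereferencing it in host code is invalid, and the explicit cudaMemcpy calls are what program the PCIE transfers in each direction. (Error handling is omitted for brevity; this assumes a standard CUDA toolkit install.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int n = 4;
    int host_buf[n] = {1, 2, 3, 4};
    int result[n] = {0};

    int *dev_buf = nullptr;                 // will hold an address in GPU DRAM
    cudaMalloc(&dev_buf, n * sizeof(int));

    // Dereferencing dev_buf here (e.g. dev_buf[0]) is invalid: the address is
    // only meaningful on the device side of the PCIE bus, and the CPU's
    // ordinary memory-fetch cycle cannot reach it.

    // cudaMemcpy programs the PCIE transactions for us, in each direction:
    cudaMemcpy(dev_buf, host_buf, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(result, dev_buf, n * sizeof(int), cudaMemcpyDeviceToHost);

    printf("%d %d %d %d\n", result[0], result[1], result[2], result[3]);

    cudaFree(dev_buf);
    return 0;
}
```

(Mapped pinned memory and managed memory relax this picture, but under the hood the transfers across the bus still happen; they are just issued on your behalf.)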



Source: https://stackoverflow.com/questions/19193159/why-we-do-not-have-access-to-device-memory-on-host-side
