Speed of Pascal CUDA8 1080Ti unified memory

無奈伤痛 2021-01-15 14:42

Thanks to the answers here yesterday, I think I now have a correct basic test of unified memory using a Pascal 1080Ti. It allocates a 50GB single-dimension array and adds it up.

2 Answers
  • 2021-01-15 15:15

    In this blog post from November 2013: https://devblogs.nvidia.com/parallelforall/unified-memory-in-cuda-6/ NVIDIA writes:

    An important point is that a carefully tuned CUDA program that uses streams and cudaMemcpyAsync to efficiently overlap execution with data transfers may very well perform better than a CUDA program that only uses Unified Memory. Understandably so: the CUDA runtime never has as much information as the programmer does about where data is needed and when! CUDA programmers still have access to explicit device memory allocation and asynchronous memory copies to optimize data management and CPU-GPU concurrency. Unified Memory is first and foremost a productivity feature that provides a smoother on-ramp to parallel computing, without taking away any of CUDA’s features for power users.

    Also in March 2014: https://devblogs.nvidia.com/parallelforall/cudacasts-episode-18-cuda-6-0-unified-memory/

    CUDA 6 introduces Unified Memory, which dramatically simplifies memory management for GPU computing. Now you can focus on writing parallel kernels when porting code to the GPU, and memory management becomes an optimization.

    Now, in CUDA 8 there were some improvements to the Unified Memory mechanism: https://devblogs.nvidia.com/parallelforall/cuda-8-features-revealed/. In particular, they say:

    An important point is that CUDA programmers still have the tools they need to explicitly optimize data management and CPU-GPU concurrency where necessary: CUDA 8 introduces useful APIs for providing the runtime with memory usage hints (cudaMemAdvise()) and for explicit prefetching (cudaMemPrefetchAsync()). These tools allow the same capabilities as explicit memory copy and pinning APIs without reverting to the limitations of explicit GPU memory allocation.

    So it appears that your example may be sped up using cudaMemAdvise() / cudaMemPrefetchAsync(). However, even with this, explicit memory management may still have a performance edge.
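
    To make that concrete, here is a minimal sketch (not the questioner's code; the kernel, sizes, and device id are illustrative assumptions) of how a managed allocation might be hinted and prefetched before a kernel launch, so the kernel does not have to page-fault the data in:

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add_one(float *data, size_t n)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    int main()
    {
        const size_t n = 1ull << 28;              // ~1 GiB of floats, illustrative size
        const size_t bytes = n * sizeof(float);
        const int dev = 0;
        cudaSetDevice(dev);

        float *data = nullptr;
        cudaMallocManaged(&data, bytes);
        for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // first touch on the host

        // Hint that the GPU is the preferred home for this range, then prefetch
        // it to the device so the kernel does not page-fault the data in.
        cudaMemAdvise(data, bytes, cudaMemAdviseSetPreferredLocation, dev);
        cudaMemPrefetchAsync(data, bytes, dev);           // default stream

        add_one<<<(unsigned)((n + 255) / 256), 256>>>(data, n);

        // Prefetch back before the CPU reads the result.
        cudaMemPrefetchAsync(data, bytes, cudaCpuDeviceId);
        cudaDeviceSynchronize();
        printf("data[0] = %f\n", data[0]);

        cudaFree(data);
        return 0;
    }
    ```

    cudaMemPrefetchAsync() also takes a stream argument, so the migration can be issued on a non-default stream and overlapped with independent work, which is the closest Unified Memory analogue to an explicit cudaMemcpyAsync() pipeline.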

    Added by OP:

    Performance through data locality By migrating data on demand between the CPU and GPU, Unified Memory can offer the performance of local data on the GPU, while providing the ease of use of globally shared data. The complexity of this functionality is kept under the covers of the CUDA driver and runtime, ensuring that application code is simpler to write. The point of migration is to achieve full bandwidth from each processor; the 750 GB/s of HBM2 memory bandwidth is vital to feeding the compute throughput of a GP100 GPU. With page faulting on GP100, locality can be ensured even for programs with sparse data access, where the pages accessed by the CPU or GPU cannot be known ahead of time, and where the CPU and GPU access parts of the same array allocations simultaneously.

    and

    Pascal also improves support for Unified Memory thanks to a larger virtual address space and a new page migration engine, enabling higher performance, oversubscription of GPU memory, and system-wide atomic memory operations.

  • 2021-01-15 15:28

    The page faulting process is clearly more complicated than a pure copy of data. As a result, when you drive data to the GPU by page-faulting, it cannot compete performance-wise with a pure copy of the data.

    Page faulting essentially introduces another kind of latency for the GPU to deal with. The GPU is a latency-hiding machine, but it needs the programmer to give it the opportunity to hide latency, which roughly means exposing enough parallel work.

    On the surface of it, you seem to have exposed a lot of parallel work (~12B elements in your dataset). But the work intensity per byte or element retrieved is quite small, so the GPU still has limited opportunity to hide the latency associated with page-faulting here. Stated another way, the GPU has an instantaneous capacity for latency hiding based on the maximum complement of threads that can be in flight on that GPU (upper bound: 2048 * # of SMs) and the work exposed in each thread. Unfortunately, the work exposed in each thread in your example is trivially small: a single addition, basically.
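
    To make that concrete, the kind of per-thread workload being described looks roughly like this (a sketch, not the questioner's actual kernel; names are illustrative):

    ```cuda
    // Illustrative only: each thread does one load, one add, and one store,
    // so there is very little work with which to hide page-fault latency.
    __global__ void add_scalar(float *data, size_t n, float v)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n)
            data[i] += v;
    }
    ```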

    One of the ways to help with GPU latency hiding is increasing the work per thread, and there are various techniques to do this. A good starting point would be to choose an algorithm (if possible) that has a high compute complexity. Matrix-matrix multiply is the classical example of large compute complexity per element of data.
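
    For illustration, one common technique for giving each thread more work is a grid-stride loop, where each thread walks through many elements (a sketch with illustrative names, not a prescription for this particular problem):

    ```cuda
    // Sketch: the same element-wise operation, but each thread processes many
    // elements via a grid-stride loop, so a modest grid covers the whole array.
    __global__ void add_scalar_strided(float *data, size_t n, float v)
    {
        size_t stride = (size_t)blockDim.x * gridDim.x;
        for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x; i < n; i += stride)
            data[i] += v;
    }
    ```

    This amortizes launch and indexing overhead, but it does not change the low arithmetic intensity of the problem itself; as noted above, an algorithm with high compute complexity per element is where latency hiding really pays off.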

    The suggestion in this case would be to recognize that what you are trying to do is quite orderly, and therefore not that difficult to manage from a programming point of view: break the work up into pieces and manage the data transfers yourself, as sketched below. This allows you to achieve approximately full utilization of the host->device link bandwidth for the data transfer operations, and (to a very small extent for this example) overlap of copy and compute. For a straightforward, easily decomposable problem like this one, it makes sense for the programmer not to use UM/oversubscription/page-faulting.
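
    A minimal sketch of that partitioned approach, assuming pinned host memory, two device buffers, and two streams (sizes and names are illustrative, not taken from the question):

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add_scalar(float *chunk, size_t n, float v)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) chunk[i] += v;
    }

    int main()
    {
        const size_t n_total = 1ull << 28;             // total elements, illustrative
        const size_t n_chunk = 1ull << 24;             // elements per chunk, illustrative
        const size_t chunk_bytes = n_chunk * sizeof(float);

        float *h_data = nullptr;                        // pinned host buffer holding the full array
        cudaMallocHost(&h_data, n_total * sizeof(float));
        for (size_t i = 0; i < n_total; ++i) h_data[i] = 1.0f;

        // Two device buffers and two streams: while one chunk is being processed,
        // the next chunk's copy can be in flight on the other stream.
        float *d_buf[2];
        cudaStream_t stream[2];
        for (int s = 0; s < 2; ++s) {
            cudaMalloc(&d_buf[s], chunk_bytes);
            cudaStreamCreate(&stream[s]);
        }

        for (size_t off = 0, c = 0; off < n_total; off += n_chunk, ++c) {
            int s = (int)(c & 1);                       // alternate buffers/streams
            size_t n = (n_total - off < n_chunk) ? (n_total - off) : n_chunk;
            cudaMemcpyAsync(d_buf[s], h_data + off, n * sizeof(float),
                            cudaMemcpyHostToDevice, stream[s]);
            add_scalar<<<(unsigned)((n + 255) / 256), 256, 0, stream[s]>>>(d_buf[s], n, 1.0f);
            // Copy the processed chunk back; for an actual sum, only a small
            // partial result per chunk would need to return to the host.
            cudaMemcpyAsync(h_data + off, d_buf[s], n * sizeof(float),
                            cudaMemcpyDeviceToHost, stream[s]);
        }
        cudaDeviceSynchronize();
        printf("h_data[0] = %f (expected 2.0)\n", h_data[0]);

        for (int s = 0; s < 2; ++s) { cudaFree(d_buf[s]); cudaStreamDestroy(stream[s]); }
        cudaFreeHost(h_data);
        return 0;
    }
    ```

    Because operations within a stream are serialized, each device buffer is safely reused every other chunk, while the copy for one chunk can overlap the kernel for the previous chunk on the other stream.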

    The place where this methodology (UM/oversubscription/page-faulting) may shine is an algorithm where it's difficult for the programmer to predict the access pattern ahead of time. Traversal of a large graph (which cannot all fit in GPU memory at once) might be an example. If you had a graph traversal problem with a large amount of work per edge traversal, then the cost of page-faulting as you hop from node to node in the graph might not be a big deal, and the simplification of the programming effort (not having to manage graph data movement explicitly) might be worth the cost.

    Regarding prefetching, it's questionable whether it would be of much use here, even if it were available. Prefetching still essentially depends on having something else to do while the prefetch request is in flight. With such a low amount of work per data item, it's not clear that a clever prefetching scheme would provide much benefit for this example. One can imagine clever, complicated prefetching strategies, but the effort is probably better spent crafting a partitioned, explicit data-transfer scheme for a problem like this one.
