CUDA - Coalescing memory accesses and bus width


Question


My understanding of coalescing memory accesses in CUDA is that the threads in a warp should access contiguous memory addresses, since that causes only a single memory transaction (with the value at each address delivered to the corresponding thread) instead of multiple transactions performed serially.

Now, my bus width is 48 bytes. This means I can transfer 48 bytes in each memory transaction, right? So, to take full advantage of the bus, I would need to be able to read 48 bytes at a time (by reading more than one byte per thread - memory transactions are executed by a warp). However, hypothetically, wouldn't having a single thread read 48 bytes at a time provide the same advantage (I'm assuming that I can read 48 bytes at once by reading a structure whose size is 48 bytes)?
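To make the two patterns concrete, here is a minimal sketch of what I mean (the struct and kernel names are just illustrative):

    // Pattern 1: coalesced - consecutive threads read consecutive
    // 4-byte words, so a warp's 32 loads can merge into few wide
    // transactions.
    __global__ void coalescedRead(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = 2.0f * in[i];
    }

    // Pattern 2: one thread reads a whole 48-byte structure.
    // Neighboring threads then touch addresses 48 bytes apart,
    // so the warp's accesses span many more memory segments.
    struct Item { float v[12]; };   // 12 * 4 bytes = 48 bytes

    __global__ void structRead(const Item *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            Item it = in[i];        // 48 bytes pulled in by one thread
            float s = 0.0f;
            for (int k = 0; k < 12; ++k)
                s += it.v[k];
            out[i] = s;
        }
    }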

My problem with coalescing is the transposing I have to do on the data. I have lots of data, so transposing it takes time that I would rather spend on something else if I could.

I'm on Compute Capability 2.0.


Answer 1:


The memory bus of your GPU isn't simply 48 bytes wide (which would be quite cumbersome, as it is not a power of 2). Instead, it is composed of 6 memory channels of 8 bytes (64 bits) each. Memory transactions are usually much wider than the channel width in order to take advantage of the memory's burst mode. Good transaction sizes start at 64 bytes, which produces a size-8 burst and matches nicely with the 16 32-bit words of a half-warp on compute capability 1.x devices.

128-byte-wide transactions are still a bit faster, and match the warp-wide 32-bit word accesses of compute capability 2.0 (and higher) devices. Cache lines are also 128 bytes wide to match. Note that all of these accesses must be aligned on a multiple of the transaction width in order to map to a single memory transaction.

Now, regarding your actual problem: the best thing probably is to do nothing and let the cache sort it out. This works the same way as explicitly staging the data in shared memory, except that the cache hardware does it for you with no code needed, which should make it slightly faster. The only thing to worry about is having enough cache available, so that each warp can use the necessary 32×32×4 bytes = 4 kB of cache for word-wide (e.g. float) accesses, or 8 kB for double accesses. This means it can be beneficial to limit the number of warps that are active at the same time, to prevent them from thrashing each other's cache lines.
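As an illustration of the "do nothing" approach, a naive transpose kernel would simply look like this (a sketch, assuming row-major storage):

    // Naive transpose relying on the cache: reads along a row are
    // coalesced, writes along a column are strided, and the L1/L2
    // cache absorbs much of the cost of the strided side.
    __global__ void transposeNaive(const float *in, float *out,
                                   int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            out[x * height + y] = in[y * width + x];
    }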

For special optimizations there is also the possibility of using vector types like float2 or float4, as all CUDA-capable GPUs have load and store instructions that move 8 or 16 bytes into the same thread. However, on compute capability 2.0 and higher I don't really see any advantage in using them for the general matrix-transpose case, as they increase the cache footprint of each warp even more.
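For example, a simple copy kernel using float4 (a sketch; pointers returned by cudaMalloc already have the 16-byte alignment that float4 accesses require):

    // Each thread moves 16 bytes with a single vector load and a
    // single vector store (ld.global.v4.f32 / st.global.v4.f32).
    __global__ void copyVec4(const float4 *in, float4 *out, int n4)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n4)
            out[i] = in[i];
    }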

As the default setting of 16 kB cache / 48 kB shared memory allows only four warps per SM to perform the transpose at any one time (provided you have no other memory accesses happening simultaneously), it is probably beneficial to select the 48 kB cache / 16 kB shared memory configuration over the default 16 kB/48 kB split using cudaDeviceSetCacheConfig().
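In host code this is a one-liner (transposeNaive below stands in for whatever kernel you launch):

    // Prefer 48 kB L1 cache / 16 kB shared memory, device-wide:
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

    // Or only for a specific kernel, if other kernels still need
    // the larger shared-memory partition:
    cudaFuncSetCacheConfig(transposeNaive, cudaFuncCachePreferL1);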

For completeness, I'll also mention that the warp shuffle instructions introduced with compute capability 3.0 allow exchanging register data within a warp without going through the cache or shared memory. See Appendix B.14 of the CUDA C Programming Guide for details.
(Note that a version of the Programming Guide exists without this appendix, so if in your copy Appendix B.13 is about something else, reload it through the link provided.)
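A minimal sketch of such an exchange (this uses the __shfl_xor_sync spelling introduced with CUDA 9; on the older toolkits contemporary with compute capability 3.0 the intrinsic was called __shfl_xor, without the mask argument):

    __global__ void warpSwap(float *data)
    {
        float v = data[threadIdx.x];
        // Each lane trades its value with the lane whose index
        // differs in bit 0, entirely within registers - no shared
        // memory or cache traffic involved.
        data[threadIdx.x] = __shfl_xor_sync(0xffffffff, v, 1);
    }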




Answer 2:


For purposes of coalescing, as you stated, you should focus on making the 32 threads in a warp access contiguous locations, preferably 32-byte or 128-byte aligned as well. Beyond that, don't worry about the physical address bus to the DRAM. The memory controller is composed of mostly independent partitions that are each 64 bits wide, and your coalesced access coming out of the warp will be satisfied as quickly as possible by the memory controller. A single coalesced access for a full warp (32 threads) reading an int or float requires 128 bytes to be retrieved anyway, i.e. multiple transactions on the physical bus to DRAM. When you are operating in caching mode, you can't really control the granularity of requests to global memory below 128 bytes at a time anyway.

It's not possible to cause a single thread to request 48 bytes or anything like that in a single transaction. Even if at the C code level you think you are accessing a whole data structure at once, at the machine-code level this gets converted into instructions that read 32 or 64 bits at a time.
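You can see this yourself: a 48-byte struct assignment that looks like a single access in C compiles into a series of narrower loads and stores (a sketch; the type S is made up for illustration):

    struct S { float v[12]; };       // 48 bytes

    __device__ S d_src, d_dst;

    __global__ void copyStruct()
    {
        d_dst = d_src;  // one statement in C, but a sequence of
                        // 32/64/128-bit loads and stores in the
                        // generated machine code; inspect it with
                        //   cuobjdump -sass <binary>
    }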

If you feel that the caching granularity of 128 bytes at a time is penalizing your code, you can try running in uncached mode, which reduces the granularity of global memory requests to 32 bytes at a time. If you have a scattered (poorly coalesced) access pattern, this option may give better performance.
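On compute capability 2.x this is a compile-time switch rather than a source-level one; global loads bypass L1 and are cached in L2 only (32-byte granularity) when you build with the -dlcm=cg option (kernel.cu stands for your source file):

    nvcc -Xptxas -dlcm=cg kernel.cu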



Source: https://stackoverflow.com/questions/12589416/cuda-coalescing-memory-accesses-and-bus-width
