Fermi L2 cache hit latency?

感情败类 asked on 2021-01-06 09:00

Does anyone know any related information about the L2 cache in Fermi? I have heard that it is as slow as global memory, and that L2 is only used to enlarge the effective memory bandwidth. Bu

2 Answers
  • 2021-01-06 09:10

    This thread on the NVIDIA forums has some measurements of the performance characteristics. While it is not official information, and probably not 100% exact, it at least gives some indication of the behaviour, so I thought it might be useful here. The measurements are in clock cycles; a sketch of how numbers like these are typically obtained follows after the second set of results:

    1020 non-cached (L1 enabled but not used)

    1020 non-cached (L1 disabled)

    365 L2 cached (L1 disabled)

    88 L1 cached (L1 enabled and used)

    Another post in the same thread gives these results:

    1060 non-cached

    248 L2

    18 L1
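
    For what it's worth, numbers like these are usually obtained with a pointer-chasing microbenchmark timed with the GPU's on-chip clock. The code below is a hypothetical, simplified version of such a test, not the code from that thread: the chase kernel, the ring-shaped chain, and the step count are my own illustration. On Fermi you would build it with -arch=sm_20 and run it once with -Xptxas -dlcm=ca (global loads cached in L1 and L2) and once with -Xptxas -dlcm=cg (L2 only), sizing the array so it fits in, or overflows, the cache level you want to hit.

        // Hypothetical pointer-chasing latency sketch (not the thread's code).
        // Each load depends on the previous one, so the loop is latency-bound;
        // clock64() brackets the chase and we report average cycles per load.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void chase(const unsigned int *chain, int steps,
                              unsigned int *sink, long long *cycles)
        {
            unsigned int j = 0;
            long long start = clock64();
            for (int i = 0; i < steps; ++i)
                j = chain[j];                  // dependent load
            long long stop = clock64();
            *sink = j;                         // keep the loop from being optimized away
            *cycles = (stop - start) / steps;  // rough cycles per load (includes loop overhead)
        }

        int main()
        {
            const int n = 1024;                // small enough to stay cache-resident
            unsigned int h_chain[n];
            for (int i = 0; i < n; ++i)
                h_chain[i] = (i + 1) % n;      // simple ring; a real test would use a random
                                               // permutation with a >=128-byte stride to defeat
                                               // spatial locality within a cache line
            unsigned int *d_chain, *d_sink;
            long long *d_cycles;
            cudaMalloc(&d_chain, n * sizeof(unsigned int));
            cudaMalloc(&d_sink, sizeof(unsigned int));
            cudaMalloc(&d_cycles, sizeof(long long));
            cudaMemcpy(d_chain, h_chain, n * sizeof(unsigned int), cudaMemcpyHostToDevice);

            chase<<<1, 1>>>(d_chain, 100000, d_sink, d_cycles);   // single thread: pure latency

            long long cycles;
            cudaMemcpy(&cycles, d_cycles, sizeof(long long), cudaMemcpyDeviceToHost);
            printf("~%lld cycles per dependent load\n", cycles);

            cudaFree(d_chain); cudaFree(d_sink); cudaFree(d_cycles);
            return 0;
        }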

  • 2021-01-06 09:29

    It is not just as slow as global memory. I don't have a source that says so explicitly, but the CUDA programming guide states that "A cache line request is serviced at the throughput of L1 or L2 cache in case of a cache hit, or at the throughput of device memory, otherwise." For that sentence to make any sense the throughputs have to differ, and why would NVIDIA add a cache with the same speed as global memory? On average it would only make things worse, because of cache misses.

    About the exact latency I don't know. The L2 cache is 768 KB and the line size is 128 bytes. Section F.4 of the CUDA programming guide has some more bits of information, especially sections F.4.1 and F.4.2. The guide is available here: http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Programming_Guide.pdf
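
    If you would rather not rely on the 768 KB figure, which is the L2 size of the full GF100 chip (cut-down Fermi parts ship with less), the runtime API reports the L2 size of whatever device you are running on. A minimal sketch, assuming device 0:

        // Query L2 cache size and compute capability via the CUDA runtime.
        #include <cstdio>
        #include <cuda_runtime.h>

        int main()
        {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);        // device 0 assumed
            printf("Device:        %s\n", prop.name);
            printf("L2 cache size: %d bytes\n", prop.l2CacheSize);
            printf("Compute cap.:  %d.%d\n", prop.major, prop.minor);
            return 0;
        }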
