Other reasons for instruction replays in CUDA

Submitted by 南楼画角 on 2020-02-05 07:22:28

Question


This is the output I get from nvprof (CUDA 5.5):

Invocations                 Metric Name              Metric Description         Min         Max         Avg
Device "Tesla K40c (0)"
Kernel: MyKernel(double const *, double const *, double*, int, int, int)
     60            inst_replay_overhead     Instruction Replay Overhead    0.736643    0.925197    0.817188
     60          shared_replay_overhead   Shared Memory Replay Overhead    0.000000    0.000000    0.000000
     60          global_replay_overhead   Global Memory Replay Overhead    0.108972    0.108972    0.108972
     60    global_cache_replay_overhead  Global Memory Cache Replay Ove    0.000000    0.000000    0.000000
     60           local_replay_overhead  Local Memory Cache Replay Over    0.000000    0.000000    0.000000
     60                gld_transactions        Global Load Transactions       25000       25000       25000
     60                gst_transactions       Global Store Transactions       75000       75000       75000
     60  warp_nonpred_execution_efficie  Warp Non-Predicated Execution       99.63%      99.63%      99.63%
     60                       cf_issued  Issued Control-Flow Instructio       44911       45265       45101
     60                     cf_executed  Executed Control-Flow Instruct       39533       39533       39533
     60                     ldst_issued  Issued Load/Store Instructions      273117      353930      313341
     60                   ldst_executed  Executed Load/Store Instructio       50016       50016       50016
     60              stall_data_request  Issue Stall Reasons (Data Requ      65.21%      68.93%      67.86%
     60                   inst_executed           Instructions Executed      458686      458686      458686
     60                     inst_issued             Instructions Issued      789220      879145      837129
     60                     issue_slots                     Issue Slots      716816      803393      759614

The kernel uses 356 bytes of cmem[0] and no shared memory; there are no register spills either. My question is: what is causing the instruction replays in this case? We see an overall replay overhead of 81%, but the individual replay metrics do not add up to it.

Thanks!


Answer 1:


Some possible reasons:

  1. shared memory bank conflicts (which you don't have)
  2. constant memory conflicts (i.e. different threads in a warp requesting different locations in constant memory from the same instruction)
  3. warp-divergent code (if..then..else taking different paths for different threads in a warp)
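To illustrate reason 2, here is a minimal sketch (this is a hypothetical kernel, not the asker's `MyKernel`) of a constant-memory access pattern that serializes within a warp. The constant cache can broadcast only one address per warp per access, so when lanes request different addresses from the same instruction, the load is replayed once per distinct address:

```cuda
#include <cstdio>

// Hypothetical constant-memory table for illustration.
__constant__ double coeff[32];

__global__ void replayDemo(const double *in, double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Each lane reads a DIFFERENT constant-memory address from the
        // same instruction; with 32 distinct addresses per warp, this
        // load can be replayed up to 31 extra times.
        double c = coeff[threadIdx.x % 32];

        // A uniform access (same address for every lane) would avoid
        // the replays entirely:
        //   double c = coeff[0];

        out[i] = c * in[i];
    }
}
```

Profiling both variants under nvprof and comparing `inst_replay_overhead` (and `inst_issued` vs. `inst_executed`) should show the serialized version issuing noticeably more instructions than it executes.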

This presentation may be of interest, especially slides 8-11.



Source: https://stackoverflow.com/questions/22103480/other-reasons-for-instruction-replays-in-cuda
