Question
I am using OpenGL compute shaders to do some calculations on data. Everything works fine, except that it does not seem to be possible to run one shader for much more than 10 seconds.
I measure the time with glBeginQuery(...) and glEndQuery(...). The shader runs fine for anything between 1 ms and 10 seconds. To increase the shader's runtime I simply add more data, without adding any shader invocations. But I cannot add more data once the shader needs slightly more than 10 seconds: the program freezes and I cannot do anything else. The highest value I measured was 11.02 seconds.
So, is there a time limit for compute shaders? Or is there something obvious I am doing wrong?
Some additional information: I work on a notebook with an NVIDIA GT 555M. I use Bumblebee and launch Qt Creator with optirun so my program runs on the NVIDIA card.
If you need more information to help me, please just ask; I do not know what else is relevant.
Answer 1:
It sounds like you're hitting Windows' Timeout Detection and Recovery limit.
Usually the default for this is 2 seconds, not 10 or 11, but it's possible something or someone has modified your registry.
Take a look here and see whether your registry keys are set up such that you would get a 10/11-second TDR.
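For reference, the TDR settings Microsoft documents live under the `GraphicsDrivers` key. A minimal fragment (values are my illustration, not read from your machine; `TdrDelay` is in seconds and defaults to 2):

```
; HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers
;
; TdrDelay    (REG_DWORD)  seconds the GPU may be unresponsive before
;                          Windows attempts a preempt/reset (default 2)
; TdrDdiDelay (REG_DWORD)  seconds the OS waits for the driver to
;                          respond during recovery
; TdrLevel    (REG_DWORD)  0 disables detection entirely; 3 (default)
;                          enables full timeout detection and recovery
```

A `TdrDelay` around 10 would explain the ~11-second freeze point you are seeing.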
If this is your own app that you only intend to run on your own machine then feel free to increase this limit to whatever you like. However if you intend to have other people run your application then don't expect anyone else to modify their registry settings to accommodate your application.
You should be breaking up your Compute work into manageable chunks of work that take (at most) tens of milliseconds, not thousands.
EDIT:
Since you've commented to say that you're using Linux, the NVIDIA driver has an option to override the 'watchdog' timer.
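If I remember the NVIDIA Linux driver README correctly, that option is called `Interactive` and goes in the `Device` section of xorg.conf; the identifier below is just a placeholder:

```
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    # Setting Interactive to False disables the driver's watchdog,
    # which otherwise terminates long-running GPU computations.
    Option     "Interactive" "False"
EndSection
```

As with the registry change on Windows, this is only reasonable on a machine you control; do not ship an application that expects users to have disabled their watchdog.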
Source: https://stackoverflow.com/questions/29745168/is-there-a-time-border-for-opengl-compute-shaders