Displaying images on a computer monitor involves using a graphics API, which dispatches a series of asynchronous calls... and, at some later time, puts the requested content on the screen.
But what if you are interested in knowing the exact CPU time at the point where the required image is fully drawn (and visible to the user)?
I really need to grab a CPU timestamp at the moment everything is displayed, so I can relate this point in time to other measurements I take.
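For reference, the kind of timestamp I mean is a monotonic, high-resolution clock reading. A minimal sketch on Linux, assuming `clock_gettime` with `CLOCK_MONOTONIC` as the clock source (any comparable high-resolution timer would do):

```c
/* Minimal sketch: grab a monotonic, high-resolution CPU timestamp on Linux.
 * CLOCK_MONOTONIC is an assumption; CLOCK_MONOTONIC_RAW would also work.
 * On older glibc, link with -lrt. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("timestamp: %lld.%09ld s\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
```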
Even leaving aside the asynchronous behavior of the graphics stack, several things can cause the duration of the graphics calls to jitter:
- multi-threading;
- sync to V-BLANK (unfortunately required to avoid tearing);
- what else have I forgotten? :P
I'm targeting a solution on Linux, but I'm open to any other OS. I've already studied parts of the XVideo extension for the X.org server and the OpenGL API, but I haven't found an effective solution yet.
I only hope the solution doesn't involve hacking into video drivers / hardware!
Note: I won't be able to use the recent NVIDIA G-SYNC technology on the required hardware. Although this technology would get rid of some of the unpredictable jitter, I don't think it would completely solve the issue.
OpenGL Wiki suggests the following: "If GPU<->CPU synchronization is desired, you should use a high-precision/multimedia timer rather than glFinish after a buffer swap."
Does somebody know how to properly grab such a high-precision/multimedia timer value just after the buffer-swap call has completed in the GPU queue?
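To make the problem concrete, here is roughly what the naive approach looks like. This is only a sketch: `dpy` and `win` stand in for whatever GLX display and drawable the application already has, and the drawing code is elided. The timestamp taken here is misleading, because `glXSwapBuffers` merely queues the swap:

```c
/* Naive sketch (hypothetical): timestamp taken right after queuing the swap.
 * 'dpy' and 'win' are placeholders for an already-initialized GLX display
 * and drawable. glXSwapBuffers returns as soon as the swap is queued, so
 * this timestamp does NOT mean the frame is actually on screen. */
#include <GL/glx.h>
#include <time.h>

void draw_and_timestamp(Display *dpy, GLXDrawable win, struct timespec *out)
{
    /* ... issue the drawing commands for the frame here ... */

    glXSwapBuffers(dpy, win);            /* asynchronous: only queues the swap */
    clock_gettime(CLOCK_MONOTONIC, out); /* taken too early to be meaningful */
}
```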
Recent OpenGL provides sync/fence objects. You can place a fence in the OpenGL command stream and later wait until the GPU has passed it. See http://www.opengl.org/wiki/Sync_Object
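For illustration, a minimal sketch of that approach under GLX, assuming an OpenGL >= 3.2 (or ARB_sync-capable) context; the function name and the one-second timeout are my own choices:

```c
/* Sketch: fence the command stream right after the buffer swap, then block
 * until the GPU has passed the fence, and only then grab the CPU timestamp.
 * Assumes a current OpenGL >= 3.2 (or ARB_sync) context. */
#define GL_GLEXT_PROTOTYPES  /* exposes glFenceSync & co. in Mesa headers;
                                otherwise load them via GLEW/glad or
                                glXGetProcAddress */
#include <GL/glx.h>
#include <stdio.h>
#include <time.h>

void swap_and_timestamp(Display *dpy, GLXDrawable win)
{
    glXSwapBuffers(dpy, win);

    /* Fence placed after the swap; wait up to 1 second for the GPU. */
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                     1000000000u /* ns */);
    glDeleteSync(fence);

    if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("GPU passed the fence at %lld.%09ld s\n",
               (long long)ts.tv_sec, ts.tv_nsec);
    }
}
```

One caveat: the fence signals when the GPU has executed the queued commands, including the swap; whether that also covers the vblank-synchronized flip itself is driver-dependent, so it is worth verifying on the target stack.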
Source: https://stackoverflow.com/questions/19772758/how-do-you-know-what-youve-displayed-is-completely-drawn-on-screen