I added a new GL renderer to my engine, which uses the core profile. While it runs fine on Windows and/or NVIDIA cards, it is about 10 times slower on OS X (
I have managed to get myself the same problem in the following circumstance under OS X Mavericks:

- instanced rendering, using array buffers to give each instance its own modelToWorld and inverseNormal matrices;
- attribute locations specified through layout qualifiers rather than via glGetAttribLocation;
- one of those array buffers left unused in the shader: its location is declared, but the attribute isn't actually read anywhere in the GLSL code (see the sketch after this list).
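For reference, the broken setup looked roughly like the sketch below. The matrix attribute names come from my own code; the specific locations and the rest of the shader are illustrative, not necessarily what you have. The important bit is that inverseNormal is declared with a layout location but never read in main():

```cpp
// Sketch of the problem case (locations and surrounding code are illustrative).
// A per-instance mat4 attribute occupies four consecutive locations.
static const char* vertexSrc = R"GLSL(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 3) in mat4 modelToWorld;   // locations 3..6, divisor 1
layout(location = 7) in mat4 inverseNormal;  // locations 7..10, declared but never used below

uniform mat4 worldToClip;
out vec3 vNormal;

void main()
{
    // inverseNormal is never referenced, so the GLSL compiler is free to
    // optimize it away entirely -- this is the case that triggers the slowdown.
    vNormal     = normal;
    gl_Position = worldToClip * modelToWorld * vec4(position, 1.0);
}
)GLSL";
```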
In this case, a call to glDrawElementsInstanced takes up a LOT of CPU time (under normal circumstances this call uses nearly zero CPU, even when drawing several thousand instances).
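For context, the per-instance matrix buffers were wired up in the usual way; the sketch below shows what I mean, with the function name, buffer handle, and base location being placeholders rather than anything from my actual engine:

```cpp
#include <OpenGL/gl3.h>   // core profile header on OS X

// Sketch: binding an array buffer of per-instance mat4 data to a mat4 attribute.
// Assumes the relevant VAO is already bound; names and locations are placeholders.
void setupInstanceMatrixAttrib(GLuint instanceVBO, GLuint baseLocation)
{
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    // A mat4 attribute is really four vec4 attributes at consecutive locations.
    for (GLuint i = 0; i < 4; ++i) {
        GLuint loc = baseLocation + i;
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE,
                              sizeof(GLfloat) * 16,
                              (const void*)(sizeof(GLfloat) * 4 * i));
        glVertexAttribDivisor(loc, 1);   // advance once per instance
    }
}

// Later, with the VAO bound:
//   glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, instanceCount);
// On Mavericks this call is where the CPU time piles up whenever one of the
// instanced attributes fed this way is declared but unused in the shader.
```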
You can tell you're hitting this specific problem if almost all of the CPU time spent inside glDrawElementsInstanced goes to gleDrawArraysOrElements_ExecCore. Making sure that every array buffer is actually referenced in your shader code brings the CPU time back down to (nearly) zero.
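In my case the fix was simply making the shader genuinely use the attribute (which you usually want for correct normals anyway). Again, this is an illustrative sketch rather than your exact shader:

```cpp
// Illustrative fix: actually read the attribute so the compiler keeps it alive
// and the declared layout location no longer dangles.
static const char* vertexSrcFixed = R"GLSL(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 3) in mat4 modelToWorld;
layout(location = 7) in mat4 inverseNormal;

uniform mat4 worldToClip;
out vec3 vNormal;

void main()
{
    // inverseNormal is now referenced, so the instanced array buffer feeding
    // it is no longer "unused" -- and glDrawElementsInstanced goes back to
    // using nearly zero CPU.
    vNormal     = mat3(inverseNormal) * normal;
    gl_Position = worldToClip * modelToWorld * vec4(position, 1.0);
}
)GLSL";
```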
I suspect this is one of those situations where leaving a variable unused in your GLSL main() leads the compiler to delete every reference to it, leaving you with a dangling reference to an attribute or uniform.
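If you want to catch this at runtime rather than in a profiler: even when you assign locations with layout qualifiers, you can still ask the linked program whether an attribute survived, since one that was optimized out is no longer active and reports -1. A minimal sketch (the function name and usage are mine, not part of any GL API):

```cpp
#include <OpenGL/gl3.h>
#include <cstdio>

// Cheap runtime check for "declared but optimized away" attributes.
// An attribute the GLSL compiler eliminated is not an active attribute,
// so glGetAttribLocation reports -1 even though the source declares it.
void warnIfAttributeWasOptimizedOut(GLuint program, const char* attribName)
{
    GLint loc = glGetAttribLocation(program, attribName);
    if (loc == -1) {
        std::fprintf(stderr,
                     "warning: attribute '%s' was optimized out of program %u; "
                     "any instanced array buffer feeding it is now dangling\n",
                     attribName, program);
    }
}

// e.g. warnIfAttributeWasOptimizedOut(shaderProgram, "inverseNormal");
```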