Let's say I have two species, such as humans and ponies. They have different skeletal systems, so the uniform bone array will have to be different for each species. Do I have to write a separate shader for each species, or is there a way to declare a uniform array whose size is determined at run time?
Until OpenGL 4.3, arrays in GLSL had to have a fixed, compile-time size. GL 4.3 introduced shader storage buffer objects (SSBOs), whose last member may be an array of unbounded length. Basically, you can do this:
buffer BlockName
{
    mat4 manyManyMatrices[];
};
OpenGL will figure out how many matrices are in this array at runtime based on how you use glBindBufferRange. So you can still use manyManyMatrices.length()
to get the length, but it won't be a compile-time constant.
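For instance, a skinning vertex shader might look something like this (a minimal sketch; the block name and the per-vertex bone attributes are made up for illustration):

#version 430

layout(std430, binding = 0) buffer BoneBlock
{
    mat4 bones[];   // length comes from the buffer range you bind
};

in vec4 position;       // hypothetical vertex attributes
in ivec4 boneIndices;
in vec4 boneWeights;

void main()
{
    // blend the four bone matrices that influence this vertex
    mat4 skin = mat4(0.0);
    for (int i = 0; i < 4; i++)
        skin += boneWeights[i] * bones[boneIndices[i]];
    gl_Position = skin * position;
}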
However, this feature is (at the time of this edit) very new and only implemented in beta drivers. It also requires GL 4.x-class hardware (i.e., Direct3D 11-class hardware). Lastly, since it uses a shader storage block, accessing the data may be slower than one might hope.
As such, I would suggest that you just use a uniform block sized for the largest number of matrices you will ever need. If that becomes a memory issue (unlikely), you can split your shaders based on array size, or use shader storage blocks, or whatever.
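Something along these lines (a sketch; the cap of 128 and the block and uniform names are arbitrary):

const int MAX_BONES = 128;  // pick the largest skeleton you support

uniform BoneBlock
{
    mat4 bones[MAX_BONES];
};

uniform int boneCount;  // how many entries are actually valid this draw

Each species then fills in as many entries as it has bones and simply never indexes the rest.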
You can use n-by-1 textures as a replacement for arrays. Texture size can be specified at run time. I use this approach for passing an arbitrary number of lights to my shaders, and I'm surprised how fast it runs despite the many loops and branches. For an example, see the polygon.f shader file in jogl3.glsl.nontransp in the jReality sources.
uniform sampler2D sys_globalLights;
uniform int sys_numGlobalDirLights;
uniform int sys_numGlobalPointLights;
uniform int sys_numGlobalSpotLights;
...
// directional and point lights take 3 texels each, spot lights 5
int lightTexSize = sys_numGlobalDirLights*3 + sys_numGlobalPointLights*3 + sys_numGlobalSpotLights*5;
for (int i = 0; i < sys_numGlobalDirLights; i++) {
    // texel 3*i+1 holds the direction of the i-th directional light;
    // the +0.5 samples the texel center
    vec4 dir = texture(sys_globalLights, vec2((3*i + 1 + 0.5)/lightTexSize, 0));
    ...
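The same trick works for the bone matrices in the question. A minimal sketch of the general pattern (the names boneTex, boneCount, and fetchBone are made up here; texelFetch is used instead of texture to skip the normalized-coordinate arithmetic):

uniform sampler2D boneTex;  // e.g. RGBA32F, width = 4 * boneCount, height = 1
uniform int boneCount;

mat4 fetchBone(int i)
{
    // each matrix is packed as 4 consecutive texels, one column per texel
    return mat4(texelFetch(boneTex, ivec2(4*i + 0, 0), 0),
                texelFetch(boneTex, ivec2(4*i + 1, 0), 0),
                texelFetch(boneTex, ivec2(4*i + 2, 0), 0),
                texelFetch(boneTex, ivec2(4*i + 3, 0), 0));
}

The application then reallocates the texture whenever the bone count changes, which is exactly the run-time sizing that plain uniform arrays can't do.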