I'm trying to create a procedural animation engine for a simple 2D game that would let me create nice-looking animations out of a small number of images (similar to this approach, but for 2D: http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach)
At the moment I have keyframes which hold data for different animation objects. The keyframes are arrays of floats representing the following:
translateX, translateY, scaleX, scaleY, rotation (degrees)
I'd like to add skewX, skewY, taperTop, and taperBottom to this list, but I'm having trouble properly rendering them.
This was my attempt at implementing a taper to the top of the sprite to give it a trapezoid shape:
float[] vert = sprite.getVertices();
vert[5] += 20;   // top-left vertex x co-ordinate
vert[10] -= 20;  // top-right vertex x co-ordinate
batch.draw(texture, vert, 0, vert.length);
Unfortunately this is producing some weird texture morphing.
I had a bit of a Google and a look around StackOverflow and found this, which appears to be the problem I'm having:
http://www.xyzw.us/~cass/qcoord/
However, I don't understand the maths behind it (what are s, t, r and q?).
Can someone explain it a bit simpler?
Basically, the less a quad resembles a rectangle, the worse it looks, because the texture coordinates are linearly interpolated across the shape. The two triangles that make up the quad are stretched to different sizes, so linear interpolation makes the seam between them very noticeable.
The texture coordinates of each vertex are linearly interpolated for each fragment that the fragment shader processes. Texture coordinates typically are stored with the size of the object already divided out, so the coordinates are in the range of 0-1, corresponding with the edges of the texture (and values outside this range are clamped or wrapped around). This is also typically how any 3D modeling program exports meshes.
With a trapezoid, we can limit the distortion by pre-multiplying the texture coordinates by the width and then dividing the width back out of the texture coordinates after linear interpolation. This is like curving the diagonal between the two triangles such that its slope is more horizontal at the corner that is on the wider side of the trapezoid.
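To make that concrete with some made-up numbers: say the bottom edge of the trapezoid is twice as wide as the top edge, so the top vertices get q = 1, the bottom vertices get q = 2, and each vertex stores (u*q, v*q, q). Halfway along the diagonal from the top-left vertex (u = 0) to the bottom-right vertex (u = 1):

stored u*q at the two ends:   0*1 = 0   and   1*2 = 2
stored q at the two ends:     1         and   2
interpolated halfway:         u*q = (0 + 2) / 2 = 1.0,   q = (1 + 2) / 2 = 1.5
after the divide:             u = 1.0 / 1.5 ≈ 0.67   (plain interpolation gives 0.5)

The sample gets pulled toward the wider bottom edge, which is what a projective mapping of the whole quad would produce, so the texture gradient no longer changes abruptly at the diagonal.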
Texture coordinates are usually expressed as a 2D vector with components U and V, also known as S and T. But if you want to divide the size out of the components, you need one more component that you are going to divide by after interpolation, and this is called the Q component. (The P component would be used as the third position in the texture if you were looking up something in a 3D texture instead of a 2D texture).
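As an aside, since the question asks what s, t, r and q actually are: in GLSL they are just alternative names for the components of a vector. A small illustration (not code you need anywhere in particular):

vec4 texCoord = vec4(0.25, 0.75, 0.0, 1.0);
float s = texCoord.s;   // same as texCoord.x
float t = texCoord.t;   // same as texCoord.y
float p = texCoord.p;   // same as texCoord.z (fixed-function OpenGL called this "r"; GLSL renamed it to "p" so it doesn't clash with the r of rgba)
float q = texCoord.q;   // same as texCoord.w -- the divisor we care about here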
Now here comes the hard part... libGDX's SpriteBatch doesn't support the extra vertex attribute necessary for the Q component. So you can either clone SpriteBatch and carefully go through and modify it to have an extra component in the texCoord attribute, or you can try to re-purpose the existing color attribute, although it's stored as unsigned bytes.
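If you do go the clone-SpriteBatch route, the heart of the change is how its internal Mesh declares the texture-coordinate attribute. Very roughly, something like the sketch below (not the full constructor; maxSprites is just a stand-in for the batch size, and the batch's vertex-size constants and draw() methods would need updating to match):

// declare four components for the texture coordinate so Q can ride along with U and V
Mesh mesh = new Mesh(false, maxSprites * 4, maxSprites * 6,
        new VertexAttribute(VertexAttributes.Usage.Position, 2, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(VertexAttributes.Usage.ColorPacked, 4, ShaderProgram.COLOR_ATTRIBUTE),
        new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 4, ShaderProgram.TEXCOORD_ATTRIBUTE + "0"));
// each vertex then becomes: x, y, packedColor, u*q, v*q, 0, q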
Regardless, you will need texture coordinates that have been pre-multiplied by the widths. One way to simplify this is, instead of using the actual widths of the quad at the four vertices, to use the ratio of the bottom width to the top width of the trapezoid, so we can treat the top edge as having a width of 1 and therefore leave its vertices alone.
float bottomWidth = taperBottom / taperTop;
Then you need to modify the TextureRegion's existing texture coordinates to pre-multiply them by the widths. We can leave the vertices on the top side of the trapezoid alone because of the above simplification, but the U and V coordinates of the two bottom vertices need to be multiplied by bottomWidth. You would need to recalculate them and put them into your vertex array every time you change the TextureRegion or one of the taper values.
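Here is a sketch of that recalculation, assuming 'region' is the TextureRegion being drawn and taperTop/taperBottom are the two edge widths (re-run it whenever the region or either taper value changes; the vertex order and the unused third component follow the attribute layout sketched above):

float bottomWidth = taperBottom / taperTop;   // bottom width relative to a top width of 1
float u = region.getU(), u2 = region.getU2();
float v = region.getV(), v2 = region.getV2(); // v maps to the top of the quad, v2 to the bottom

float[][] texCoords = {
    { u,                v,                0f, 1f },           // top-left: untouched, q = 1
    { u2,               v,                0f, 1f },           // top-right: untouched, q = 1
    { u2 * bottomWidth, v2 * bottomWidth, 0f, bottomWidth },  // bottom-right: pre-multiplied
    { u  * bottomWidth, v2 * bottomWidth, 0f, bottomWidth },  // bottom-left: pre-multiplied
};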
In the vertex shader, you would need to pass the extra Q component through to the fragment shader (there's a sketch of that at the end of this answer). In the fragment shader, we normally look up our texture color using the size-divided texture coordinates like this:
vec4 textureColor = texture2D(u_texture, v_texCoords);
but in our case we still need to divide by that Q component:
vec4 textureColor = texture2D(u_texture, v_texCoords.st / v_texCoords.q);
However, this causes a dependent texture read because we are modifying a vector before it is passed into the texture function. GLSL provides a function that automatically does the above (and I assume does not cause a dependent texture read):
vec4 textureColor = texture2DProj(u_texture, v_texCoords); //first two components automatically divided by last component
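For completeness, the vertex shader side might look something like this sketch. The attribute and uniform names follow libGDX's usual defaults; the only real change from the stock SpriteBatch shader is that the texture coordinate is a vec4 so Q can be passed through untouched:

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec4 a_texCoord0;   // (u*q, v*q, 0, q) from the vertex data

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec4 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;            // pass Q through to the fragment shader
    gl_Position = u_projTrans * a_position;
}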