Question
I need to pass a 2D texture array to a fragment shader that performs texelFetch and bitwise operations on uchar elements.
For this reason, I need to create a texture with a very specific internal format, one that guarantees that my texels are stored as unsigned byte integers, and that no scaling or casting to float is performed.
I already have my image buffers in generic uint8_t* arrays.
So I am doing this:
osg::ref_ptr<osg::Texture2DArray> texture = new osg::Texture2DArray;
for (int i = 0; i < ntextures; i++) {
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->setImage(
        textureWidth,           // s
        256,                    // t
        1,                      // r
        GL_R8UI,                // texture internal format
        GL_RED_INTEGER,         // pixel format
        GL_UNSIGNED_BYTE,       // pixel type
        imageBufferPtr[i],
        osg::Image::NO_DELETE); // the buffers are owned by my code
    texture->setImage(i, image);
}
The previous code yields Invalid operation errors in the OpenGL pipeline.
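One thing I am not sure about is whether I also have to pin the format on the texture object itself and switch to NEAREST filtering (as far as I know, linear filtering is not defined for integer textures such as GL_R8UI). A minimal sketch of what I mean, using the standard osg::Texture setters:

// Sketch: force the texture object's formats to match the image, and
// use NEAREST filtering, since integer textures cannot be filtered linearly.
texture->setInternalFormat(GL_R8UI);
texture->setSourceFormat(GL_RED_INTEGER);
texture->setSourceType(GL_UNSIGNED_BYTE);
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);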
On the other hand, if I do
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(
    textureWidth,           // s
    256,                    // t
    1,                      // r
    GL_LUMINANCE,           // texture internal format
    GL_LUMINANCE,           // pixel format
    GL_UNSIGNED_BYTE,       // pixel type
    imageBufferPtr[i],
    osg::Image::NO_DELETE);
then I get no errors, but my texture arrives corrupted (some conversion is probably being performed).
How can I solve this?
I am already ensuring that no resizing of the texture to a power of two is performed.
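Concretely, I do that through the texture's resize hint; a minimal sketch of this part of my setup (assuming the standard osg::Texture / osg::Texture2DArray setters):

// Sketch: tell OSG not to rescale images to power-of-two dimensions,
// and fix the texture dimensions up front.
texture->setResizeNonPowerOfTwoHint(false);
texture->setTextureSize(textureWidth, 256, ntextures);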
Extra
This is my fragment shader:
#version 130
#extension GL_EXT_texture_array : enable
in vec4 texcoord;
uniform uint width;
uniform uint height;
uniform usampler2DArray textureA;
void main() {
    uint x = uint(texcoord.x * float(width));
    uint y = uint(texcoord.y * float(height));
    uint shift = x % 8u;
    uint mask = 1u << shift;
    uint octet = texelFetch(textureA, ivec3(x / 8u, y % 256u, y / 256u), 0).r;
    uint value = (octet & mask) >> shift;
    if (value > 0u)
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
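For completeness, this is roughly how I bind the texture and feed the uniforms on the C++ side (a simplified sketch; geode, imageWidthInPixels, and imageHeightInPixels are placeholders for my actual scene objects):

// Sketch: bind the array texture to unit 0 and point the sampler at it.
osg::StateSet* stateSet = geode->getOrCreateStateSet();
stateSet->setTextureAttributeAndModes(0, texture.get(), osg::StateAttribute::ON);
stateSet->addUniform(new osg::Uniform("textureA", 0));
stateSet->addUniform(new osg::Uniform("width", (unsigned int)imageWidthInPixels));
stateSet->addUniform(new osg::Uniform("height", (unsigned int)imageHeightInPixels));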
Source: https://stackoverflow.com/questions/40176720/openscenegraph-opengl-image-internalformat