OpenGL: Use a single-channel texture as an alpha channel to display text

Anonymous (unverified), submitted 2019-12-03 09:02:45

Question:

What I'm trying to do is load a texture into hardware from a single-channel data array and use it as the alpha channel to draw text onto an object. I am using OpenGL 4.

If I do this using a 4-channel RGBA texture it works perfectly fine, but for whatever reason, when I try to load only a single channel, I get a garbled image and I can't figure out why. I create the texture by combining the bitmap data for a series of glyphs into a single texture with the following code:

int texture_height = max_height * new_font->num_glyphs;
int texture_width = max_width;

new_texture->datasize = texture_width * texture_height;
unsigned char* full_texture = new unsigned char[new_texture->datasize];

// prefill texture as transparent
for (unsigned int j = 0; j < new_texture->datasize; j++)
    full_texture[j] = 0;

for (unsigned int i = 0; i < glyph_textures.size(); i++) {
    // set height offset for glyph
    new_font->glyphs[i].height_offset = max_height * i;
    for (unsigned int j = 0; j < new_font->glyphs[i].height; j++) {
        int full_disp = (new_font->glyphs[i].height_offset + j) * texture_width;
        int bit_disp = j * new_font->glyphs[i].width;
        for (unsigned int k = 0; k < new_font->glyphs[i].width; k++) {
            full_texture[full_disp + k] = glyph_textures[i][bit_disp + k];
        }
    }
}

Then I load the texture data by calling:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture)); 

My fragment shader executes the following code:

#version 330

uniform sampler2D texture;

in vec2 texcoord;
in vec4 pass_colour;

out vec4 out_colour;

void main() {
    float temp = texture2D(texture, texcoord).r;
    out_colour = vec4(pass_colour[0], pass_colour[1], pass_colour[2], temp);
}

I get an image that I can tell is generated from the texture, but it is terribly distorted and I'm unsure why. By the way, I'm using GL_RED because GL_ALPHA was removed from OpenGL 4. What really confuses me is why this works fine when I generate a 4-channel RGBA texture from the glyphs and then use its alpha channel?

Answer 1:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture)); 

This is technically legal but never a good idea.

First, you need to understand what the third parameter to glTexImage2D is. That's the actual image format of the texture. You are not creating a texture with one channel; you're creating a texture with four channels.

Next, you need to understand what the last three parameters do. These are the pixel transfer parameters; they describe what the pixel data you're giving to OpenGL looks like.

This command is saying, "create a 4-channel texture, then upload some data to just the red channel. This data is stored as an array of unsigned bytes." Uploading data to only some of the channels of a texture is technically legal, but almost never a good idea. If you want a single-channel texture, you should create a single-channel texture, and that means using the proper image format.
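As a quick illustration (a minimal sketch, not part of the original answer), you can query what the driver actually allocated after the upload; with the call above, the reported internal format will be a 4-channel RGBA format rather than a single-channel one:

GLint internal_format = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
// with GL_RGBA as the third argument, this reports a 4-channel format
// (typically GL_RGBA8), not GL_R8 or another single-channel format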

Next, things get more confusing:

new_texture->datasize = texture_width * texture_height*4; 

Your use of "*4" strongly suggests that you're creating four-channel pixel data. But you're only uploading one-channel data. The rest of your computations agree with this; you don't seem to ever fill in any data past full_texture[texture_width * texture_height]. So you're probably allocating more memory than you need.
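In other words (a sketch reusing the names from the question's code), a GL_RED / GL_UNSIGNED_BYTE upload only needs one byte per texel, so the allocation would simply be:

// one byte per texel is enough for GL_RED / GL_UNSIGNED_BYTE data
new_texture->datasize = texture_width * texture_height;
unsigned char* full_texture = new unsigned char[new_texture->datasize]();  // zero-initialised, i.e. fully transparent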

One last thing: always use sized internal formats. Never just use GL_RGBA; use GL_RGBA8 or GL_RGBA4 or whatever. Don't let the driver pick and hope it gives you a good one.

So, the correct upload would be this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, full_texture); 

FYI: the reinterpret_cast is unnecessary; even in C++, pointers can implicitly be converted into void*.
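Putting it together, a minimal sketch of the whole upload might look like this (texture->x, texture->y and full_texture are taken from the question; the glPixelStorei call is an added precaution, since rows of a one-byte-per-texel atlas are generally not 4-byte aligned, which OpenGL assumes by default):

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// the default unpack alignment is 4; tightly packed single-byte rows
// whose width is not a multiple of 4 need this set to 1
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, texture->x, texture->y, 0,
             GL_RED, GL_UNSIGNED_BYTE, full_texture);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);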



Answer 2:

I think you swapped the "internal format" and "format" parameters of glTexImage2D(). That is, you told it that you want RGBA in the texture object but only had RED in the file data, rather than vice versa.

Try to replace your call with the following:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, texture->x, texture->y, 0, GL_RGBA, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture)); 

