The internalformat of a texture

挽巷 2021-02-11 07:36

Look at the following OpenGL function:

void glTexImage2D(GLenum target, GLint level, GLint internalFormat,
                  GLsizei width, GLsizei height, GLint border,
                  GLenum format, GLenum type, const GLvoid *data);

2 Answers
  • 2021-02-11 07:44

    The format and type parameters describe the data you are passing to OpenGL as part of a pixel transfer operation. The internalformat describes the format of the texture itself. You're telling OpenGL that you're giving it data that looks like X, and OpenGL is to store it in a texture where the data is Y. The internalformat is "Y".
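
    As a minimal sketch of that distinction (illustrative only; width, height and pixels are assumed to exist), uploading 8-bit luminance data passes GL_LUMINANCE/GL_UNSIGNED_BYTE as the "X" and GL_LUMINANCE8 as the "Y":

        glTexImage2D(GL_TEXTURE_2D,
                     0,                 /* mipmap level */
                     GL_LUMINANCE8,     /* Y: how the texture stores the data */
                     width, height,
                     0,                 /* border, must be 0 */
                     GL_LUMINANCE,      /* X: layout of the data you pass in */
                     GL_UNSIGNED_BYTE,  /* X: component type of that data */
                     pixels);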

    The GL_LUMINANCE8 internal format represents a normalized unsigned integer format. This means that the data is conceptually floating-point, but stored in a normalized integer form as a means of compression. For example, a stored byte of 255 is sampled as 1.0, and 128 as 128/255 ≈ 0.502.

    For that matter, the format of GL_LUMINANCE says that you're passing either floating-point data or normalized integer data (the type says that it's normalized integer data). Of course, since there's no GL_LUMINANCE_INTEGER (which is how you say that you're passing integer data, to be used with integer internal formats), you can't really use luminance data like this.

    Use GL_RED_INTEGER for the format and GL_R8UI for the internal format if you really want 8-bit unsigned integers in your texture. Note that integer texture support requires OpenGL 3.x-class hardware.

    That being said, you cannot use sampler2D with an integer texture. If you are using a texture that uses an unsigned integer texture format, you must use usampler2D.
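
    Putting both pieces together, a sketch (the helper name upload_r8ui and the #version 330 shader are illustrative; a GL 3.x context with these enums available is assumed):

        #include <GL/gl.h>  /* assumes headers/loader exposing GL 3.x enums */

        /* GL_R8UI (internal format) + GL_RED_INTEGER (format) means the
         * bytes are kept as true integers; no normalization to 0.0-1.0. */
        void upload_r8ui(GLsizei width, GLsizei height, const unsigned char *pixels)
        {
            glTexImage2D(GL_TEXTURE_2D, 0,
                         GL_R8UI,           /* internal format */
                         width, height, 0,
                         GL_RED_INTEGER,    /* format: integer, not normalized */
                         GL_UNSIGNED_BYTE,  /* type */
                         pixels);
            /* Integer textures must also use non-linear filtering: */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        }

        /* Matching fragment shader: sampling goes through usampler2D,
         * and texture() returns the raw 0..255 values as a uvec4. */
        static const char *frag_src =
            "#version 330 core\n"
            "uniform usampler2D tex;\n"
            "in vec2 uv;\n"
            "out vec4 color;\n"
            "void main() {\n"
            "    uint v = texture(tex, uv).r;\n"
            "    color = vec4(vec3(float(v) / 255.0), 1.0);\n"
            "}\n";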

  • 2021-02-11 07:47

    How the value is stored internally is not necessarily relevant to how you would access it in GLSL. Using normalised colour values (0-1) is much easier in practice. Is there some reason you want to manipulate pixel values in your pixel shaders in the range of (0-255)?
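
    For illustration (a sketch; tex and uv are placeholder names, not from the original answer), a fragment shader sees normalized components and only needs to scale them if byte-range values are genuinely required:

        static const char *frag_src =
            "#version 330 core\n"
            "uniform sampler2D tex;\n"
            "in vec2 uv;\n"
            "out vec4 color;\n"
            "void main() {\n"
            "    vec4 c = texture(tex, uv);       // components already in 0.0..1.0\n"
            "    float byteValue = c.r * 255.0;   // only if 0..255 is truly needed\n"
            "    color = vec4(vec3(byteValue / 255.0), 1.0);  // back to normalized\n"
            "}\n";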
