The internalformat of a texture

Backend · Open · 2 answers · 1063 views

挽巷 · 2021-02-11 07:36

Look at the following OpenGL function:

void glTexImage2D(GLenum       target,
                  GLint        level,
                  GLint        internalFormat,
                  GLsizei      width,
                  GLsizei      height,
                  GLint        border,
                  GLenum       format,
                  GLenum       type,
                  const GLvoid *data);
2 Answers
  •  小鲜肉 (OP) · 2021-02-11 07:44

    The format and type parameters describe the data you are passing to OpenGL as part of a pixel transfer operation. The internalformat describes the format of the texture itself. You're telling OpenGL that you're giving it data that looks like X, and OpenGL is to store it in a texture where the data is Y. The internalformat is "Y".
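    As an illustrative sketch of that distinction (assuming a current OpenGL context and a texture bound to GL_TEXTURE_2D; the `pixels` array is hypothetical application data), uploading 8-bit RGBA pixels (the "X") into a texture stored as GL_RGBA8 (the "Y") might look like:

    ```c
    /* Sketch only: assumes a current GL context and a bound GL_TEXTURE_2D. */
    const unsigned char pixels[4 * 4 * 4] = {0};  /* 4x4 RGBA, all zeros */

    glTexImage2D(GL_TEXTURE_2D,    /* target */
                 0,                /* mipmap level */
                 GL_RGBA8,         /* internalformat: how GL stores it ("Y") */
                 4, 4,             /* width, height */
                 0,                /* border (must be 0) */
                 GL_RGBA,          /* format of the data you pass ("X") */
                 GL_UNSIGNED_BYTE, /* type of the data you pass ("X") */
                 pixels);
    ```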

    The GL_LUMINANCE8 internal format represents a normalized unsigned integer format. This means that the data is conceptually floating-point, but stored in a normalized integer form as a means of compression.
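    The normalization is just a fixed-point mapping: an 8-bit stored value b is interpreted as b / 255.0 when sampled. A minimal standalone sketch of that conversion (the helper name `normalize_u8` is made up for illustration):

    ```c
    #include <stdio.h>

    /* A texel of a normalized 8-bit format (e.g. GL_LUMINANCE8) stores an
     * integer 0..255, but sampling it yields the float texel / 255.0. */
    static float normalize_u8(unsigned char texel) {
        return texel / 255.0f;
    }

    int main(void) {
        printf("%.3f\n", normalize_u8(0));    /* 0.000 */
        printf("%.3f\n", normalize_u8(255));  /* 1.000 */
        printf("%.3f\n", normalize_u8(51));   /* 0.200 */
        return 0;
    }
    ```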

    For that matter, the format GL_LUMINANCE says that you're passing either floating-point data or normalized integer data (the type says that it's normalized integer data). Of course, since there's no GL_LUMINANCE_INTEGER (which is how you say that you're passing integer data, to be used with integer internal formats), you can't really use luminance data like this.

    Use GL_RED_INTEGER for the format and GL_R8UI for the internal format if you really want 8-bit unsigned integers in your texture. Note that integer texture support requires OpenGL 3.x-class hardware.
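    Putting those together, an upload of raw 8-bit unsigned integers might look like the following sketch (assuming a current OpenGL 3.x context and a bound texture; the `counts` array is hypothetical):

    ```c
    /* Sketch only: assumes a current GL 3.x context and a bound GL_TEXTURE_2D. */
    const unsigned char counts[4 * 4] = {0};  /* hypothetical per-pixel counters */

    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 GL_R8UI,           /* internalformat: one 8-bit unsigned integer */
                 4, 4,
                 0,
                 GL_RED_INTEGER,    /* format: integer data, not normalized */
                 GL_UNSIGNED_BYTE,  /* type */
                 counts);
    ```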

    That being said, you cannot use sampler2D with an integer texture. If you are using a texture that uses an unsigned integer texture format, you must use usampler2D.
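    In the shader, that means declaring the sampler with the unsigned-integer prefix; a GLSL fragment-shader sketch (uniform and variable names are made up for illustration):

    ```glsl
    #version 330 core

    uniform usampler2D counts;  /* unsigned-integer texture, e.g. GL_R8UI */
    in vec2 uv;
    out vec4 color;

    void main() {
        uint c = texture(counts, uv).r;  /* returns uvec4; no normalization */
        color = vec4(float(c) / 255.0);  /* convert manually if needed */
    }
    ```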
