Question
I can't for the life of me work out how to write to a depth texture using imageStore
within a compute shader. I've checked what I'm doing against several examples (e.g. this and this), but I still can't spot the fault. I can write to the texture as a framebuffer attachment, and when calling glTexImage2D()
, but for some reason executing this compute shader doesn't affect the named texture (which I'm checking by rendering it to the screen).
You can skip straight to the accepted answer below if the above applies.
Below I've extracted the relevant code in a clearer format. It's part of a much larger project, which wraps common GL operations in classes with a bunch of error checking, but I've written the project myself, so I know what is and isn't being called.
I have my compute shader. It's very simple: it should write 0.5f
to every pixel (which in my debugging render would output as cyan).
#version 430
layout(local_size_x=16,local_size_y=16) in;
uniform uvec2 _imageDimensions;
uniform layout (r32f) writeonly image2D _imageOut;
void main()
{
    if(gl_GlobalInvocationID.x<_imageDimensions.x
      &&gl_GlobalInvocationID.y<_imageDimensions.y)
    {
        imageStore(_imageOut, ivec2(gl_GlobalInvocationID.xy), vec4(0.5f));
    }
}
I create the texture using
glm::uvec2 shadowDims = glm::uvec2(4096);
GLuint shadowOut;
glGenTextures(1, &shadowOut);
glBindTexture(GL_TEXTURE_2D, shadowOut);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, shadowDims.x, shadowDims.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glBindTexture(GL_TEXTURE_2D, 0);
I configure the shader using
glUseProgram(computeShader);
//Dimensions
GLint location = glGetUniformLocation(computeShader, "_imageDimensions");
glUniform2uiv(location, 1, glm::value_ptr(shadowDims));
//Image unit
const int IMAGE_UNIT = 0;
location = glGetUniformLocation(computeShader, "_imageOut");
glUniform1iv(location, 1, &IMAGE_UNIT);
glUseProgram(0);
and finally I launch the shader with
glUseProgram(computeShader);
glBindImageTexture(IMAGE_UNIT, shadowOut, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
//Split the image into work groups of size 16x16
glm::uvec2 launchConfig = shadowDims / glm::uvec2(16) + glm::uvec2(1);
glDispatchCompute(launchConfig.x, launchConfig.y, 1);
//Synchronise written memory
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
I've tried a few tweaks, but whatever I do, the texture keeps rendering however I first configured it at initialisation.
There are no GL errors occurring (I wrap every GL call in an error-checking preprocessor macro), and although my actual code is more abstract than what's shown above, I'm confident there are no bugs causing variables to be lost or changed.
Answer 1:
I can't for the life of me work out how to write to a (depth) texture using imageStore within a compute shader
That's because you can't.
Image Load/Store can only be used to read/write color image formats. Depth and/or stencil formats cannot use it.
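If the compute shader only needs to fill a single-channel float texture, one workable approach (a minimal sketch reusing the question's shadowOut/shadowDims/IMAGE_UNIT names, not the asker's original code) is to allocate that texture with a colour format such as GL_R32F, which then matches both the shader's r32f layout qualifier and the format passed to glBindImageTexture:
// Allocate the imageStore target with a colour format instead of a depth format
glm::uvec2 shadowDims = glm::uvec2(4096);
GLuint shadowOut;
glGenTextures(1, &shadowOut);
glBindTexture(GL_TEXTURE_2D, shadowOut);
// GL_R32F is a colour image format, so image load/store is legal on it
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, shadowDims.x, shadowDims.y, 0, GL_RED, GL_FLOAT, 0);
glBindTexture(GL_TEXTURE_2D, 0);

// The binding now agrees with both the texture's internal format and the
// shader's "layout(r32f) writeonly image2D" declaration
glBindImageTexture(IMAGE_UNIT, shadowOut, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
What you get is an ordinary colour texture holding the values, not a depth texture.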
And no, you cannot use glCopyImageSubData to copy between color and depth images. Compute shaders can read depth/stencil formats, but only through samplers, not through image variables.
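For the read side, here is a minimal sketch (with hypothetical _depthIn/_depthDims names) of a compute shader consuming an existing depth texture through a sampler, assuming the texture is bound to a texture unit with GL_TEXTURE_COMPARE_MODE set to GL_NONE:
#version 430
layout(local_size_x=16,local_size_y=16) in;
uniform uvec2 _depthDims;
// The depth texture is bound to a texture unit and declared as a sampler,
// not as an image variable
uniform sampler2D _depthIn;
void main()
{
    if(gl_GlobalInvocationID.x<_depthDims.x
      &&gl_GlobalInvocationID.y<_depthDims.y)
    {
        // Depth textures return their value in the .r component
        float depth = texelFetch(_depthIn, ivec2(gl_GlobalInvocationID.xy), 0).r;
    }
}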
Source: https://stackoverflow.com/questions/41108976/imagestore-in-compute-shader-to-depth-texture