Background
I am working on a 3D game using C++ and modern OpenGL (3.3). I am now working on lighting and shadow rendering, and I've successfully implemented directional shadow mapping. After reading over the requirements for the game, I have decided that I'll need point-light shadow mapping. After doing some research, I discovered that omnidirectional shadow mapping is done similarly to directional shadow mapping, but with a cubemap instead of a single depth texture.
I have no previous knowledge of cubemaps, but my understanding of them is that a cubemap is six textures, seamlessly attached. I did some looking around, but unfortunately I struggled to find a definitive "tutorial" on the subject for modern OpenGL. I look for tutorials first, ones that explain it from start to finish, because I seriously struggle to learn from snippets of source code or bare concepts, but I tried.
Current understandings
Here is my general understanding of the idea, minus the technicalities. Please correct me.
- For each point light, a framebuffer is set up, like in directional shadow mapping
- A single cubemap texture is then generated and bound with glBindTexture(GL_TEXTURE_CUBE_MAP, shadowmap). The cubemap is set up with the following attributes:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
(this is also similar to directional shadowmapping)
Now glTexImage2D() is called six times, once for each face. I do that like this:
for (int face = 0; face < 6; face++) // Allocate each face of the shadow cubemap
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
The texture is attached to the framebuffer with a call to
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowmap, 0);
When the scene is to be rendered, it is rendered in two passes, like directional shadow mapping.
- First of all, the shadow framebuffer is bound and the viewport is adjusted to the size of the shadowmap (1024 by 1024 in this case).
- Culling is set to the front faces with
glCullFace(GL_FRONT)
- The active shader program is switched to the vertex and fragment shadow shaders that I will provide the sources of further down
The light view matrices for all six views are calculated. I do it by creating a vector of glm::mat4's and push_back()-ing the matrices. Each object is then rendered once per face, like this:
for (int i = 0; i < renderedObjects.size(); i++) // Iterate through all rendered objects
{
    renderedObjects[i]->bindBuffers(); // Bind buffers for rendering with it
    glm::mat4 depthModelMatrix = renderedObjects[i]->getModelMatrix(); // Set up model matrix
    for (int face = 0; face < 6; face++) // Draw for each side of the light
    {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, shadowmap, 0);
        glClear(GL_DEPTH_BUFFER_BIT); // Clear depth buffer

        // Send MVP for shadow map
        glm::mat4 depthMVP = depthProjectionMatrix * depthViewMatrices[face] * depthModelMatrix;
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightViewMatrix"), 1, GL_FALSE, glm::value_ptr(depthViewMatrices[face]));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightProjectionMatrix"), 1, GL_FALSE, glm::value_ptr(depthProjectionMatrix));

        glDrawElements(renderedObjects[i]->getDrawType(), renderedObjects[i]->getElementSize(), GL_UNSIGNED_INT, 0);
    }
}
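For reference, here are the conventional per-face look directions and up vectors I use to build depthViewMatrices, shown as plain C++ data (the GLM calls in the comment sketch one way to construct the matrices; lightPos, near and far stand in for values from my own code):

```cpp
#include <cassert>

// One entry per cubemap face, in GL_TEXTURE_CUBE_MAP_POSITIVE_X + i order.
// With GLM, each view matrix would be built roughly like
//   depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + target, up));
// paired with a square 90-degree projection such as
//   depthProjectionMatrix = glm::perspective(glm::radians(90.0f), 1.0f, near, far);
struct FaceBasis { float target[3]; float up[3]; };

static const FaceBasis kFaces[6] = {
    {{ 1.f,  0.f,  0.f}, {0.f, -1.f,  0.f}}, // +X
    {{-1.f,  0.f,  0.f}, {0.f, -1.f,  0.f}}, // -X
    {{ 0.f,  1.f,  0.f}, {0.f,  0.f,  1.f}}, // +Y
    {{ 0.f, -1.f,  0.f}, {0.f,  0.f, -1.f}}, // -Y
    {{ 0.f,  0.f,  1.f}, {0.f, -1.f,  0.f}}, // +Z
    {{ 0.f,  0.f, -1.f}, {0.f, -1.f,  0.f}}, // -Z
};

// Small helper so the basis can be sanity-checked.
static float dot3(const float a[3], const float b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```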
The default framebuffer is bound, and the scene is drawn normally.
Issue
Now, to the shaders. This is where my understanding runs dry. I am completely unsure of what I should do; my research seems to conflict with itself, because it's for different versions. I ended up blindly copying and pasting code from random sources, hoping it'd achieve something other than a black screen. I know this is terrible, but there don't seem to be any clear definitions on what to do. What spaces do I work in? Do I even need a separate shadow shader, like I used in directional shadow mapping? What the hell do I use as the type for a shadow cubemap? samplerCube? samplerCubeShadow? How do I sample said cubemap properly? I hope that someone can clear it up for me and provide a nice explanation.
My current understanding of the shader part is:
- When the scene is being rendered into the cubemap, the vertex shader simply takes the depthMVP uniform I calculated in my C++ code and transforms the input vertices by it.
- The fragment shader of the cubemap pass simply assigns its single output value to gl_FragCoord.z. (This part is unchanged from when I implemented directional shadow mapping. I assumed it would be the same for cubemapping because the shaders don't even interact with the cubemap - OpenGL simply renders their output to the cubemap, right? Because it's a framebuffer?)
- The vertex shader for the normal rendering is unchanged.
- In the fragment shader for normal rendering, the vertex position is transformed into the light's space with the light's projection and view matrix.
- That's somehow used in the cubemap texture lookup. ???
- Once the depth has been retrieved using magical means, it is compared to the distance of the light to the vertex, much like directional shadowmapping. If it's less, that point must be shadowed, and vice-versa.
It's not much of an understanding. I go blank as to how the vertices are transformed and used to look up the cubemap, so I'm going to paste the source of my shaders, in the hope that people can clarify this. Please note that a lot of this code is blind copying and pasting; I haven't altered anything, so as not to jeopardise any understanding.
Shadow vertex shader:
#version 150
in vec3 position;
uniform mat4 depthMVP;
void main()
{
gl_Position = depthMVP * vec4(position, 1);
}
Shadow fragment shader:
#version 150
out float fragmentDepth;
void main()
{
fragmentDepth = gl_FragCoord.z;
}
Standard vertex shader:
#version 150
in vec3 position;
in vec3 normal;
in vec2 texcoord;
uniform mat3 modelInverseTranspose;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 fragnormal;
out vec3 fragnormaldirection;
out vec2 fragtexcoord;
out vec4 fragposition;
out vec4 fragshadowcoord;
void main()
{
fragposition = modelMatrix * vec4(position, 1.0);
fragtexcoord = texcoord;
fragnormaldirection = normalize(modelInverseTranspose * normal);
fragnormal = normalize(normal);
fragshadowcoord = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
Standard fragment shader:
#version 150
out vec4 outColour;
in vec3 fragnormaldirection;
in vec2 fragtexcoord;
in vec3 fragnormal;
in vec4 fragposition;
in vec4 fragshadowcoord;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrixInversed;
uniform mat4 lightViewMatrix;
uniform mat4 lightProjectionMatrix;
uniform sampler2D tex;
uniform samplerCubeShadow shadowmap;
float VectorToDepthValue(vec3 Vec)
{
vec3 AbsVec = abs(Vec);
float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
const float f = 2048.0;
const float n = 1.0;
float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS)) // To avoid self shadowing, I guess
return 1.0;
return 0.7;
}
void main()
{
vec3 light_position = vec3(0.0, 0.0, 0.0);
vec3 VertToLightWS = light_position - fragposition.xyz;
outColour = texture(tex, fragtexcoord) * ComputeShadowFactor(shadowmap, VertToLightWS);
}
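To check I at least follow the VectorToDepthValue math, here is a plain C++ transcription of it (just a sanity check of the depth mapping, not renderer code): the largest absolute component of the light-to-fragment vector is pushed through the standard perspective depth projection for near = 1.0 and far = 2048.0, so the near plane maps to 0.0 and the far plane to 1.0.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Plain C++ transcription of the shader's VectorToDepthValue:
// takes the light-to-fragment vector and returns the [0, 1] depth value
// that the corresponding cubemap face would store (near = 1.0, far = 2048.0).
float VectorToDepthValue(float x, float y, float z)
{
    // The largest absolute component is the distance to the cube face
    // that this direction projects onto.
    float localZ = std::max({std::fabs(x), std::fabs(y), std::fabs(z)});
    const float f = 2048.0f;
    const float n = 1.0f;
    // Standard perspective depth projection of localZ into NDC [-1, 1]...
    float ndcZ = (f + n) / (f - n) - (2.0f * f * n) / (f - n) / localZ;
    // ...then remapped to the [0, 1] range stored in the depth buffer.
    return (ndcZ + 1.0f) * 0.5f;
}
```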
I can't remember where the ComputeShadowFactor and VectorToDepthValue function code came from, because I was researching it on my laptop, which I can't get to right now, but this is the result of those shaders:
It is a small square of unshadowed space surrounded by shadowed space.
I am obviously doing a lot wrong here, probably centred on my shaders, due to a lack of knowledge of the subject, because I find it difficult to learn from anything but tutorials, and I am very sorry for that. I am at a loss; it would be wonderful if someone could shed light on this with a clear explanation of what I am doing wrong, why it's wrong, how I can fix it, and maybe even some code. I think the issue may be that I am working in the wrong spaces.
I hope to provide an answer to some of your questions, but first some definitions are required:
What is a cubemap?
It is a map from a direction vector to a pair of [face, 2D coordinates on that face], obtained by projecting the direction vector onto a hypothetical cube.
What is an OpenGL cubemap texture?
It is a set of six "images".
What is a GLSL cubemap sampler?
It is a sampler primitive from which cubemap sampling can be done. This means that it is sampled using a direction vector instead of the usual texture coordinates. The hardware then projects the direction vector onto a hypothetical cube and uses the resulting [face, 2D texture coordinate] pair to sample the right "image" at the right 2D position.
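To make that projection concrete, here is a plain C++ sketch of the face selection and 2D-coordinate derivation the hardware performs (the function and struct names are made up for illustration; the rules follow the cubemap face-selection table in the OpenGL specification):

```cpp
#include <cassert>
#include <cmath>

struct FaceUV { int face; float s, t; }; // face: 0..5 = +X, -X, +Y, -Y, +Z, -Z

// Project a direction vector onto the cube: pick the axis with the largest
// absolute component (the "major axis"), then derive 2D coordinates on the
// face that axis selects.
FaceUV CubemapLookup(float rx, float ry, float rz)
{
    float ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
    int face; float ma, sc, tc;
    if (ax >= ay && ax >= az) {        // X is the major axis
        ma = ax;
        face = rx > 0.0f ? 0 : 1;
        sc = rx > 0.0f ? -rz : rz;
        tc = -ry;
    } else if (ay >= az) {             // Y is the major axis
        ma = ay;
        face = ry > 0.0f ? 2 : 3;
        sc = rx;
        tc = ry > 0.0f ? rz : -rz;
    } else {                           // Z is the major axis
        ma = az;
        face = rz > 0.0f ? 4 : 5;
        sc = rz > 0.0f ? rx : -rx;
        tc = -ry;
    }
    // Map sc and tc from [-ma, ma] to [0, 1] texture coordinates on that face.
    return FaceUV{face, 0.5f * (sc / ma + 1.0f), 0.5f * (tc / ma + 1.0f)};
}
```

Note that the length of the direction vector cancels out entirely: only its direction matters, which is why cubemap lookups take an unnormalized vector.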
What is a GLSL shadow sampler?
It is a sampler primitive that is bound to a texture containing NDC-space depth values and, when sampled using the shadow-specific sampling functions, returns a "comparison" between an NDC-space depth (in the same space as the shadow map, obviously) and the NDC-space depth stored inside the bound texture. The depth to compare against is specified as an additional element in the texture coordinates when calling the sampling function. Note that shadow samplers are provided for ease of use and speed, but it is always possible to do the comparison "manually" in the shader.
Now, for your questions:
OpenGL simply renders [...] to the cubemap, right?
No, OpenGL renders to a set of targets in the currently bound framebuffer.
In the case of cubemaps, the usual way to render in them is:
- create the cubemap and attach each of its six "images" to the same framebuffer (at different attachment points, obviously)
- enable only one target at a time (so you render into each cubemap face individually)
- render what you want into that cubemap face (possibly using face-specific "view" and "projection" matrices)
Point-light shadow maps
In addition to everything said about cubemaps, there are a number of problems in using them to implement point-light shadow mapping, so the hardware depth comparison is rarely used.
Instead, common practice is the following:
- instead of writing NDC-space depth, write the radial distance from the point light
- when querying the shadow map (see the sample code at the bottom):
- do not use hardware depth comparisons (use samplerCube instead of samplerCubeShadow)
- transform the point to be tested into "cube space" (which does not include projection at all)
- use the "cube-space" vector as the lookup direction to sample the cubemap
- compare the radial distance sampled from the cubemap with the radial distance of the tested point
Sample code
// sample radial distance from the cubemap
float radial_dist = texture(my_cubemap, cube_space_vector).x;
// compare against test point radial distance
bool shadowed = length(cube_space_vector) > radial_dist;
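The same comparison, mocked up end to end on the CPU (a sketch: the sampled cubemap texel is replaced by a hypothetical occluder distance, just to show the symmetry between what the first pass writes and what the lookup pass compares):

```cpp
#include <cassert>
#include <cmath>

// Length of the cube-space vector, i.e. fragment_world_pos - light_world_pos
// (no projection matrix involved anywhere).
static float Length3(float x, float y, float z)
{
    return std::sqrt(x * x + y * y + z * z);
}

// The first pass writes the radial distance of the nearest occluder along
// each direction; the lookup pass compares the fragment's own radial
// distance against that stored value.
bool IsShadowed(float occluderRadialDist, float fx, float fy, float fz)
{
    float fragRadialDist = Length3(fx, fy, fz);
    // Shadowed if something nearer to the light was recorded in this
    // direction (a small bias would be added in practice to avoid acne).
    return fragRadialDist > occluderRadialDist;
}
```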
Source: https://stackoverflow.com/questions/13999830/pointers-on-modern-opengl-shadow-cubemapping