What would be the best way of offsetting depth in OpenGL? I currently have an index vertex attribute per polygon which I am passing to the vertex shader in OpenGL. My goal is to offset the depth per polygon.
The usual way to apply an automatic depth offset is glPolygonOffset(GLfloat factor, GLfloat units):

When GL_POLYGON_OFFSET_FILL, GL_POLYGON_OFFSET_LINE, or GL_POLYGON_OFFSET_POINT is enabled, each fragment's depth value will be offset after it is interpolated from the depth values of the appropriate vertices. The value of the offset is factor * DZ + r * units, where DZ is a measurement of the change in depth relative to the screen area of the polygon, and r is the smallest value that is guaranteed to produce a resolvable offset for a given implementation. The offset is added before the depth test is performed and before the value is written into the depth buffer.
glEnable( GL_POLYGON_OFFSET_FILL );
glPolygonOffset( 1.0, 1.0 );
If you want to manipulate the depth manually, then you have to set gl_FragDepth inside the fragment shader.

gl_FragDepth (Fragment Shader):

Available only in the fragment language, gl_FragDepth is an output variable that is used to establish the depth value for the current fragment. If depth buffering is enabled and no shader writes to gl_FragDepth, then the fixed function value for depth will be used (this value is contained in the z component of gl_FragCoord); otherwise, the value written to gl_FragDepth is used.
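Since the question passes a per-polygon index to the vertex shader, a fragment shader along these lines could apply a per-polygon depth offset. This is a sketch under assumptions: the input name v_polygon_index and the uniform u_offset_scale are illustrative, not part of any fixed API:

```glsl
#version 330 core

// Assumed inputs: a flat per-polygon index forwarded by the vertex
// shader, and a scale factor; both names are illustrative.
flat in int v_polygon_index;
uniform float u_offset_scale;

out vec4 frag_color;

void main()
{
    frag_color = vec4(1.0);

    // Start from the fixed-function depth (gl_FragCoord.z) and nudge it
    // by a per-polygon amount, clamped to the valid depth range.
    gl_FragDepth = clamp(gl_FragCoord.z + float(v_polygon_index) * u_offset_scale,
                         0.0, 1.0);
}
```

Be aware that writing gl_FragDepth disables early depth testing for the draw call, which can cost performance.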
In general, gl_FragDepth is calculated as follows (see GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth), where nearZ and farZ are the depth range set by glDepthRange:
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
gl_FragDepth = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0;
The minimum offset you need to add to or subtract from the depth to get a resolvable difference depends on the format of the depth buffer.
The depth buffer formats GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 and GL_DEPTH_COMPONENT32 are normalized integer formats, where the 16, 24 or 32 bit integer range is mapped onto the depth values [0, 1].
On the other hand, the format GL_DEPTH_COMPONENT32F is an IEEE 754 standard 32 bit floating point format.