Question
I have a supposedly simple task, but apparently I still don't understand how projections work in shaders. I need to do a 2D perspective transformation on a textured quad (2 triangles), but visually it doesn't look correct (e.g. the trapezoid is slightly taller or more stretched than in the CPU version).
I have this struct:
struct VertexInOut
{
    float4 position [[position]];
    float3 warp0;   // homogeneous (pre-divide) sample coordinates, one per homography
    float3 warp1;
    float3 warp2;
    float3 warp3;
};
And in the vertex shader I do something like this (texCoords are the pixel coordinates of the quad corners, and the homography is calculated in pixel coordinates):
v.warp0 = texCoords[vid] * homographies[0];
Then I sample in the fragment shader like this:
return intensity.sample(s, inFrag.warp0.xy / inFrag.warp0.z);
The result is not what I expect. I have spent hours on this, but I cannot figure it out.
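(One detail worth keeping in mind with the vertex-shader line above: in Metal, as in GLSL and Apple's simd library, the operand order decides whether the vector is treated as a row or a column vector. A short sketch of the two conventions:)

// v * M treats v as a row vector; M * v treats it as a column vector.
// The two are related by a transpose: v * M == transpose(M) * v.
float3 rowResult = texCoords[vid] * homographies[0];
float3 colResult = homographies[0] * texCoords[vid];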
UPDATE:
This is the code and result for the CPU version (i.e. the expected result):
// _image contains the original image
cv::Matx33d h(1.03140473, 0.0778113901, 0.000169219566,
              0.0342947133, 1.06025684, 0.000459250761,
              -0.0364957005, -38.3375587, 0.818259298);
cv::Mat dest(_image.size(), CV_8UC4);
// h is transposed because the values above are listed column by column (matching the
// simd::float3x3 initializer) while cv::Matx is row-major; WARP_INVERSE_MAP (backwarping)
// is used because that is what happens on the GPU, so the comparison is fair
cv::warpPerspective(_image, dest, h.t(), _image.size(), cv::WARP_INVERSE_MAP | cv::INTER_LINEAR);
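(To make explicit what the GPU is supposed to reproduce: with WARP_INVERSE_MAP, warpPerspective maps each destination pixel back into the source and samples there. A minimal standalone sketch of that per-pixel mapping; the pixel (100, 100) is an arbitrary example, not taken from the question:)

#include <opencv2/core.hpp>
#include <cstdio>

int main()
{
    cv::Matx33d h(1.03140473, 0.0778113901, 0.000169219566,
                  0.0342947133, 1.06025684, 0.000459250761,
                  -0.0364957005, -38.3375587, 0.818259298);
    // With WARP_INVERSE_MAP, each destination pixel (x, y) is backwarped:
    // source = H * (x, y, 1), followed by the perspective divide.
    cv::Vec3d src = h.t() * cv::Vec3d(100.0, 100.0, 1.0);
    std::printf("dest (100, 100) samples source (%f, %f)\n",
                src[0] / src[2], src[1] / src[2]);
    return 0;
}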
This is the code and result for the GPU version (i.e. the wrong result):
// constants passed in buffers, image size 320x240
// quad corners in normalized device coordinates, drawn as a triangle strip
const simd::float4 quadVertices[4] =
{
    { -1.0f, -1.0f, 0.0f, 1.0f },
    { +1.0f, -1.0f, 0.0f, 1.0f },
    { -1.0f, +1.0f, 0.0f, 1.0f },
    { +1.0f, +1.0f, 0.0f, 1.0f },
};
// matching texture coordinates in pixels, as homogeneous (x, y, 1) vectors
const simd::float3 textureCoords[4] =
{
    { 0, IMAGE_HEIGHT, 1.0f },
    { IMAGE_WIDTH, IMAGE_HEIGHT, 1.0f },
    { 0, 0, 1.0f },
    { IMAGE_WIDTH, 0, 1.0f },
};
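(For context, this is roughly how such constants could be bound before the draw call. A minimal sketch using metal-cpp, not the author's actual host code; the encoder, the homography value and the texture are assumed to be created elsewhere:)

#include <Metal/Metal.hpp>
#include <simd/simd.h>

// Hypothetical helper: binds the constants above and issues the draw.
void encodeHomographyPass(MTL::RenderCommandEncoder *encoder,
                          const simd::float3x3 &homography,
                          MTL::Texture *intensityTexture)
{
    encoder->setVertexBytes(quadVertices, sizeof(quadVertices), 0);   // buffer(0)
    encoder->setVertexBytes(textureCoords, sizeof(textureCoords), 1); // buffer(1)
    encoder->setVertexBytes(&homography, sizeof(homography), 2);      // buffer(2)
    encoder->setFragmentTexture(intensityTexture, 1);                 // texture(1)
    // the four NDC corners above form the quad as a strip of two triangles
    encoder->drawPrimitives(MTL::PrimitiveTypeTriangleStrip, NS::UInteger(0), NS::UInteger(4));
}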
// vertex shader
// (in this version VertexInOut carries a single float3 warp member instead of warp0..warp3)
vertex VertexInOut homographyVertex(uint vid [[ vertex_id ]],
                                    constant float4 *positions [[ buffer(0) ]],
                                    constant float3 *texCoords [[ buffer(1) ]],
                                    constant simd::float3x3 *homographies [[ buffer(2) ]])
{
    VertexInOut v;
    v.position = positions[vid];
    // example homography (each brace is a column of the 3x3 matrix)
    simd::float3x3 h = {
        {1.03140473, 0.0778113901, 0.000169219566},
        {0.0342947133, 1.06025684, 0.000459250761},
        {-0.0364957005, -38.3375587, 0.818259298}
    };
    v.warp = h * texCoords[vid];
    return v;
}
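(A useful sanity check when the GPU output looks off: compute the same warp on the host for the four corners and compare against the CPU path. A minimal sketch using Apple's simd library, whose simd_matrix/simd_mul mirror the shader's column-major float3x3 and the h * v multiply; 320x240 is the image size stated above:)

#include <simd/simd.h>
#include <cstdio>

int main()
{
    // Same columns as the float3x3 initializer in the vertex shader.
    simd::float3x3 h = simd_matrix(
        simd_make_float3(1.03140473f, 0.0778113901f, 0.000169219566f),
        simd_make_float3(0.0342947133f, 1.06025684f, 0.000459250761f),
        simd_make_float3(-0.0364957005f, -38.3375587f, 0.818259298f));
    const simd::float3 corners[4] = {
        {0.0f, 240.0f, 1.0f}, {320.0f, 240.0f, 1.0f},
        {0.0f, 0.0f, 1.0f}, {320.0f, 0.0f, 1.0f}};
    for (const simd::float3 &c : corners)
    {
        simd::float3 w = simd_mul(h, c); // same as h * texCoords[vid] in the shader
        std::printf("corner (%g, %g) warps to (%g, %g)\n",
                    c.x, c.y, w.x / w.z, w.y / w.z);
    }
    return 0;
}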
// fragment shader
// (return type and texture element type fixed to float4/float so the sample assignment
// compiles; the texture name is unified to match the parameter declaration)
fragment float4 homographyFragment(VertexInOut inFrag [[stage_in]],
                                   texture2d<float, access::sample> intensity [[ texture(1) ]])
{
    constexpr sampler s(coord::pixel, filter::linear, address::clamp_to_zero);
    // perspective divide per fragment, then sample in pixel coordinates
    float4 targetIntensity = intensity.sample(s, inFrag.warp.xy / inFrag.warp.z);
    return targetIntensity;
}
Original image: (not shown)
UPDATE 2:
Contrary to the common belief that the perspective divide should be done in the fragment shader, I get a much more similar result if I divide in the vertex shader (with no distortion or seam between the triangles). But why?
UPDATE 3:
I get the same (wrong) result if:
- I move the perspective divide to the fragment shader
- I simply remove the divide from the code
Very strange; it looks like the divide is not happening at all.
Answer 1:
OK, the solution was of course a very small detail: division involving a simd::float3 goes absolutely nuts. In fact, if I do the perspective divide in the fragment shader like this:
float4 targetIntensity = intensity.sample(s, inFrag.warp.xy * (1.0 / inFrag.warp.z));
it works!
This led me to find out that multiplying by the pre-divided (reciprocal) float is different from dividing by the float directly. The reason for this is still unknown to me; if anyone knows why, we can unravel this mystery.
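For completeness, this is what the full fragment shader from the question looks like with the workaround applied:

fragment float4 homographyFragment(VertexInOut inFrag [[stage_in]],
                                   texture2d<float, access::sample> intensity [[ texture(1) ]])
{
    constexpr sampler s(coord::pixel, filter::linear, address::clamp_to_zero);
    // multiply by the reciprocal of warp.z instead of dividing by it directly
    return intensity.sample(s, inFrag.warp.xy * (1.0 / inFrag.warp.z));
}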
Source: https://stackoverflow.com/questions/31925583/how-to-use-a-3x3-2d-transformation-in-a-vertex-fragment-shader-metal