Question
In the hope of reaching a broader audience, I am reposting here a question that I also asked on answers.opencv.org.
TL;DR: What relation should hold between the arguments passed to undistortPoints, findEssentialMat and recoverPose?
I have code like the following in my program, with K and dist_coefficients being the camera intrinsics and imgpts1, imgpts2 being matching feature points from two images.
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
Mat E = findEssentialMat(imgpts1, imgpts2, 1, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
I undistort the points before finding the essential matrix. The documentation states that one can pass the new camera matrix as the last argument. When it is omitted, the points are returned in normalized coordinates (between -1 and 1). In that case, I would expect to pass 1 for the focal length and (0,0) for the principal point to findEssentialMat, as the points are normalized. So I would think this is the way:
Possibility 1 (normalize coordinates)
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
Possibility 2 (do not normalize coordinates)
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
double focal = K.at<double>(0,0);
Point2d principalPoint(K.at<double>(0,2), K.at<double>(1,2));
Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);
However, I have found that I only get reasonable results when I tell undistortPoints that the old camera matrix is still valid (I guess in that case only the distortion is removed) and then pass arguments to findEssentialMat as if the points were normalized, which they are not.
Is this a bug, insufficient documentation, or user error?
Update
It might be that correctMatches should be called with (non-normalised) image/pixel coordinates and the fundamental matrix rather than E; this may be another mistake in my computation. The fundamental matrix can be obtained as F = K^-T * E * K^-1.
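For reference, a minimal sketch of that pixel-coordinate variant (assuming K and E are CV_64F cv::Mat objects and imgpts1, imgpts2 are still in pixel coordinates, i.e. undistortPoints was called with K as the new camera matrix) might look like this:
Mat Kinv = K.inv();
Mat F = Kinv.t() * E * Kinv;                            // F = K^-T * E * K^-1
correctMatches(F, imgpts1, imgpts2, imgpts1, imgpts2);  // refine matches against F in pixel coordinates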
Answer 1:
As it turns out, my data was seemingly off. Using manually labelled correspondences, I determined that Possibilities 1 and 2 are indeed correct, as one would expect.
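A quick way to sanity-check such correspondences (an illustrative sketch only, not part of the original answer, assuming imgpts1 and imgpts2 are std::vector<Point2d> in normalized coordinates) is to verify the epipolar constraint x2^T * E * x1 ≈ 0 for each pair:
// Sanity check: every good normalized correspondence should satisfy |x2^T * E * x1| ≈ 0.
double maxErr = 0.0;
for (size_t i = 0; i < imgpts1.size(); ++i) {
    Mat x1 = (Mat_<double>(3, 1) << imgpts1[i].x, imgpts1[i].y, 1.0);
    Mat x2 = (Mat_<double>(3, 1) << imgpts2[i].x, imgpts2[i].y, 1.0);
    double err = std::abs(Mat(x2.t() * E * x1).at<double>(0, 0));
    maxErr = std::max(maxErr, err);
}
std::cout << "max |x2^T E x1| = " << maxErr << std::endl;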
Source: https://stackoverflow.com/questions/31290414/undistortpoints-findessentialmat-recoverpose-what-is-the-relation-between-the