Question
I am running a simple test of OpenCV camera pose estimation. Given a photo and the same photo scaled up (zoomed in), I use them to detect features, calculate the essential matrix, and recover the camera poses.
Mat inliers;
Mat E = findEssentialMat(queryPoints, trainPoints, cameraMatrix1, cameraMatrix2,
                         FM_RANSAC, 0.9, MAX_PIXEL_OFFSET, inliers);
size_t inliersCount =
    recoverPose(E, queryGoodPoints, trainGoodPoints, cameraMatrix1, cameraMatrix2, R, T, inliers);
So when I specify the original image as the first one and the zoomed image as the second one, I get a translation T close to [0; 0; -1]. However, the second (zoomed) camera is virtually closer to the object than the first one. So if the Z-axis points from the image plane into the scene, the second camera should have a positive offset along the Z-axis. For the result I get, the Z-axis points from the image plane towards the camera, which together with the other axes (X pointing right, Y pointing down) forms a left-handed coordinate system. Is that true? Why does this result differ from the coordinate system illustrated here?
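(For reference, the standard OpenCV overloads of findEssentialMat and recoverPose take a single camera matrix and the RANSAC flag. Below is a minimal, self-contained sketch of the same zoom test against that API; the intrinsics and scene points are made up for illustration, and the synthetic second view is simply moved 1 unit forward along +Z. Real code would use matched features from a detector instead.)

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main() {
    // Made-up intrinsics shared by both views.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);

    // Synthetic scene points spread in front of camera 1 at varying depths.
    std::vector<cv::Point3f> scene;
    for (int i = 0; i < 40; ++i)
        scene.emplace_back((i % 8) * 0.3f - 1.0f, (i / 8) * 0.3f - 0.6f,
                           5.0f + (i % 5) * 0.4f);

    // Camera 1 at the origin; camera 2 moved 1 unit forward along +Z ("zoomed in"),
    // so its world-to-camera translation is [0; 0; -1].
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat t1 = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat t2 = (cv::Mat_<double>(3, 1) << 0, 0, -1);

    std::vector<cv::Point2f> queryPoints, trainPoints;
    cv::projectPoints(scene, rvec, t1, K, cv::noArray(), queryPoints);
    cv::projectPoints(scene, rvec, t2, K, cv::noArray(), trainPoints);

    cv::Mat inliers, R, T;
    cv::Mat E = cv::findEssentialMat(queryPoints, trainPoints, K,
                                     cv::RANSAC, 0.999, 1.0, inliers);
    cv::recoverPose(E, queryPoints, trainPoints, K, R, T, inliers);
    std::cout << "T = " << T << std::endl;  // close to [0; 0; -1], as in the question
    return 0;
}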
Answer 1:
According to the OpenCV documentation, the algorithm in the function recoverPose is based on the paper "Nistér, D. An efficient solution to the five-point relative pose problem, CVPR 2003." From the equations in Section 2 of that paper, we know it uses the basic triangle relationship (see the figure linked in the original answer):
x2 = R*x1 + t
Therefore, the translation t is the vector from cam2 to cam1, expressed in the cam2 frame: setting x1 = 0 shows that camera 1's center sits at t in camera 2's coordinates. This explains why you get an answer t close to [0; 0; -1].
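A quick numeric check makes this concrete (a sketch using the values from the question, with R assumed to be identity and t = [0; 0; -1]): camera 2's center in camera 1's frame comes out at +Z, so the coordinate system is still right-handed.

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Assumed recoverPose outputs for the zoom test: no rotation, t = [0; 0; -1].
    cv::Mat R = cv::Mat::eye(3, 3, CV_64F);
    cv::Mat t = (cv::Mat_<double>(3, 1) << 0, 0, -1);

    // From x2 = R*x1 + t: camera 1's center (x1 = 0) sits at t in cam2's frame,
    // while camera 2's center in cam1's frame is C2 = -R^T * t.
    cv::Mat C2 = -R.t() * t;
    std::cout << "C2 = " << C2 << std::endl;  // [0; 0; 1]: cam2 moved forward along +Z
    return 0;
}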
Answer 2:
It seems the recoverPose() function returns the first camera's transform relative to the second one (which was not intuitive to me, and is not clearly stated in the documentation). Under this assumption the test works correctly.
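In practice this means inverting the returned pose when you want the second camera expressed relative to the first. A minimal sketch (the helper name invertPose is hypothetical, not an OpenCV function):

#include <opencv2/core.hpp>

// Invert a relative pose: if recoverPose gives x2 = R*x1 + t (cam1 -> cam2),
// the reverse mapping is x1 = R^T*x2 - R^T*t, so the second camera's pose
// relative to the first is (R^T, -R^T*t).
void invertPose(const cv::Mat& R, const cv::Mat& t, cv::Mat& Rinv, cv::Mat& tinv) {
    Rinv = R.t();
    tinv = -Rinv * t;
}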
Source: https://stackoverflow.com/questions/37810218/is-the-recoverpose-function-in-opencv-is-left-handed