Question
I have two images captured by two cameras of the same make, placed some distance apart and viewing the same scene. I want to calculate the real-world rotation and translation between the two cameras. To achieve this, I first extracted the SIFT features of both images and matched them.
I now have the fundamental matrix as well as the homography matrix. However, I am unable to proceed further and am quite confused. Can anybody help me estimate the rotation and translation between the two cameras?
I'm using OpenCV for feature extraction, matching, and the homography calculation.
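For reference, here is a minimal sketch of the pipeline described above (OpenCV C++, assuming OpenCV 4.4 or later, where SIFT is available in the main features2d module; the image file names and variable names are illustrative, not from the original question):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);   // hypothetical file names
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);

    // 1. Detect SIFT keypoints and compute descriptors in both images
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // 2. Match descriptors and keep good matches via Lowe's ratio test
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);
    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            pts1.push_back(kp1[m[0].queryIdx].pt);
            pts2.push_back(kp2[m[0].trainIdx].pt);
        }

    // 3. Fundamental matrix and homography from the matched points
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);
    cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC, 3.0);
    return 0;
}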
Answer 1:
If you have the homography, then you also have the rotation. Once you have the homography, it is easy to recover the rotation and translation.
For example, if you are using OpenCV in C++:
#include <opencv2/opencv.hpp>
using namespace cv;

// param[in]  H     input 3x3 homography
// param[out] pose  output 3x4 camera pose [R|t]
void cameraPoseFromHomography(const Mat& H, Mat& pose)
{
    pose = Mat::eye(3, 4, CV_32FC1);       // 3x4 matrix, the camera pose

    Mat Hf;
    H.convertTo(Hf, CV_32FC1);             // cv::findHomography returns CV_64F; match the type of pose

    float norm1 = (float)norm(Hf.col(0));
    float norm2 = (float)norm(Hf.col(1));
    float tnorm = (norm1 + norm2) / 2.0f;  // Normalization value

    Mat p1 = Hf.col(0);                    // Header to the first column of H
    Mat p2 = pose.col(0);                  // Header to the first column of pose (empty)
    cv::normalize(p1, p2);                 // Normalize the rotation and copy the column into pose

    p1 = Hf.col(1);                        // Header to the second column of H
    p2 = pose.col(1);                      // Header to the second column of pose (empty)
    cv::normalize(p1, p2);                 // Normalize the rotation and copy the column into pose

    p1 = pose.col(0);
    p2 = pose.col(1);
    Mat p3 = p1.cross(p2);                 // Cross product of the first two rotation columns
    Mat c2 = pose.col(2);                  // Header to the third column of pose
    p3.copyTo(c2);                         // Third rotation column is the cross product of the first two

    pose.col(3) = Hf.col(2) / tnorm;       // Translation vector t: [R|t]'s last column
}
This function calculates the camera pose from the homography, which contains the rotation. For further theoretical background, follow this thread.
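A usage sketch, assuming the homography H was estimated with cv::findHomography on the matched point sets (variable names are illustrative):

cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC, 3.0);   // matched points from the two images
cv::Mat pose;                            // receives the 3x4 [R|t] matrix
cameraPoseFromHomography(H, pose);
cv::Mat R = pose(cv::Rect(0, 0, 3, 3));  // 3x3 rotation block
cv::Mat t = pose.col(3);                 // translation, recovered only up to scale

Note that decomposing a homography like this is only geometrically meaningful when the matched points lie on (roughly) a single plane. If the camera intrinsics are known, cv::decomposeHomographyMat (available since OpenCV 3.0) returns the candidate rotations, translations and plane normals, and for a general scene the essential matrix route (cv::findEssentialMat followed by cv::recoverPose) is the usual way to recover R and t between the two cameras.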
Source: https://stackoverflow.com/questions/12658839/multiple-view-geometry