Originally I have an image with a perfect circle grid, denoted as A. I add some lens distortion and a perspective transformation to it, and it becomes B. In camera calibration, A would be my destination image, and B would be my source image. Let's say I have all the circle center coordinates in both images, stored in stdPts and disPts.
//25 center pts in A
vector<Point2f> stdPts(25);
for (int i = 0; i <= 4; ++i) {
    for (int j = 0; j <= 4; ++j) {
        stdPts[i * 5 + j].x = 250 + i * 500;
        stdPts[i * 5 + j].y = 200 + j * 400;
    }
}
//25 center pts in B
vector<Point2f> disPts = FindCircleCenter();
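FindCircleCenter isn't shown in the question. A minimal sketch of how it might be implemented with cv::findCirclesGrid (an assumption on my part; any detector works, as long as it returns the 25 centers in the same order as stdPts):
//hypothetical implementation of FindCircleCenter, assuming the 5x5
//symmetric circle grid is detectable in B
vector<Point2f> FindCircleCenter()
{
    Mat imgB = imread("../B.jpg", IMREAD_GRAYSCALE);
    vector<Point2f> centers;
    if (!findCirclesGrid(imgB, Size(5, 5), centers, CALIB_CB_SYMMETRIC_GRID))
        centers.clear(); //caller must handle detection failure
    return centers;
}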
I want to generate an image C that is as close to A as possible, from the inputs B, stdPts and disPts. I tried to use the intrinsic and extrinsic parameters generated by cv::calibrateCamera. Here is my code:
//prepare object_points and image_points
//calibrateCamera needs 3D object points; the pattern is planar, so z = 0
vector<Point3f> stdPts3d;
for (const Point2f &p : stdPts)
    stdPts3d.push_back(Point3f(p.x, p.y, 0.f));
vector<vector<Point3f>> object_points;
vector<vector<Point2f>> image_points;
object_points.push_back(stdPts3d);
image_points.push_back(disPts);
//prepare distCoeffs rvecs tvecs
Mat distCoeffs = Mat::zeros(5, 1, CV_64F);
vector<Mat> rvecs;
vector<Mat> tvecs;
//prepare camera matrix
Mat intrinsic = Mat::eye(3, 3, CV_64F);
//solve calibration
calibrateCamera(object_points, image_points, Size(2500,2000), intrinsic, distCoeffs, rvecs, tvecs);
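As an aside (not in the original code), calibrateCamera returns the RMS reprojection error, which is a quick sanity check that this single-view calibration converged; the call above can simply capture it:
//optional: capture the RMS reprojection error returned by calibrateCamera
double rms = calibrateCamera(object_points, image_points, Size(2500, 2000),
                             intrinsic, distCoeffs, rvecs, tvecs);
cout << "RMS reprojection error: " << rms << endl;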
//apply undistortion
string inputName = "../B.jpg";
Mat imgB = imread(inputName);
cvtColor(imgB, imgB, CV_BGR2GRAY);
Mat tempImgC;
undistort(imgB, tempImgC, intrinsic, distCoeffs);
//apply perspective transform
//for the planar pattern (z = 0), K*[R|t] collapses to the homography K*[r1 r2 t]
double transData[] = { 0, 0, tvecs[0].at<double>(0), 0, 0, tvecs[0].at<double>(1), 0, 0, tvecs[0].at<double>(2) };
Mat translate3x3(3, 3, CV_64F, transData);
Mat rotation3x3;
Rodrigues(rvecs[0], rotation3x3);
Mat transRot3x3(3, 3, CV_64F);
rotation3x3.col(0).copyTo(transRot3x3.col(0));
rotation3x3.col(1).copyTo(transRot3x3.col(1));
translate3x3.col(2).copyTo(transRot3x3.col(2));
Mat imgC;
Mat matPerspective = intrinsic*transRot3x3;
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000));
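As a quick illustrative check (my addition, not part of the original post), matPerspective should map a pattern point of A to its position in B, up to the lens distortion that undistort removes:
//sanity check: push one pattern point of A through the homography
Mat p = (Mat_<double>(3, 1) << stdPts[0].x, stdPts[0].y, 1.0);
Mat q = matPerspective * p;
//after the perspective divide, (u, v) should land near disPts[0]
double u = q.at<double>(0) / q.at<double>(2);
double v = q.at<double>(1) / q.at<double>(2);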
//write
string outputName = "../C.jpg";
imwrite(outputName, imgC);
And here is the result image C, which doesn't deal with the perspective transformation at all.
So could someone teach me how to recover A? Thanks.
Added
OK guys, simple mistake. I had previously used warpPerspective to warp images, not to restore them. Since it worked that way, I didn't read the docs thoroughly. It turns out that when it is used for restoring, the flag WARP_INVERSE_MAP should be set. Change the function call to this, and that's it:
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000), WARP_INVERSE_MAP);
Here is the new result image C:
The only thing that concerns me now is the intermediary tempImgC, the image after undistort and before warpPerspective. In some tests with different artificial Bs, this image could turn out to be a scaled-up version of B with the distortion removed. That means a lot of information is lost in the outer area, leaving little for warpPerspective to work with. I'm thinking of scaling the image down in undistort and scaling it back up in warpPerspective, but I'm not sure yet how to calculate the correct scale so that all the information in B is preserved.
Added 2
The last piece of the puzzle is in place. Call getOptimalNewCameraMatrix before undistort to generate a new camera matrix that preserves all the information in B, and pass this new camera matrix to both undistort and warpPerspective.
Mat newIntrinsic=getOptimalNewCameraMatrix(intrinsic, distCoeffs, Size(2500, 2000), 1);
undistort(imgB, tempImgC, intrinsic, distCoeffs, newIntrinsic);
Mat matPerspective = newIntrinsic*transRot3x3;
warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000), WARP_INVERSE_MAP);
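For reference, the fourth argument of getOptimalNewCameraMatrix is the free scaling parameter alpha: with alpha = 1 all pixels of the source image are retained in the undistorted image (possibly with some black filler regions), while alpha = 0 crops the result to valid pixels only. That is why alpha = 1 is the right choice here for preserving everything in B.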
The result image C is the same in this case, but there is a big difference in other cases. For example, with another distorted image B1, the result image C1 computed without the new camera matrix loses information, while the C1 computed with the new camera matrix maintains the information in B1.
Added 3
I realized that since each frame captured by the camera needs processing, and efficiency is important, I can't afford to run undistort and warpPerspective on every frame. It's only reasonable to compute one map and use remap for each frame. Actually, there is a straightforward way to do that: projectPoints. Since it generates the map from the destination image to the source image directly, no intermediary image is needed, and the loss of information is avoided.
// ....
//solve calibration
//generate a 3-channel mat with each entry containing its own coordinates
Mat xyz(2000, 2500, CV_32FC3);
float *pxyz = (float*)xyz.data;
for (int y = 0; y < 2000; y++) {
    for (int x = 0; x < 2500; x++) {
        *pxyz++ = x;
        *pxyz++ = y;
        *pxyz++ = 0;
    }
}
// project coordinates of destination image,
// which generates the map from destination image to source image directly
xyz = xyz.reshape(0, 5000000); // one 3D point (x, y, 0) per row
Mat mapToSrc(5000000, 1, CV_32FC2);
projectPoints(xyz, rvecs[0], tvecs[0], intrinsic, distCoeffs, mapToSrc);
Mat maps[2];
mapToSrc = mapToSrc.reshape(0, 2000); // back to 2000 x 2500, 2 channels
split(mapToSrc, maps); // maps[0]: x coordinates, maps[1]: y coordinates
//apply map
remap(imgB, imgC, maps[0], maps[1], INTER_LINEAR);
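One further speedup (my suggestion, not in the original post): since the map is computed only once and reused on every frame, it can be converted to OpenCV's fixed-point map representation with convertMaps, which remap processes faster than float maps:
//optional: convert the float maps to fixed-point for a faster per-frame remap
Mat fixedMap1, fixedMap2;
convertMaps(maps[0], maps[1], fixedMap1, fixedMap2, CV_16SC2);
remap(imgB, imgC, fixedMap1, fixedMap2, INTER_LINEAR);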
Source: https://stackoverflow.com/questions/46679422/opencv-get-topdown-view-of-planar-pattern-by-using-intrinsic-and-extrinsic-from