Using estimateRigidTransform instead of findHomography


I've done it this way in the past:

cv::Mat R = cv::estimateRigidTransform(p1, p2, false);

// estimateRigidTransform returns an empty matrix if no transform could be found
if (R.cols == 0)
{
    continue;
}

// extend the 2x3 rigid transform to a 3x3 matrix so it can be used
// with warpPerspective / perspectiveTransform
cv::Mat H = cv::Mat(3, 3, R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

cv::Mat warped;
cv::warpPerspective(img1, warped, H, img1.size());

which is the same as what David Nilosek suggested: adding a [0 0 1] row at the end of the matrix.

This code warps the IMAGE with a rigid transformation.

If you want to warp/transform the POINTS, you must use the perspectiveTransform function with a 3x3 matrix ( http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform )

tutorial here:

http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html

or you can do it manually by looping over your vector and computing, for each point (a full loop sketch follows the snippet):

cv::Point2f result;
result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
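
Wrapped in a loop over the whole vector, that could look like this (a rough sketch, assuming p1 holds the input points and R is the 2x3 matrix returned by estimateRigidTransform):

std::vector<cv::Point2f> transformed;
transformed.reserve(p1.size());
for (const cv::Point2f& point : p1)
{
    cv::Point2f result;
    // apply the 2x3 affine matrix to each point by hand
    result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
    result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
    transformed.push_back(result);
}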

hope that helps.

Remark: I didn't test the manual code, but it should work. No conversion to a 3x3 matrix for perspectiveTransform is needed there!

edit: this is the full (tested) code:

// points
std::vector<cv::Point2f> p1;
p1.push_back(cv::Point2f(0,0));
p1.push_back(cv::Point2f(1,0));
p1.push_back(cv::Point2f(0,1));

// simple translation from p1 for testing:
std::vector<cv::Point2f> p2;
p2.push_back(cv::Point2f(1,1));
p2.push_back(cv::Point2f(2,1));
p2.push_back(cv::Point2f(1,2));

cv::Mat R = cv::estimateRigidTransform(p1,p2,false);

// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);

for(unsigned int i=0; i<result.size(); ++i)
    std::cout << result[i] << std::endl;

which gives output as expected:

[1, 1]
[2, 1]
[1, 2]

The affine transformations (the result of cv::estimateRigidTransform) are applied to the image with the function cv::warpAffine.
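
For example (a minimal sketch, reusing img1 and the 2x3 matrix R from the snippets above), the 3x3 extension can be skipped entirely when you only want to warp the image:

cv::Mat warpedAffine;
// warpAffine takes the 2x3 matrix directly, no [0 0 1] row needed
cv::warpAffine(img1, warpedAffine, R, img1.size());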

The 3x3 homography form of a rigid transform is:

 a1  a2  b1
-a2  a1  b2
  0   0   1

So when using estimateRigidTransform you could add [0 0 1] as the third row, if you want the 3x3 matrix.
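
A compact alternative to copying the elements by hand (a small sketch, assuming R is the 2x3, double-precision result of estimateRigidTransform):

cv::Mat bottomRow = (cv::Mat_<double>(1,3) << 0, 0, 1);
cv::Mat H;
// stack the [0 0 1] row under R to get the 3x3 matrix
cv::vconcat(R, bottomRow, H);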
