OpenCV extrinsic camera from feature points


I was confronted with the same problem as you in OpenCV. I had a stereo image pair and I wanted to compute the external parameters of the cameras and the world coordinates of all observed points. This problem has been treated here:

Berthold K. P. Horn. Relative Orientation Revisited. Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700

However, I wasn't able to find a suitable implementation of this problem (perhaps you will find one). Due to time limitations I did not have time to understand all the maths in this paper and implement it myself, so I came up with a quick-and-dirty solution that works for me. I will explain what I did to solve it:

Assume we have two cameras, where the first camera has external parameters RT = Matx::eye(). Now make a guess about the rotation R of the second camera. For every pair of image points observed in both images, we compute the directions of their corresponding rays in world coordinates and store them in a 2D array dirs (EDIT: the internal camera parameters are assumed to be known). We can do this since we assume that we know the orientation of every camera. Now we build an overdetermined linear system AC = 0, where C is the centre of the second camera. I provide you with the function to compute A:

// Build the matrix A of the linear system AC = 0. Each row is the (normalized)
// cross product of a ray direction from the first camera with the corresponding
// ray direction from the second camera rotated by the guessed R; the camera
// centre C must be perpendicular to every row (coplanarity of each ray pair).
Mat buildA(Matx<double, 3, 3> &R, Array<Vec3d, 2> dirs)
{
    CV_Assert(dirs.size(0) == 2);
    int pointCount = dirs.size(1);
    Mat A(pointCount, 3, DataType<double>::type);
    Vec3d *a = (Vec3d *)A.data;
    for (int i = 0; i < pointCount; i++)
    {
        // Row i: cross product of the two corresponding ray directions
        a[i] = dirs(0, i).cross(toVec(R*dirs(1, i)));
        double length = norm(a[i]);
        if (length == 0.0)
        {
            // Degenerate pair: the two rays are parallel
            CV_Assert(false);
        }
        else
        {
            a[i] *= (1.0/length); // normalize the row to unit length
        }
    }
    return A;
}
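
As a side note on how dirs can be filled (this is my own illustrative sketch, not part of the original answer): with the intrinsics known, a pixel is back-projected to a ray direction by multiplying with the inverse camera matrix. The helper name pixelToRay is hypothetical.

// Sketch: back-project a pixel to a unit ray direction in its camera frame,
// assuming the intrinsic matrix K is known and the point has already been
// undistorted (e.g. with cv::undistortPoints).
Vec3d pixelToRay(const Matx<double, 3, 3> &K, const Point2d &p)
{
    Vec3d ray = K.inv() * Vec3d(p.x, p.y, 1.0); // direction in camera coordinates
    ray *= (1.0/norm(ray));                     // normalize to unit length
    return ray;
}

Judging from buildA, dirs(0, i) would hold such rays for the first camera directly (its frame coincides with the world frame), while dirs(1, i) holds the camera-2 rays, which buildA then rotates by the current guess R.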

Then calling cv::SVD::solveZ(A) will give you the least-squares solution of norm 1 to this system. This way, you obtain the rotation and translation of the second camera. However, since I just made a guess about the rotation of the second camera, I make several guesses about its rotation (parameterized using a 3x1 vector omega, from which I compute the rotation matrix using cv::Rodrigues) and then refine this guess by solving the system AC = 0 repeatedly in a Levenberg-Marquardt optimizer with a numerical Jacobian. It works for me, but it is a bit dirty, so if you have time I encourage you to implement what is explained in the paper.
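
If you do not have a Levenberg-Marquardt implementation at hand, the same idea can be prototyped with a plain coarse search over the rotation guesses. The sketch below is my own simplification of that step, not the original optimizer: candidateOmegas is a hypothetical std::vector<Vec3d> of axis-angle guesses, and buildA and dirs are the function and array from above.

// Sketch: coarse search over rotation guesses, as a stand-in for the LM refinement.
// For each candidate omega, build A, solve AC = 0 and keep the best guess.
double bestErr = DBL_MAX;
Vec3d bestOmega, bestC;
for (size_t k = 0; k < candidateOmegas.size(); k++)
{
    Matx<double, 3, 3> R;
    Rodrigues(candidateOmegas[k], R);   // axis-angle vector -> rotation matrix
    Mat A = buildA(R, dirs);
    Vec3d c;
    Mat cMat(c, false);
    SVD::solveZ(A, cMat);               // camera-2 centre at distance 1 from camera 1
    Mat residues = A*cMat;
    double err = norm(residues);        // how far the ray pairs are from intersecting
    if (err < bestErr)
    {
        bestErr = err;
        bestOmega = candidateOmegas[k];
        bestC = c;
    }
}

The best omega/centre pair can then serve as the starting point for the proper LM refinement described above.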

EDIT:

Here is the routine in the Levenberg-Marquardt optimizer for evaluating the vector of residues:

void Stereo::eval(Mat &X, Mat &residues, Mat &weights)
{
    Matx<double, 3, 3> R2Ref = getRot(X); // Map the 3x1 parameter vector omega to a rotation matrix
    Mat A = buildA(R2Ref, _dirs);         // Compute the A matrix that measures the distance between ray pairs
    Vec3d c;
    Mat cMat(c, false);
    SVD::solveZ(A, cMat);                 // Find the optimal centre of the second camera at distance 1 from the first camera
    residues = A*cMat;                    // Compute the output vector whose length we are minimizing
    weights.setTo(1.0);
}
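
One detail that is easy to miss: after convergence you have the rotation guess R (which, as used in buildA, rotates camera-2 directions into the camera-1/world frame) and the centre C with norm 1, so the baseline is only known up to scale. Turning this pair into OpenCV-style world-to-camera extrinsics is a small step; here is a minimal sketch of my own under that convention:

// Sketch: turn the optimized (R, C) pair into OpenCV-style extrinsics for the
// second camera, assuming R maps camera-2 directions into the camera-1/world
// frame and C is the camera-2 centre in that frame (|C| = 1, scale unknown).
Matx<double, 3, 3> R2 = R.t(); // world -> camera-2 rotation
Vec3d t2 = R2 * C;             // x_cam2 = R2 * (X - C) = R2 * X + t2, so...
t2 *= -1.0;                    // ...the translation is -R2 * C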

By the way, I searched a little more on the internet and found some other code that could be useful for computing the relative orientation between cameras. I haven't tried any of it yet, but it looks promising:

http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html

http://lear.inrialpes.fr/people/triggs/src/

http://www.maths.lth.se/vision/downloads/

Are these static cameras which you wish to calibrate for future use as a stereo pair? In that case you would want to use the cv::stereoCalibrate() function. OpenCV ships with some sample code, including stereo_calib.cpp, which may be worth investigating.
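
If that is your situation, a minimal calling sketch looks roughly like this (my own sketch, not taken from the sample): objectPoints holds the known 3D pattern coordinates per view, imagePoints1/imagePoints2 the matched corners from each camera, and K1, D1, K2, D2 are per-camera intrinsics and distortion coefficients estimated beforehand, e.g. with cv::calibrateCamera.

// Sketch: estimate the rotation R and translation T between two fixed cameras
// with cv::stereoCalibrate, keeping the previously calibrated intrinsics fixed
// (the default CALIB_FIX_INTRINSIC behaviour).
Mat R, T, E, F;
double rms = stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                             K1, D1, K2, D2, imageSize,
                             R, T, E, F);

The returned RMS reprojection error is a useful sanity check, and the resulting R and T can be passed on to cv::stereoRectify.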
