Determine extrinsic camera with opencv to opengl with world space object


Question


I'm using OpenCV and openFrameworks (i.e. OpenGL) to calculate a camera (world transform and projection matrices) from an image (and later, several images for triangulation).

For the purposes of OpenCV, the "floor plan" becomes the object (i.e. the chessboard), with 0,0,0 the center of the world. The world/floor positions are known, so I need to get the projection information (distortion coefficients, FOV, etc.) and the extrinsic coordinates of the camera.

I have mapped the view-positions of these floor-plan points onto my 2D image in normalised view-space ([0,0] is top-left, [1,1] is bottom-right).

The object (floor plan / world points) is on the XZ plane with -Y up, so I convert to the XY plane (not sure here whether Z-up is negative or positive...) for OpenCV, as it needs the points to be planar:

//  swap the Y and Z axes so the XZ floor-plan points become planar in XY
ofMatrix4x4 gWorldToCalibration(
    1, 0, 0, 0,
    0, 0, 1, 0,
    0, 1, 0, 0,
    0, 0, 0, 1
    );

I pass 1,1 as the ImageSize to calibrateCamera. The flags are CV_CALIB_FIX_ASPECT_RATIO | CV_CALIB_FIX_K4 | CV_CALIB_FIX_K5. calibrateCamera runs successfully and gives me a low error (usually around 0.003).
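Roughly, the call looks like this (a sketch with placeholder names objectPoints, imagePoints, cameraMatrix and distCoeffs rather than my exact code; ObjectRotations/ObjectTranslations are the per-view outputs used further down):

//  one vector of points per view: objectPoints holds the planar floor-plan
//  markers (z = 0 after the axis swap above), imagePoints the matching 2D points
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints;
cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> ObjectRotations, ObjectTranslations;

double rms = cv::calibrateCamera(
    objectPoints, imagePoints,
    cv::Size(1, 1),     //  image size (but see Update 2 below)
    cameraMatrix, distCoeffs,
    ObjectRotations, ObjectTranslations,
    CV_CALIB_FIX_ASPECT_RATIO | CV_CALIB_FIX_K4 | CV_CALIB_FIX_K5 );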

Using calibrationMatrixValues I get a sensible FOV, usually around 50 degrees, so I'm pretty sure the intrinsic properties are correct.
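Roughly, that check looks like this (a sketch; the physical aperture sizes are passed as 0 since I only care about the angles):

double fovX, fovY, focalLength, aspectRatio;
cv::Point2d principalPoint;
cv::calibrationMatrixValues( cameraMatrix, cv::Size(1, 1), 0, 0,
                             fovX, fovY, focalLength, principalPoint, aspectRatio );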

Now to calculate the extrinsic world-space transform of the camera... I don't believe I need to use solvePnP, as I only have one object (although I tried all this before with it and came back with the same results).

//  rot and trans output...
cv::Mat& RotationVector = ObjectRotations[0];
cv::Mat& TranslationVector = ObjectTranslations[0];

//  convert rotation to matrix
cv::Mat expandedRotationVector;
cv::Rodrigues(RotationVector, expandedRotationVector);

//  merge translation and rotation into a model-view matrix
cv::Mat Rt = cv::Mat::zeros(4, 4, CV_64FC1);
for (int y = 0; y < 3; y++)
   for (int x = 0; x < 3; x++) 
        Rt.at<double>(y, x) = expandedRotationVector.at<double>(y, x);
Rt.at<double>(0, 3) = TranslationVector.at<double>(0, 0);
Rt.at<double>(1, 3) = TranslationVector.at<double>(1, 0);
Rt.at<double>(2, 3) = TranslationVector.at<double>(2, 0);
Rt.at<double>(3, 3) = 1.0;

Now I've got a rotation & translation matrix, but it's column-major (I believe so, because the object is totally skewed if I don't transpose, and the code above looks column-major to me).

//  convert to openframeworks matrix AND transpose at the same time
ofMatrix4x4 ModelView;
for ( int r=0;  r<4;    r++ )
    for ( int c=0;  c<4;    c++ )
        ModelView(r,c) = Rt.at<double>( c, r ); 

Then I swap my planes back to my coordinate space (Y up) using the inverse of the matrix from before.

//  swap y & z planes so y is up
ofMatrix4x4 gCalibrationToWorld = gWorldToCalibration.getInverse();
ModelView *= gCalibrationToWorld;

Not sure if I NEED to do this... I didn't negate the planes when I put them INTO the calibration...

//  invert y and z planes for -/+ differences between opencv and opengl
ofMatrix4x4 InvertHandednessMatrix(
    1,  0,  0, 0,
    0,  -1, 0, 0,
    0,  0,  -1, 0,
    0,  0,  0,  1
    );
ModelView *= InvertHandednessMatrix;

And finally, the model view is object-relative-to-camera, and I want to invert it to be camera-relative-to-object (0,0,0).

ModelView = ModelView.getInverse();

This results in a camera in the wrong place, and rotated wrong. It's not too far off: the camera is on the right side of the Y plane, the translation isn't wildly off, and I think it's the right way up... just not correct yet. The paint-drawn blue circle is where I expect the camera to be.

I've gone through loads of SO answers and the documentation a dozen times, but haven't quite found anything right. I'm pretty sure I've covered everything I need space-conversion-wise, but maybe I've missed something obvious? Or am I doing something in the wrong order?

Update 1 - world-space plane: I've changed my world-space floor plane to XY (Z up) to match the input for OpenCV (gWorldToCalibration is now an identity matrix). The rotation is still wrong and the projection output is the same, but I think the translation is correct now (it's certainly on the correct side of the markers).

Update 2 - real image size: I'm playing with the image size going into the camera calibration. Seeing as I'm using 1,1, which is normalised, but the imageSize parameter is in integers, I thought this might be significant... and I guess it is (the red box is where the projected view-space points intersect the z=0 floor plane). Without any distortion correction, here is the result (the only thing changed is imageSize from 1,1 to 640,480; I multiply my normalised input-view-space coords by 640,480 too).
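Roughly, the change is this (a sketch, reusing the placeholder names from the calibration sketch above):

//  calibrate against a real pixel size instead of 1,1 and scale the
//  normalised [0..1] view-space points up to match
cv::Size imageSize( 640, 480 );
for ( auto& view : imagePoints )
    for ( auto& point : view )
    {
        point.x *= imageSize.width;
        point.y *= imageSize.height;
    }
//  ...then pass imageSize to calibrateCamera instead of cv::Size(1,1)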

I'm going to try and add distortion correction to see if it lines up perfectly...

Answer 1:


The first thing to check is to verify that the markers are reprojected correctly onto the image given the estimated intrinsic and extrinsic camera matrices. Then you can find the camera position in the global frame and see if it agrees with the marker positions (use the coordinate system as in OpenCV). Once this is done, there are not many things that can go wrong. Since you want the points to lie on the XZ plane, you need just a single transformation of coordinates. As I see it, you do that with the gWorldToCalibration matrix. Then apply the transformation both to the markers and to the camera position, and verify that the markers are in the right place; the camera position will then also be correct (unless something went wrong with the handedness of the coordinate system, but that can easily be corrected).
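In code, that check could look something like this (just a sketch reusing the placeholder names from the calibration sketch in the question, not your actual code):

//  reproject the known markers with the estimated intrinsics/extrinsics
//  and compare against the measured image points
std::vector<cv::Point2f> reprojected;
cv::projectPoints( objectPoints[0], ObjectRotations[0], ObjectTranslations[0],
                   cameraMatrix, distCoeffs, reprojected );

for ( size_t i = 0; i < reprojected.size(); i++ )
{
    cv::Point2f diff = reprojected[i] - imagePoints[0][i];
    double error = std::sqrt( diff.x * diff.x + diff.y * diff.y );
    std::cout << "marker " << i << " reprojection error: " << error << std::endl;
}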




Answer 2:


For now at least I'm treating my Update 2 (ImageSize must be something bigger than 1,1) as the fix, as it produced results much, much more like what I was expecting.

I might have things upside down at the moment, but this is producing pretty good results.




Answer 3:


I think you should not take the inverse of gWorldToCalibration

ofMatrix4x4 gCalibrationToWorld = gWorldToCalibration.getInverse();

Here I posted code that does more or less what you want: OpenCV to OpenGL COS. It's in C, but it should be similar in C++.



Source: https://stackoverflow.com/questions/15530168/determine-extrinsic-camera-with-opencv-to-opengl-with-world-space-object
