Convert between MATLAB stereoParameters and OpenCV stereoRectify stereo calibration

南旧 2021-02-03 15:07

I wish to convert a MATLAB stereoParameters structure to intrinsics and extrinsics matrices to use in OpenCV's stereoRectify.

If I understood http://docs.opencv.org/2.4

3 Answers
  • 2021-02-03 15:56

    You can use the stereoRectify function in OpenCV to obtain R1, R2, P1, P2, Q given cameraMatrix1, cameraMatrix2, distCoeffs1, distCoeffs2, R & T.

    In C++ it would be:

        cv::Mat R1, R2, P1, P2, Q;
        cv::Rect validRoi[2];
        cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
                          imSize, R, T, R1, R2, P1, P2, Q,
                          CV_CALIB_ZERO_DISPARITY, 0, imSize,
                          &validRoi[0], &validRoi[1]);

    One important thing to note is that the matrices cameraMatrix1, cameraMatrix2 and R need to be transposed when copying them from their MATLAB counterparts.

    (I put this in bold as it cost me two days to figure out why my rectification wasn't working when I converted it from MATLAB to C++ OpenCV.)
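
    As an illustration of that point, here is a minimal Python/NumPy sketch of the transposition step (the numbers are made up; in practice they come from MATLAB's cameraParameters.IntrinsicMatrix):

        import numpy as np

        # MATLAB stores the intrinsics for the row-vector convention, so the
        # principal point ends up in the last *row* (example values only).
        K_matlab = np.array([[1200.0,    0.0, 0.0],
                             [   0.0, 1200.0, 0.0],
                             [ 640.0,  360.0, 1.0]])

        cameraMatrix = K_matlab.T   # OpenCV's column-vector convention: the
        print(cameraMatrix)         # principal point now sits in the last column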

  • 2021-02-03 16:04

    As you already found out, both camera matrices need to be transposed, because MATLAB and OpenCV use different notations: MATLAB's IntrinsicMatrix is written for the row-vector convention ([x y 1] = [X Y Z] * K), whereas OpenCV's camera matrix is written for the column-vector convention (x = K * X), so one is the transpose of the other.

    The same applies for the rotation matrix and the translation vector between the cameras: stereoParams.RotationOfCamera2 and stereoParams.TranslationOfCamera2 need to be transposed in order to obtain OpenCV's R matrix and T vector.

    (Quick validation: R should be close to an identity matrix if the cameras are almost parallel and the first element of T should match your baseline between the cameras in millimeters.)
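
    As a sketch, the extrinsics conversion in Python/NumPy might look like this (the numeric values below are illustrative stand-ins for what you would copy out of stereoParams.RotationOfCamera2 and stereoParams.TranslationOfCamera2):

        import numpy as np

        # Values as exported from MATLAB (example numbers only)
        RotationOfCamera2 = np.array([[ 0.9999,  0.0023, -0.0104],
                                      [-0.0022,  0.9999,  0.0069],
                                      [ 0.0104, -0.0069,  0.9999]])
        TranslationOfCamera2 = np.array([[-119.9, 0.3, 0.6]])   # 1x3 row vector, mm

        R = RotationOfCamera2.T        # MATLAB -> OpenCV: transpose
        T = TranslationOfCamera2.T     # becomes a 3x1 column vector

        # Quick validation, as described above: R should be close to identity for
        # near-parallel cameras, and T[0] roughly the baseline in millimeters.
        print(np.round(R, 3), T.ravel())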

    OpenCV's distortion coefficient vector is composed of MATLAB's two radial distortion coefficients followed by the two tangential distortion coefficients, i.e. (k1, k2, p1, p2); if a third radial coefficient was estimated in MATLAB, it is appended as the fifth entry.
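
    For example (a small sketch with made-up coefficients; MATLAB exposes them as RadialDistortion and TangentialDistortion):

        import numpy as np

        radial = np.array([-0.35, 0.12])          # MATLAB RadialDistortion [k1 k2] (optionally [k1 k2 k3])
        tangential = np.array([1.2e-3, -4.5e-4])  # MATLAB TangentialDistortion [p1 p2]

        # OpenCV expects (k1, k2, p1, p2[, k3]):
        distCoeffs = np.hstack([radial[:2], tangential, radial[2:]])
        print(distCoeffs)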

    With these conversions in place, I was able to compute correct R1, R2, P1, P2 and Q using

        R1, R2, P1, P2, Q, leftROI, rightROI = cv2.stereoRectify(
            leftCamMatrix, leftDistCoeffs, rightCamMatrix, rightDistCoeffs,
            imageSize, R, T, None, None, None, None, None,
            cv2.CALIB_ZERO_DISPARITY, 0)

    Note that for data type reasons, the disparity values obtained using OpenCV's stereo matcher need to be divided by 16 and the coordinates in the 3d point cloud returned by cv2.reprojectImageTo3D need to be divided by 64 to obtain metric values.

    (Quick validation: when grabbing the coordinates of the same object in the rectified left and right images, the y-coordinates should be almost equal, and you should be able to compute the object's distance in meters as f*B/(x_right-x_left)/1000, with f being the combined focal length of the virtual rectified camera (found in Q) and B the baseline in millimeters.)
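
    A rough sketch of that last step (standard OpenCV usage; the image file names and matcher settings are placeholders, Q is the matrix returned by cv2.stereoRectify above, and the exact scale factors depend on how you handle the matcher's fixed-point output):

        import cv2
        import numpy as np

        left = cv2.imread('left_rect.png', cv2.IMREAD_GRAYSCALE)    # already rectified
        right = cv2.imread('right_rect.png', cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixel disparities

        points = cv2.reprojectImageTo3D(disparity, Q)  # same units as T (here: mm)
        points_m = points / 1000.0                     # convert to meters

        # Distance sanity check: f and the baseline B are encoded in Q, so
        # distance ~ f * B / disparity / 1000 for a picked pixel.
        f, B = Q[2, 3], 1.0 / abs(Q[3, 2])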

  • 2021-02-03 16:07

    https://stackoverflow.com/a/28317841 gives the formula for the Q matrix: Tx comes from the translation vector T, cx, cy and cx' come from the camera matrices, and f is some sensible combination of their x and y focal lengths.
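
    For reference, a sketch of how that Q matrix can be assembled by hand, following the form documented for cv::stereoRectify (the numeric values are placeholders; cx, cy, f, cx' and Tx are assumed to come from the camera matrices and T as described above):

        import numpy as np

        cx, cy, f = 640.0, 360.0, 1200.0   # left rectified camera
        cx_prime = 640.0                   # right rectified camera; equals cx with CALIB_ZERO_DISPARITY
        Tx = -120.0                        # baseline, same units as T (e.g. mm)

        Q = np.array([[1.0, 0.0,  0.0,      -cx],
                      [0.0, 1.0,  0.0,      -cy],
                      [0.0, 0.0,  0.0,        f],
                      [0.0, 0.0, -1.0 / Tx, (cx - cx_prime) / Tx]])
        print(Q)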

    Still dunno how to get P1, P2, R1 and R2 though. Anybody?
