Question
We have an ELP 1.0 Megapixel Dual Lens USB stereo camera and we are trying to calibrate it using OpenCV 3.1 in C++. However, the result of the calibration is totally unusable, because calling stereoRectify completely twists the image. This is what we do:
Find the calibration (chessboard) pattern in both cameras; the chessboard size is 5x7, and the result is almost the same regardless of the number of images taken:
bool bFound = findChessboardCorners(img[k], boardSize, corners, CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
cornerSubPix(img[k], corners, Size(11, 11), Size(-1, -1), TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01));
All chessboards are correctly detected, which is verified using
drawChessboardCorners(img[k], boardSize, corners, bFound);
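The objectPoints passed to calibrateCamera and stereoCalibrate below are never shown in the question; this is a minimal sketch of how they are typically built for a 5x7 board (squareSize is a hypothetical value, use the real edge length of your squares):
// One set of 3D chessboard corners, z = 0, spaced by the (assumed) square size.
const float squareSize = 25.0f;               // assumed value in mm, replace with the real one
std::vector<Point3f> boardCorners;
for (int y = 0; y < boardSize.height; ++y)
    for (int x = 0; x < boardSize.width; ++x)
        boardCorners.push_back(Point3f(x * squareSize, y * squareSize, 0.0f));
// The same set is repeated once per accepted image pair.
std::vector<std::vector<Point3f> > objectPoints(imagePoints[0].size(), boardCorners);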
Then we calibrate each camera separately (this step does not seem to be important for the stereo calibration itself, but it lets us verify each camera on its own):
calibrateCamera(objectPoints, imagePoints[k], Size(320, 240), cameraMatrix[k], distCoeffs[k], rvecs, tvecs, 0);
Then we do the stereo calibration:
stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1], cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1],
Size(320, 240), R, T, E, F, CALIB_USE_INTRINSIC_GUESS);
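One diagnostic that is not in the original post: the fundamental matrix F returned by stereoCalibrate can be used to compute an average epipolar error over all detected corners (adapted from OpenCV's stereo_calib.cpp sample). Roughly speaking, an average well above one pixel suggests bad correspondences or a bad calibration. A sketch, reusing the variables from above:
// Check how well the corner pairs satisfy the epipolar constraint x2^T * F * x1 = 0.
double err = 0;
int npoints = 0;
std::vector<Vec3f> lines[2];
for (size_t i = 0; i < objectPoints.size(); i++)
{
    int npt = (int)imagePoints[0][i].size();
    Mat imgpt[2];
    for (int k = 0; k < 2; k++)
    {
        imgpt[k] = Mat(imagePoints[k][i]);
        undistortPoints(imgpt[k], imgpt[k], cameraMatrix[k], distCoeffs[k], Mat(), cameraMatrix[k]);
        computeCorrespondEpilines(imgpt[k], k + 1, F, lines[k]);
    }
    for (int j = 0; j < npt; j++)
    {
        // Distance of each point to the epipolar line induced by its counterpart.
        double errij =
            fabs(imagePoints[0][i][j].x * lines[1][j][0] +
                 imagePoints[0][i][j].y * lines[1][j][1] + lines[1][j][2]) +
            fabs(imagePoints[1][i][j].x * lines[0][j][0] +
                 imagePoints[1][i][j].y * lines[0][j][1] + lines[0][j][2]);
        err += errij;
    }
    npoints += npt;
}
std::cout << "average epipolar error = " << err / npoints << std::endl;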
Compute the rectification transform
stereoRectify(cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1], Size(320, 240), R, T, R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 1, Size(320, 240), &validRoI[0], &validRoI[1]);
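As a side note (not in the original post), the two validRoI rectangles filled in by stereoRectify are worth printing: if the calibration or the alpha value is off, they tend to collapse to tiny or empty regions. A small sketch, assuming validRoI is declared as Rect validRoI[2]:
// Print the valid pixel regions of the rectified images.
for (int k = 0; k < 2; ++k)
    std::cout << "validRoI[" << k << "]: " << validRoI[k].width << "x" << validRoI[k].height
              << " at (" << validRoI[k].x << ", " << validRoI[k].y << ")" << std::endl;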
Initialize maps for remap
Mat rmap[2][2];
initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1, Size(FRAME_WIDTH, FRAME_HEIGHT), CV_16SC2, rmap[0][0], rmap[0][1]);
initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, P2, Size(FRAME_WIDTH, FRAME_HEIGHT), CV_16SC2, rmap[1][0], rmap[1][1]);
...
remap(img, rimg, rmap[k][0], rmap[k][1], INTER_LINEAR);
imshow("Canvas", rimg);
The result is a totally distorted image. As I said at the beginning, all calibration/chessboard patterns are correctly detected, and if we don't call the stereoRectify function, the undistorted images (after remap) look perfect. The problem appears when we call stereoRectify.
Is there something we missed? The number of calibration images does not seem to have any effect (sometimes taking 2 images gives a better result than 10 images, but it is still not usable).
This is an example of the calibration pattern. We take several different orientations:
This is the result of the calibration if we do not call stereoRectify:
This is the wrong result when we call stereoRectify (and it usually gets much worse):
Thanks in advance for any help on what could be wrong.
Answer 1:
Hey, have you tried changing the value of the alpha parameter in the stereoRectify function? I remember that I once obtained such results too, and changing alpha to 0 did the trick for me. Please let me know the results you obtain with alpha = -1, alpha = 0.5 and alpha = 0.
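For reference, alpha is the free scaling parameter passed right after the flags in the question's stereoRectify call (it was 1 there); a minimal sketch of trying the suggested values:
// alpha = 0 zooms in so that only valid (non-black) pixels remain after
// rectification; alpha = 1 keeps all source pixels; alpha = -1 lets OpenCV
// choose a default scaling. Try 0, 0.5 and -1 as suggested above.
double alpha = 0;  // then 0.5, then -1
stereoRectify(cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1],
              Size(320, 240), R, T, R1, R2, P1, P2, Q,
              CALIB_ZERO_DISPARITY, alpha, Size(320, 240),
              &validRoI[0], &validRoI[1]);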
Answer 2:
Just to summarize, in case someone needs similar help, this is what I do to get the best-looking result:
Upscale the chessboard image before corner detection:
Mat resized;
resize(img[k], resized, Size(FRAME_WIDTH * 2, FRAME_HEIGHT * 2), 0.0, 0.0, INTER_LINEAR);
findChessboardCorners(resized, boardSize, corners, CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE);
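If sub-pixel refinement is still wanted (the question uses cornerSubPix), it can be run on the upscaled image at this point, before the corners are scaled back down; a minimal sketch:
// Refine the corner locations on the upscaled image; the downscale step
// below then brings them back to the original image coordinates.
cornerSubPix(resized, corners, Size(11, 11), Size(-1, -1),
             TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0.01));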
Downscale the detected corners:
for (size_t i = 0; i < corners.size(); ++i) {
corners[i].x /= 2.0;
corners[i].y /= 2.0;
}
Calibrate each camera separately:
double rms = calibrateCamera(objectPoints, imagePoints[k], Size(FRAME_WIDTH, FRAME_HEIGHT), cameraMatrix[k], distCoeffs[k], rvecs, tvecs,
CALIB_FIX_PRINCIPAL_POINT | CALIB_FIX_ASPECT_RATIO | CALIB_ZERO_TANGENT_DIST | CALIB_RATIONAL_MODEL | CALIB_FIX_K3 | CALIB_FIX_K4 | CALIB_FIX_K5);
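Printing the returned RMS reprojection error is a quick sanity check (as a rule of thumb, sub-pixel values are what you want; this threshold is not from the original post):
// rms is the RMS reprojection error returned by calibrateCamera above, in pixels.
std::cout << "Camera " << k << " RMS reprojection error: " << rms << std::endl;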
Calibrate stereo camera:
stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1], cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1],
Size(FRAME_WIDTH, FRAME_HEIGHT), R, T, E, F,
CALIB_FIX_INTRINSIC | CALIB_SAME_FOCAL_LENGTH,
TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 0));
Compute rectification (with alpha = 0.0):
stereoRectify(cameraMatrix[0], distCoeffs[0], cameraMatrix[1], distCoeffs[1], Size(FRAME_WIDTH, FRAME_HEIGHT),
R, T, R1, R2, P1, P2, Q,
CALIB_ZERO_DISPARITY, 0.0, Size(FRAME_WIDTH, FRAME_HEIGHT), &validRoI[0], &validRoI[1]);
These are the calibration result matrices:
Intrinsics:
M1: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 2.6187262304487734e+02, 0., 1.5950000000000000e+02, 0.,
2.6187262304487734e+02, 1.1950000000000000e+02, 0., 0., 1. ]
D1: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [ -4.6768074176991381e-01, 2.0221327568191746e-01, 0., 0., 0. ]
M2: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 2.6400975025525213e+02, 0., 1.5950000000000000e+02, 0.,
2.6400975025525213e+02, 1.1950000000000000e+02, 0., 0., 1. ]
D2: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [ -4.5713211677198845e-01, 2.8855737500717565e-01, 0., 0., 0. ]
Extrinsics:
R: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 9.9963073433190641e-01, 4.6310793035473068e-04,
2.7169477545556639e-02, -6.9475632716349024e-04,
9.9996348636555088e-01, 8.5172324905818230e-03,
-2.7164541091274301e-02, -8.5329635354663789e-03,
9.9959455592785362e-01 ]
T: !!opencv-matrix
rows: 3
cols: 1
dt: d
data: [ -6.1830090720273198e+01, 1.6774590574449604e+00,
1.8118983433925613e+00 ]
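These blocks are in the YAML format produced by cv::FileStorage, so they can be written and read back directly; a minimal sketch (the file names intrinsics.yml and extrinsics.yml are just examples):
// Save the calibration results...
FileStorage fs("intrinsics.yml", FileStorage::WRITE);
fs << "M1" << cameraMatrix[0] << "D1" << distCoeffs[0]
   << "M2" << cameraMatrix[1] << "D2" << distCoeffs[1];
fs.release();
fs.open("extrinsics.yml", FileStorage::WRITE);
fs << "R" << R << "T" << T;
fs.release();
// ...and load them back in a later run.
FileStorage fsi("intrinsics.yml", FileStorage::READ);
fsi["M1"] >> cameraMatrix[0];
fsi["D1"] >> distCoeffs[0];
fsi["M2"] >> cameraMatrix[1];
fsi["D2"] >> distCoeffs[1];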
Another question: are there any special requirements for the variable initializations, or is this enough?
Mat cameraMatrix[2] = { Mat::eye(3, 3, CV_64F), Mat::eye(3, 3, CV_64F) };
Mat distCoeffs[2], R, T, E, F, R1, R2, P1, P2, Q;
Source: https://stackoverflow.com/questions/39852273/opencv-stereorectify-distorts-image