I am trying to calibrate a camera with a fisheye lens. I therefore used the fisheye lens module, but I keep getting strange results no matter which distortion parameters I fix. This is the input image I use: https://i.imgur.com/apBuAwF.png
where the red circles indicate the corners I use to calibrate my camera.
This is the best output I could get: https://imgur.com/a/XeXk5
I currently don't know the camera sensor dimensions off-hand, but based on the focal length in pixels computed in my intrinsic matrix, I deduce that my sensor size is approximately 3.3mm (assuming my physical focal length is 1.8mm), which seems realistic to me. Yet, when undistorting my input image, I get nonsense. Could someone tell me what I may be doing incorrectly?
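For reference, the back-of-the-envelope calculation goes like this (just a sketch; the focal length, fx, and image width below are placeholders, not my actual numbers, and since K was estimated on the half-size image, fx and the width must refer to the same resolution):

#include <iostream>

int main()
{
    // All values are placeholders; plug in your own.
    double f_mm = 1.8;        // assumed physical focal length in mm
    double fx_px = 500.0;     // focal length in pixels, taken from K
    double width_px = 1280.0; // image width in pixels, at the same resolution as K

    double pixel_pitch_mm = f_mm / fx_px;               // sensor mm per image pixel
    double sensor_width_mm = pixel_pitch_mm * width_px; // approximate sensor width
    std::cout << "sensor width ~ " << sensor_width_mm << " mm" << std::endl;
    return 0;
}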
The matrices and RMS error output by the calibration:
K:[263.7291703200009, 0, 395.1618975493187;
0, 144.3800397321767, 188.9308218101271;
0, 0, 1]
D:[0, 0, 0, 0]
rms: 9.27628
my code:
#include <opencv2/opencv.hpp>
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/ccalib/omnidir.hpp"

using namespace std;
using namespace cv;

vector<vector<Point2d> > points2D;
vector<vector<Point3d> > objectPoints;
Mat src;

// so that I don't have to select them manually every time
void initializePoints2D()
{
    points2D[0].push_back(Point2d(234, 128));
    points2D[0].push_back(Point2d(300, 124));
    points2D[0].push_back(Point2d(381, 126));
    points2D[0].push_back(Point2d(460, 127));
    points2D[0].push_back(Point2d(529, 137));
    points2D[0].push_back(Point2d(207, 147));
    points2D[0].push_back(Point2d(280, 147));
    points2D[0].push_back(Point2d(379, 146));
    points2D[0].push_back(Point2d(478, 153));
    points2D[0].push_back(Point2d(551, 165));
    points2D[0].push_back(Point2d(175, 180));
    points2D[0].push_back(Point2d(254, 182));
    points2D[0].push_back(Point2d(377, 185));
    points2D[0].push_back(Point2d(502, 191));
    points2D[0].push_back(Point2d(586, 191));
    points2D[0].push_back(Point2d(136, 223));
    points2D[0].push_back(Point2d(216, 239));
    points2D[0].push_back(Point2d(373, 253));
    points2D[0].push_back(Point2d(534, 248));
    points2D[0].push_back(Point2d(624, 239));
    points2D[0].push_back(Point2d(97, 281));
    points2D[0].push_back(Point2d(175, 322));
    points2D[0].push_back(Point2d(370, 371));
    points2D[0].push_back(Point2d(578, 339));
    points2D[0].push_back(Point2d(662, 298));

    // draw the selected corners for visual verification
    for (int j = 0; j < 25; j++)
    {
        circle(src, points2D[0].at(j), 5, Scalar(0, 0, 255), 1, 8, 0);
    }
    imshow("src with circles", src);
    waitKey(0);
}

int main(int argc, char** argv)
{
    Mat srcSaved;
    src = imread("images/frontCar.png");
    resize(src, src, Size(), 0.5, 0.5);
    src.copyTo(srcSaved);

    vector<Point3d> objectPointsRow;
    vector<Point2d> points2DRow;
    objectPoints.push_back(objectPointsRow);
    points2D.push_back(points2DRow);

    // 5x5 grid of coplanar object points, 5 units apart
    for (int i = 0; i < 5; i++)
    {
        for (int j = 0; j < 5; j++)
        {
            objectPoints[0].push_back(Point3d(5 * j, 5 * i, 1));
        }
    }
    initializePoints2D();

    cv::Matx33d K;
    cv::Vec4d D;
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;

    int flag = 0;
    flag |= cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC;
    flag |= cv::fisheye::CALIB_CHECK_COND;
    flag |= cv::fisheye::CALIB_FIX_SKEW;
    flag |= cv::fisheye::CALIB_FIX_K1;
    flag |= cv::fisheye::CALIB_FIX_K2;
    flag |= cv::fisheye::CALIB_FIX_K3;
    flag |= cv::fisheye::CALIB_FIX_K4;

    double rms = cv::fisheye::calibrate(
        objectPoints, points2D, src.size(),
        K, D, rvec, tvec, flag, cv::TermCriteria(3, 20, 1e-6)
    );

    Mat output;
    cerr << "K:" << K << endl;
    cerr << "D:" << D << endl;
    cv::fisheye::undistortImage(srcSaved, output, K, D);
    cerr << "rms: " << rms << endl;
    imshow("output", output);
    waitKey(0);
    cerr << "image .size: " << srcSaved.size() << endl;
}
If anybody has an idea, feel free to share some code, either in Python or in C++. Whatever floats your boat.
EDIT:
As you may have noticed, I don't use a black-and-white checkerboard for the calibration, but corners of the tiles that make up my carpet. At the end of the day the goal, I think, is to get corner coordinates that sample the distortion radii. The carpet is to some extent the same as a checkerboard; the only difference, once again I think, is that those corners on the carpet have fewer high-frequency edges than the corners on a black-and-white checkerboard.
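For what it's worth, here is how one could detect such corners automatically instead of clicking them by hand (just a sketch using cv::goodFeaturesToTrack; the corner count, quality level, and minimum distance are guesses that would need tuning for the carpet):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("images/frontCar.png", IMREAD_GRAYSCALE);

    // Detect up to 25 strong corners at least 30 px apart; the 0.01
    // quality threshold is a guess and may need tuning on a fuzzy carpet.
    vector<Point2f> corners;
    goodFeaturesToTrack(img, corners, 25, 0.01, 30);

    Mat vis;
    cvtColor(img, vis, COLOR_GRAY2BGR);
    for (const Point2f& c : corners)
        circle(vis, c, 5, Scalar(0, 0, 255), 1, 8, 0);
    imshow("detected corners", vis);
    waitKey(0);
    return 0;
}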
I know the number of pictures is very limited, i.e. only one. I expect the image to be undistorted only to some extent rather than perfectly, but in this case the output looks like total nonsense.
I ended up using this image of a chessboard: https://imgur.com/a/WlLBR provided by this website: https://sites.google.com/site/scarabotix/ocamcalib-toolbox/ocamcalib-toolbox-download-page, but the results are still very poor: diagonal lines, like in the other output image I posted.
Thanks
Your first problem is that you are only using one image. Even if you had an ideal pinhole camera with no distortion, you would not be able to estimate the intrinsics from a single image of co-planar points: it simply does not give you enough constraints to solve for the intrinsics.
You need at least two images at different 3D orientations, or a 3D calibration rig, where the points are not co-planar. Of course, in practice you need at least 20 images for accurate calibration.
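To make that concrete, a minimal multi-view setup could look like the sketch below (the 9x6 pattern size, 25 mm square size, and view%d.png file names are assumptions, not your data):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    const Size patternSize(9, 6);  // inner corners of the chessboard (assumed)
    const float squareSize = 25.f; // square side in mm (assumed)

    // One copy of the board's 3D corner grid; z = 0 because the target is planar.
    vector<Point3f> boardCorners;
    for (int i = 0; i < patternSize.height; i++)
        for (int j = 0; j < patternSize.width; j++)
            boardCorners.push_back(Point3f(j * squareSize, i * squareSize, 0.f));

    vector<vector<Point3f> > objectPoints;
    vector<vector<Point2f> > imagePoints;
    Size imageSize;

    for (int n = 0; n < 20; n++) // at least ~20 views in practice
    {
        Mat img = imread(format("view%d.png", n), IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        vector<Point2f> corners;
        if (!findChessboardCorners(img, patternSize, corners)) continue;
        cornerSubPix(img, corners, Size(11, 11), Size(-1, -1),
                     TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.01));

        imagePoints.push_back(corners);
        objectPoints.push_back(boardCorners);
    }

    Matx33d K;
    Vec4d D;
    vector<Vec3d> rvecs, tvecs;
    int flags = fisheye::CALIB_RECOMPUTE_EXTRINSIC | fisheye::CALIB_FIX_SKEW;
    double rms = fisheye::calibrate(objectPoints, imagePoints, imageSize,
                                    K, D, rvecs, tvecs, flags,
                                    TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 100, 1e-6));
    cout << "rms: " << rms << endl << "K: " << K << endl << "D: " << D << endl;
    return 0;
}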
Your second problem is that you are using a carpet in place of a checkerboard. You need to be able to detect the points in the image with sub-pixel accuracy; small localization errors result in large errors in the estimated camera parameters. I seriously doubt that you can detect the corners of your carpet's squares with any reasonable accuracy. In fact, you cannot even measure the actual point locations on the carpet very accurately, because it is fuzzy.
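With a printed checkerboard, by contrast, sub-pixel localization is straightforward. Here is a hedged sketch using cv::cornerSubPix (the hand-picked starting points and the 11x11 search window are assumptions):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat gray = imread("images/frontCar.png", IMREAD_GRAYSCALE);

    // Rough integer corner guesses (e.g. clicked by hand or from a detector).
    vector<Point2f> corners = { Point2f(234, 128), Point2f(300, 124) };

    // Iteratively refine each corner within an 11x11 search window; this
    // converges well only on sharp, high-contrast corners, which is exactly
    // what a fuzzy carpet lacks.
    cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
                 TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.01));

    for (const Point2f& c : corners)
        cout << c << endl;
    return 0;
}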
Best of luck!
Source: https://stackoverflow.com/questions/46225943/how-to-correctly-calibrate-my-camera-with-a-wide-angle-lens-using-opencv