I have the cameraMatrix and the distCoeff needed to undistort an image or a vector of points. Now I'd like to distort them back. Is it possible?
If you multiply all the distortion coefficients by -1, you can then pass them to undistort or undistortPoints; this effectively applies the inverse distortion, which will bring the distortion back.
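A minimal sketch of that trick in Python, with made-up calibration values for illustration (passing P keeps the results in pixel coordinates); note that negating the coefficients only approximates the true inverse:

import numpy as np
import cv2

cameraMatrix = np.array([[800., 0., 320.],   # hypothetical intrinsics
                         [0., 800., 240.],
                         [0., 0., 1.]])
distCoeffs = np.array([0.1, -0.05, 0.001, 0.001, 0.0])  # hypothetical k1, k2, p1, p2, k3

pts = np.array([[[100., 120.]], [[400., 300.]]])  # pixel coordinates, shape (N, 1, 2)

# Undistort as usual; P=cameraMatrix keeps the output in pixel coordinates
undistorted = cv2.undistortPoints(pts, cameraMatrix, distCoeffs, P=cameraMatrix)

# The trick: negated coefficients approximately re-apply the distortion
redistorted = cv2.undistortPoints(undistorted, cameraMatrix, -distCoeffs, P=cameraMatrix)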
There are a few things I found when I tried to redistort points using the tips from this topic:
xCorrected = x * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2); // Multiply r2 after k3 one more time in yCorrected too
// To relative coordinates <- this is the step you are missing
This is wrong, as the code in this question already uses relative coordinates! It is a trick of the OpenCV undistortPoints function: it takes a new intrinsic matrix as its 6th argument. If that argument is None, then the function returns points in relative coordinates. And this is why the original code in the question has this step:
//Step 2 : ideal coordinates => actual coordinates
xCorrected = xCorrected * fx + ux;
yCorrected = yCorrected * fy + uy;
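Putting the two steps together, here is a small Python sketch of the redistortion being discussed (the tangential terms follow the OpenCV docs; camera_matrix and dist are assumed to come from your own calibration):

import numpy as np

def redistort(points_rel, camera_matrix, dist):
    # points_rel: iterable of (x, y) already in relative (normalized) coordinates
    k1, k2, p1, p2, k3 = dist
    fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
    ux, uy = camera_matrix[0, 2], camera_matrix[1, 2]
    out = []
    for x, y in points_rel:
        r2 = x * x + y * y
        radial = 1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
        xCorrected = x * radial + 2. * p1 * x * y + p2 * (r2 + 2. * x * x)
        yCorrected = y * radial + p1 * (r2 + 2. * y * y) + 2. * p2 * x * y
        # Step 2: ideal coordinates => actual (pixel) coordinates
        out.append((xCorrected * fx + ux, yCorrected * fy + uy))
    return out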
When I started to study this question, I had the same opinion that these equations undistort points, not the opposite. Recently I found out why: the OpenCV tutorial and the documentation use different names. The tutorial uses the variables 'xCorrected' and 'yCorrected' for these equations, while the documentation calls the same things 'xDistorted' and 'yDistorted'.
So let me resolve the confusion: the distortion operation can be represented as closed-form equations in the various distortion models, but undistortion is only possible through a numerical iteration algorithm. There is no analytical solution that represents undistortion as equations (because of the 6th-order term in the radial part and the nonlinearity).
There is no analytical solution to this problem: once you distort the coordinates, there is no way to go back, at least not analytically, with this specific model. It is in the nature of the radial distortion model; the way it is defined allows you to distort in a simple analytical fashion, but not vice versa. In order to do so, one has to solve a 7th-degree polynomial, for which it is proven that there is no analytical solution.
However, the radial camera model is not special or sacred in any way; it is just a simple rule that stretches pixels outwards from or inwards towards the optical center, depending on the lens you took your picture with. The closer a pixel is to the optical center, the less distortion it receives. There is a multitude of other ways to define a radial distortion model that could not only yield a similar quality of distortion, but also provide a simple way to define the inverse. Going this way, however, means that you need to find the optimal parameters of such a model yourself.
For instance, in my specific case I found that a simple sigmoid function (offset and scaled) approximates my existing radial model parameters with an integrated MSE of 1E-06 or less, even though the comparison between the models seems pointless: I don't think the native radial model yields better values, and it must not be treated as the reference. Physical lens geometry may vary in ways that neither model can represent; to approximate lens geometry better, a mesh-like approach should be used. Still, I am impressed by the approximated model, because it uses only one free parameter and provides a notably accurate result, which makes me wonder which model is actually better for the job.
Here's the plot of the original radial model (red) and its sigmoid approximation (green), along with their derivatives (blue lines):
So the distortion / undistortion functions in my case looked like this:
distort = (r, alpha) -> 2/(1 + exp(-alpha*r)) - 1
undistort = (d, alpha) -> ln((1 + d)/(1 - d))/alpha
(Please note that the distortion is performed in polar coordinates around the optical center and affects only the distance from the optical center, i.e. not the angle itself; r is the distance from the optical center, and alpha is a free parameter that needs to be estimated.)
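As a sanity check, here is a small Python sketch of this distort/undistort pair (the alpha value is a placeholder; in practice it would be estimated against your calibration); the round trip recovers the input exactly because the inverse is closed-form:

import numpy as np

alpha = 3.0  # free parameter; placeholder value for illustration

def distort(r, alpha):
    # Offset and scaled sigmoid: maps a radius r >= 0 into [0, 1)
    return 2. / (1. + np.exp(-alpha * r)) - 1.

def undistort(d, alpha):
    # Closed-form inverse of the sigmoid model above
    return np.log((1. + d) / (1. - d)) / alpha

r = np.linspace(0., 1., 11)
assert np.allclose(undistort(distort(r, alpha), alpha), r)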
Here's how the distortion looks compared to the native radial distortion (green is the approximation, red is the native radial distortion):
And here's how the inverse mapping of pixels looks if we take a regular pixel grid and try to undistort it:
The OCV camera model (see http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) describes how a 3D point is first mapped to an imaginary ideal pinhole camera coordinate, and then how that coordinate is "distorted" so that it models the image of the actual real-world camera.
Using the OpenCV distortion coefficients (= Brown distortion coefficients), the following operations are simple to calculate:
cv::undistort(....)
or alternatively a combination of cv::initUndistortRectifyMap(....) and cv::remap(....).
However, the following operation is computationally much more complex:
cv::undistortPoints(....)
This may sound counterintuitive. A more detailed explanation:
For a given pixel coordinate in the distortion-free image, it is easy to calculate the corresponding coordinate in the original image (i.e. to "distort" the coordinate):
x = (u - cx) / fx;  // u and v are coordinates in the distortion-free image
y = (v - cy) / fy;
rr = x*x + y*y;
distortion = 1 + rr * (k1 + rr * (k2 + rr * k3));  // tangential parameters omitted for clarity
u_ = fx * distortion * x + cx;
v_ = fy * distortion * y + cy;  // u_ and v_ are coordinates in the original camera image
Doing it the other way round is much more difficult; basically, one would need to combine all the code lines above into one big vectorial equation and solve it for u and v. I think that for the general case, where all 5 distortion coefficients are used, it can only be done numerically, which is (without looking at the code) probably what cv::undistortPoints(....) does.
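For illustration, here is a small Python sketch of such a numerical inversion via fixed-point iteration, the classic approach for this model (tangential terms omitted as above; this is my own sketch, not OpenCV's actual implementation):

def undistort_point(u_, v_, fx, fy, cx, cy, k1, k2, k3, iterations=20):
    # u_, v_ are pixel coordinates in the original (distorted) camera image
    x0 = x = (u_ - cx) / fx
    y0 = y = (v_ - cy) / fy
    for _ in range(iterations):
        rr = x * x + y * y
        distortion = 1 + rr * (k1 + rr * (k2 + rr * k3))
        # Invert x_distorted = x * distortion(x, y) by repeatedly dividing
        # the observed normalized coordinates by the current estimate
        x = x0 / distortion
        y = y0 / distortion
    return fx * x + cx, fy * y + cy  # pixel coordinates in the distortion-free image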
However, using the distortion coefficients, we can calculate an undistortion map (cv::initUndistortRectifyMap(....)) which maps from the distortion-free image coordinates to the original camera image coordinates. Each entry in the undistortion map contains a (floating-point) pixel position in the original camera image. In other words, the undistortion map points from the distortion-free image into the original camera image. So the map is calculated by exactly the formula above.
The map can then be applied to get the new distortion-free image from the original (cv::remap(....)). cv::undistort() does this without the explicit calculation of the undistortion map.
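A minimal usage sketch in Python, assuming placeholder intrinsics, distortion coefficients, image size, and file name (the real values come from your own calibration):

import numpy as np
import cv2

cameraMatrix = np.array([[800., 0., 320.],  # hypothetical intrinsics
                         [0., 800., 240.],
                         [0., 0., 1.]])
distCoeffs = np.array([0.1, -0.05, 0.001, 0.001, 0.0])  # hypothetical coefficients
w, h = 640, 480

# Each map entry holds the source pixel position in the original (distorted)
# image for the corresponding pixel of the distortion-free output image
mapx, mapy = cv2.initUndistortRectifyMap(cameraMatrix, distCoeffs, None,
                                         cameraMatrix, (w, h), cv2.CV_32FC1)
img = cv2.imread('camera.png')  # placeholder path to the original camera image
img_undistorted = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)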
Another way is to use remap to project the rectified image to the distorted image:
img_distored = cv2.remap(img_rect, mapx, mapy, cv2.INTER_LINEAR)
mapx and mapy map each distorted pixel location to the corresponding rectified pixel location. They can be obtained with the following steps:
X, Y = np.meshgrid(range(w), range(h))
pnts_distorted = np.stack([X, Y], axis=-1).reshape(-1, 1, 2).astype(np.float32)
pnts_rectified = cv2.undistortPoints(pnts_distorted, cameraMatrix, distort, R=rotation, P=pose)
mapx = pnts_rectified[:, 0, 0].reshape(h, w)
mapy = pnts_rectified[:, 0, 1].reshape(h, w)
cameraMatrix, distort, rotation, and pose are the parameters returned by the OpenCV calibration and stereoRectify functions.
You can easily distort your points back using cv::projectPoints.
cv::Mat rVec = cv::Mat::zeros(3, 1, CV_64F); // Rotation vector (no rotation)
cv::Mat tVec = cv::Mat::zeros(3, 1, CV_64F); // Translation vector (no translation)
cv::projectPoints(points, rVec, tVec, cameraMatrix, distCoeffs, result);
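Note that projectPoints expects 3D points, so undistorted pixels first have to be lifted back to normalized coordinates on the z = 1 plane. A sketch of the round trip in Python, with made-up calibration values for illustration:

import numpy as np
import cv2

cameraMatrix = np.array([[800., 0., 320.],  # hypothetical intrinsics
                         [0., 800., 240.],
                         [0., 0., 1.]])
distCoeffs = np.array([0.1, -0.05, 0.001, 0.001, 0.0])  # hypothetical coefficients
pts = np.array([[[100., 120.]], [[400., 300.]]])  # undistorted pixel coordinates

# No distortion coefficients and no P: returns normalized (relative) coordinates
normalized = cv2.undistortPoints(pts, cameraMatrix, None)
objectPoints = cv2.convertPointsToHomogeneous(normalized)  # 3D points on the z = 1 plane

rVec = tVec = np.zeros(3)  # identity pose: the points are already in camera coordinates
redistorted, _ = cv2.projectPoints(objectPoints, rVec, tVec, cameraMatrix, distCoeffs)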
PS: in OpenCV 3 they added a function for distorting points.