Question
I'm receiving depth images from a ToF camera via MATLAB. The drivers delivered with the ToF camera compute x,y,z coordinates from the depth image using OpenCV functions, which are called from MATLAB via MEX files.
But later on I can't use those drivers anymore, nor any OpenCV functions, so I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I have already obtained the camera parameters, and the computation of the x,y,z coordinates for each pixel of the depth image is working. Until now I have been solving the implicit undistortion equations with Newton's method (which isn't really fast...). But I want to implement the undistortion the way the OpenCV function does it.
... and there is my problem: I don't really understand it, and I hope you can help me out here. How does it actually work? I tried to search through the forum, but haven't found any useful threads concerning this case.
Greetings!
Answer 1:
The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration (source: opencv.org):

    x' = X / Z,  y' = Y / Z
    r^2 = x'^2 + y'^2
    x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
    y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
    u = fx * x'' + cx
    v = fy * y'' + cy
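To make the projection model concrete, here is a minimal sketch in Python/NumPy (the rational terms k4 to k6 are left out for brevity, and all calibration values used with it would be invented for illustration):

```python
import numpy as np

def project_point(X, Y, Z, K, k1, k2, k3, p1, p2):
    """Project a 3D point [X; Y; Z] to a distorted 2D pixel [u; v]
    following the pinhole + distortion model described above."""
    x, y = X / Z, Y / Z                      # normalized coordinates x', y'
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)   # x''
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y   # y''
    u = K[0, 0] * xd + K[0, 2]               # u = fx * x'' + cx
    v = K[1, 1] * yd + K[1, 2]               # v = fy * y'' + cy
    return u, v
```

With all distortion coefficients set to zero this reduces to the plain pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy.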
In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. This is what is used behind the function undistortPoints, with tuned parameters making the optimization fast but a little inaccurate.
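A rough sketch of such an iterative inversion is the fixed-point scheme below (only k1, k2, p1, p2 are modeled here; this is an illustration of the idea, not OpenCV's exact code):

```python
def distort(x, y, k1, k2, p1, p2):
    """Forward distortion model on normalized coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

def undistort_point(xd, yd, k1, k2, p1, p2, iters=20):
    """Invert the model by fixed-point iteration: start at the distorted
    coordinates and repeatedly strip off the distortion estimated at the
    current guess. Converges quickly for moderate distortion."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y
```

Distorting a point and then undistorting it should recover the original normalized coordinates to high precision after a handful of iterations.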
However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick: in order to obtain a valid intensity for each pixel of your destination image, you transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens-distortion correction, this means that you actually do not have to invert the non-linear model, but just apply it.
Basically, the algorithm behind the function undistort is the following. For each pixel of the destination lens-corrected image:

- Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
- Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
- Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
- Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.
Note that if you are interested in undistorting the depth-map image, you should use nearest-neighbor interpolation, otherwise you will almost certainly interpolate depth values at object boundaries, resulting in unwanted artifacts.
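Putting the four steps together with nearest-neighbor sampling, a naive, non-vectorized sketch could look like this (only k1, k2, p1, p2 are modeled; a production version would precompute the map once and use cv2.remap):

```python
import numpy as np

def undistort_depth_nn(src, K, k1, k2, p1, p2):
    """Undistort an image by inverse mapping: for each destination pixel,
    apply the forward distortion model and sample the source image with
    nearest-neighbor interpolation (the right choice for depth maps)."""
    h, w = src.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    dst = np.zeros_like(src)
    for v_dst in range(h):
        for u_dst in range(w):
            # 1. pixel -> normalized coordinates using the inverse of K
            x = (u_dst - cx) / fx
            y = (v_dst - cy) / fy
            # 2. apply the lens-distortion model
            r2 = x * x + y * y
            radial = 1 + k1 * r2 + k2 * r2 * r2
            xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
            yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
            # 3. normalized -> distorted pixel coordinates using K
            u_src = fx * xd + cx
            v_src = fy * yd + cy
            # 4. nearest-neighbor lookup; skip pixels that map outside
            ui, vi = int(round(u_src)), int(round(v_src))
            if 0 <= ui < w and 0 <= vi < h:
                dst[v_dst, u_dst] = src[vi, ui]
    return dst
```

With zero distortion coefficients the mapping is the identity and the destination equals the source, which is an easy sanity check for an implementation like this.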
Source: https://stackoverflow.com/questions/21958521/understanding-of-opencv-undistortion