Understanding of OpenCV undistortion


Question


I'm receiving depth images from a ToF camera via MATLAB. The camera's supplied drivers, which compute x, y, z coordinates from the depth image, use OpenCV functions that are integrated into MATLAB via MEX files.

But later on I won't be able to use those drivers or OpenCV functions anymore, so I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I have already obtained the camera parameters, and the computation of the x, y, z coordinates for each pixel of the depth image is working. Until now I have been solving the implicit equations of the undistortion via Newton's method (which isn't really fast...). But I want to implement the undistortion the way the OpenCV functions do.

... and there is my problem: I don't really understand it, and I hope you can help me out here. How does it actually work? I've searched through the forum but haven't found any useful threads on this topic.

Greetings!


Answer 1:


The equations for the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration. With normalized coordinates x' = X/Z and y' = Y/Z, and r^2 = x'^2 + y'^2, the distorted normalized coordinates are:

x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'

and the final pixel coordinates are u = fx*x'' + cx, v = fy*y'' + cy.

(source: opencv.org)
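For reference, a direct Python translation of this forward model could look as follows (a sketch; the function name is my own, and the coefficient names follow the OpenCV documentation):

    def distort_normalized(x, y, k1, k2, p1, p2, k3=0.0, k4=0.0, k5=0.0, k6=0.0):
        """Apply the radial/tangential distortion model to normalized
        coordinates (x', y'), returning the distorted (x'', y'')."""
        r2 = x * x + y * y
        radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
                 (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
        x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
        y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
        return x_d, y_d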

In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. This is what is used behind the function undistortPoints, with tuned parameters that make the optimization fast but slightly inaccurate.
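To illustrate, such an inversion can be written as a short fixed-point iteration (a sketch in the same spirit as undistortPoints, not the actual OpenCV source; the fixed, small iteration count is what trades accuracy for speed):

    def undistort_normalized(x_d, y_d, k1, k2, p1, p2,
                             k3=0.0, k4=0.0, k5=0.0, k6=0.0, iters=5):
        """Estimate undistorted normalized coordinates from distorted
        ones by fixed-point iteration of the distortion model."""
        x, y = x_d, y_d  # initial guess: the distorted coordinates
        for _ in range(iters):
            r2 = x * x + y * y
            # inverse of the radial factor, evaluated at the current estimate
            icdist = (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3) / \
                     (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3)
            dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
            dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
            x = (x_d - dx) * icdist
            y = (y_d - dy) * icdist
        return x, y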

However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick: in order to obtain a valid intensity for each pixel of the destination image, you transform coordinates in the destination image into coordinates in the source image, and not the other way around as one might intuitively expect. For lens-distortion correction, this means that you do not have to invert the non-linear model at all; you just apply it.

Basically, the algorithm behind the function undistort is the following. For each pixel of the destination (lens-corrected) image, do:

  • Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
  • Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
  • Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
  • Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.

Note that if you are interested in undistorting a depth-map image, you should use nearest-neighbor interpolation; otherwise you will almost certainly interpolate depth values across object boundaries, resulting in unwanted artifacts.
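Putting the four steps together with nearest-neighbor sampling, a naive per-pixel implementation might look like this (a sketch assuming the distort_normalized helper from above and a 3x3 calibration matrix K; a real implementation would precompute the sampling map once and reuse it for every frame):

    import numpy as np

    def undistort_image_nn(src, K, dist_coeffs):
        """Lens-correct an image (e.g. a depth map) by inverse mapping
        with nearest-neighbor sampling.
        dist_coeffs is (k1, k2, p1, p2[, k3, k4, k5, k6])."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        h, w = src.shape[:2]
        dst = np.zeros_like(src)
        for v_dst in range(h):
            for u_dst in range(w):
                # 1. destination pixel -> normalized coordinates (inverse of K)
                x = (u_dst - cx) / fx
                y = (v_dst - cy) / fy
                # 2. apply the forward distortion model
                x_d, y_d = distort_normalized(x, y, *dist_coeffs)
                # 3. distorted normalized -> source pixel coordinates (K)
                u_src = int(round(fx * x_d + cx))
                v_src = int(round(fy * y_d + cy))
                # 4. nearest-neighbor lookup, guarding against out-of-bounds
                if 0 <= u_src < w and 0 <= v_src < h:
                    dst[v_dst, u_dst] = src[v_src, u_src]
        return dst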



Source: https://stackoverflow.com/questions/21958521/understanding-of-opencv-undistortion
