Question
From my understanding, undistortPoints takes a set of points on a distorted image, and calculates where their coordinates would be on an undistorted version of the same image. Likewise, projectPoints maps a set of object coordinates to their corresponding image coordinates.
However, I am unsure whether projectPoints maps the object coordinates to image points on the distorted image (i.e. the original image) or on one that has been undistorted (straight lines preserved)?
Furthermore, the OpenCV documentation for undistortPoints states that 'the function performs a reverse transformation to projectPoints()'. Could you please explain how this is so?
Answer 1:
Quote from the 3.2 documentation for projectPoints():
Projects 3D points to an image plane.
The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters.
You have the parameter distCoeffs:
If the vector is empty, the zero distortion coefficients are assumed.
With no distortion, the equation is:

s * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T

with K the intrinsic matrix and [R | t] the extrinsic matrix, that is, the transformation that maps a point from the object (world) frame to the camera frame.
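The projection equation above can be sketched in numpy; the intrinsics, pose, and world point below are made-up example values, and this mirrors what cv2.projectPoints computes when distCoeffs is empty:

```python
import numpy as np

# Hypothetical intrinsics and pose (example values, not from the question).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # intrinsic matrix
R = np.eye(3)                          # rotation: world -> camera
t = np.array([0.0, 0.0, 5.0])          # translation: world -> camera

X_world = np.array([0.2, -0.1, 1.0])   # a 3D point in the world frame

# [R | t]: transform the point into the camera frame, then apply K.
X_cam = R @ X_world + t
uvw = K @ X_cam                        # homogeneous image coordinates
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]

print(u, v)  # pixel coordinates of the projected point
```

Dividing by the homogeneous coordinate uvw[2] is the same depth division described below for the normalized camera frame.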
For undistortPoints(), you have the parameter R:
Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by cv::stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.
The reverse transformation is the operation where, for a 2D image point ([u, v]), you compute the corresponding 3D point on the normalized camera plane ([x, y, z=1]) using the intrinsic parameters.
With the extrinsic matrix, you can get the point in the camera frame:

[X_c, Y_c, Z_c]^T = R * [X, Y, Z]^T + t

The normalized camera frame is obtained by dividing by the depth:

[x, y, 1]^T = [X_c / Z_c, Y_c / Z_c, 1]^T

Assuming no distortion, the image point is:

u = f_x * x + c_x
v = f_y * y + c_y

And the "reverse transformation" assuming no distortion:

x = (u - c_x) / f_x
y = (v - c_y) / f_y
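A tiny round trip under the no-distortion assumption (the focal lengths and principal point below are made-up example values) shows how the reverse transformation undoes the intrinsic step of the projection:

```python
# Hypothetical intrinsics (example values only).
fx, fy = 800.0, 800.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point

# A point on the normalized camera plane (z = 1).
x, y = 0.1, -0.05

# Forward (projectPoints without distortion): normalized -> pixel.
u = fx * x + cx
v = fy * y + cy

# Reverse (undistortPoints without distortion): pixel -> normalized.
x_back = (u - cx) / fx
y_back = (v - cy) / fy

print(x_back, y_back)  # recovers the original (x, y)
```

With distortion coefficients present, cv2.undistortPoints additionally inverts the distortion model (iteratively), but the intrinsic part is exactly this.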
Source: https://stackoverflow.com/questions/42361959/difference-between-undistortpoints-and-projectpoints-in-opencv