camera-calibration

OpenCV: get perspective matrix from translation & rotation

和自甴很熟 submitted on 2019-11-28 18:55:33
I'm trying to verify my camera calibration, so I'd like to rectify the calibration images. I expect that this will involve using a call to warpPerspective, but I do not see an obvious function that takes the camera matrix and the rotation and translation vectors to generate the perspective matrix for this call. Essentially I want to do the process described here (see especially the images towards the end), but starting with a known camera model and pose. Is there a straightforward function call that takes the camera intrinsic and extrinsic parameters and computes the perspective matrix for use…
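A minimal sketch of the usual approach (not an answer quoted from the thread): for a planar target in the world plane z = 0, the plane-to-image homography is H = K [r1 r2 t], and warping with its inverse gives a fronto-parallel view. The helper name, the 1000 px-per-world-unit scale and the output size below are arbitrary choices of mine.

    #include <opencv2/opencv.hpp>

    // Rectify an image of a planar target (world plane z = 0) given intrinsics K and
    // the pose (rvec, tvec) returned by calibrateCamera / solvePnP.
    cv::Mat rectifyPlane(const cv::Mat& image, const cv::Mat& K,
                         const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);                   // 3x1 rotation vector -> 3x3 matrix

        cv::Mat H(3, 3, CV_64F);
        R.col(0).copyTo(H.col(0));                // r1
        R.col(1).copyTo(H.col(1));                // r2
        tvec.copyTo(H.col(2));                    // t
        H = K * H;                                // plane coordinates -> pixels

        // Scale plane coordinates (world units) to output pixels; 1000 is arbitrary,
        // and an extra translation may be needed to bring the board into the output.
        cv::Mat S = (cv::Mat_<double>(3, 3) << 1000, 0, 0,  0, 1000, 0,  0, 0, 1);
        cv::Mat warp = S * H.inv();               // source pixels -> rectified view

        cv::Mat rectified;
        cv::warpPerspective(image, rectified, warp, image.size());
        return rectified;
    }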

How to verify that the camera calibration is correct? (or how to estimate the error of reprojection)

[亡魂溺海] submitted on 2019-11-28 18:29:19
The quality of calibration is measured by the reprojection error (is there an alternative?), which requires knowledge of the world coordinates of some 3D point(s). Is there a simple way to produce such known points? Is there a way to verify the calibration in some other way (for example, Zhang's calibration method only requires that the calibration object be planar, and the geometry of the system need not be known)? You can verify the accuracy of the estimated nonlinear lens distortion parameters independently of pose. Capture images of straight edges (e.g. a plumb line, or a laser stripe on a…
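For the reprojection-error part, here is a minimal sketch (variable names and the per-view helper are illustrative; it assumes the values returned by calibrateCamera for one view):

    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <cmath>

    // RMS reprojection error for one calibration view: project the known 3D board points
    // with the estimated pose and compare against the detected 2D corners.
    double reprojectionErrorRMS(const std::vector<cv::Point3f>& objectPoints,
                                const std::vector<cv::Point2f>& imagePoints,
                                const cv::Mat& K, const cv::Mat& distCoeffs,
                                const cv::Mat& rvec, const cv::Mat& tvec)
    {
        std::vector<cv::Point2f> projected;
        cv::projectPoints(objectPoints, rvec, tvec, K, distCoeffs, projected);

        double sse = 0.0;                          // sum of squared pixel errors
        for (size_t i = 0; i < projected.size(); ++i) {
            double dx = imagePoints[i].x - projected[i].x;
            double dy = imagePoints[i].y - projected[i].y;
            sse += dx * dx + dy * dy;
        }
        return std::sqrt(sse / projected.size());
    }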

To calculate world coordinates from screen coordinates with OpenCV

无人久伴 submitted on 2019-11-28 17:15:22
Question: I have calculated the intrinsic and extrinsic parameters of the camera with OpenCV. Now I want to calculate world coordinates (x, y, z) from screen coordinates (u, v). How do I do this? N.B. as I use the Kinect, I already know the z coordinate. Any help is much appreciated. Thanks! Answer 1: First, to understand how to calculate it, it would help if you read up on the pinhole camera model and simple perspective projection. For a quick glimpse, check this. I'll try to update with more.
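A minimal sketch of the back-projection, assuming the depth z is expressed in the camera frame and the extrinsics map world points into the camera frame (X_cam = R·X_world + t); the helper name is mine and lens distortion is not handled:

    #include <opencv2/opencv.hpp>

    // Back-project a pixel (u, v) with known camera-frame depth z (e.g. from the Kinect)
    // to world coordinates, given intrinsics K and the pose (rvec, tvec).
    cv::Point3d pixelToWorld(double u, double v, double z,
                             const cv::Mat& K, const cv::Mat& rvec, const cv::Mat& tvec)
    {
        const double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
        const double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);

        // Pinhole model: u = fx*x/z + cx, v = fy*y/z + cy  =>  recover the camera-frame point.
        // Distortion is ignored; undistort the pixel first (e.g. cv::undistortPoints) if needed.
        cv::Mat X_cam = (cv::Mat_<double>(3, 1) << (u - cx) * z / fx,
                                                   (v - cy) * z / fy,
                                                   z);

        cv::Mat R;
        cv::Rodrigues(rvec, R);
        cv::Mat X_world = R.t() * (X_cam - tvec);  // invert the rigid world->camera transform
        return cv::Point3d(X_world.at<double>(0), X_world.at<double>(1), X_world.at<double>(2));
    }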

How to correctly calibrate my camera with a wide angle lens using openCV?

半腔热情 submitted on 2019-11-28 14:08:44
I am trying to calibrate a camera with a fisheye lens. I therefore used the fisheye lens module, but I keep getting strange results no matter what distortion parameters I fix. This is the input image I use: https://i.imgur.com/apBuAwF.png where the red circles indicate the corners I use to calibrate my camera. This is the best output I could get: https://imgur.com/a/XeXk5 I currently don't know by heart what the camera sensor dimensions are, but based on the focal length in pixels that is being calculated in my intrinsic matrix, I deduce my sensor size is approximately 3.3mm (assuming my…
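For reference, a minimal sketch of the fisheye::calibrate call with the flags commonly used with this module (the helper name and flag choice are mine, the corner lists are assumed to be already collected, and some OpenCV versions are picky about the exact point types the fisheye module accepts):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Calibrate a fisheye lens from detected pattern corners.
    // objectPoints: one vector of 3D board corners per image (z = 0 plane),
    // imagePoints:  the matching detected 2D corners per image.
    void calibrateFisheye(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                          const std::vector<std::vector<cv::Point2f>>& imagePoints,
                          cv::Size imageSize)
    {
        cv::Mat K, D;                              // 3x3 intrinsics, 4x1 distortion (k1..k4)
        std::vector<cv::Mat> rvecs, tvecs;

        int flags = cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC
                  | cv::fisheye::CALIB_CHECK_COND
                  | cv::fisheye::CALIB_FIX_SKEW;

        double rms = cv::fisheye::calibrate(objectPoints, imagePoints, imageSize, K, D,
                                            rvecs, tvecs, flags,
                                            cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                                                             100, 1e-6));
        std::cout << "RMS reprojection error: " << rms << "\nK =\n" << K << "\nD = " << D << std::endl;
    }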

How to calculate extrinsic parameters of one camera relative to the second camera?

社会主义新天地 submitted on 2019-11-28 03:44:05
Question: I have calibrated 2 cameras with respect to some world coordinate system. I know the rotation matrix and translation vector for each of them relative to the world frame. From these matrices, how do I calculate the rotation matrix and translation vector of one camera with respect to the other? Any help or suggestion please. Thanks! Answer 1: First, convert each rotation matrix into a rotation vector. Now you have 2 3D vectors for each camera; call them A1, A2, B1, B2. You have all 4 of them with respect to some…
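The algebra behind this is short. With X_cam1 = R1·X_world + t1 and X_cam2 = R2·X_world + t2, the transform taking camera-1 coordinates into camera 2 is R_12 = R2·R1ᵀ and t_12 = t2 − R_12·t1. A minimal sketch (helper name mine):

    #include <opencv2/opencv.hpp>

    // Pose of camera 2 relative to camera 1, given each camera's pose w.r.t. the world.
    void relativePose(const cv::Mat& rvec1, const cv::Mat& tvec1,
                      const cv::Mat& rvec2, const cv::Mat& tvec2,
                      cv::Mat& R_12, cv::Mat& t_12)
    {
        cv::Mat R1, R2;
        cv::Rodrigues(rvec1, R1);
        cv::Rodrigues(rvec2, R2);
        R_12 = R2 * R1.t();                        // rotation camera1 -> camera2
        t_12 = tvec2 - R_12 * tvec1;               // translation camera1 -> camera2
    }

OpenCV's cv::composeRT can also be used to chain rvec/tvec pairs if you prefer to stay in rotation-vector form.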

OpenCV fisheye calibration cuts too much of the resulting image

社会主义新天地 submitted on 2019-11-27 23:52:36
Question: I am using OpenCV to calibrate images taken using cameras with fish-eye lenses. The functions I am using are: findChessboardCorners(...) to find the corners of the calibration pattern; cornerSubPix(...) to refine the found corners; fisheye::calibrate(...) to calibrate the camera matrix and the distortion coefficients; and fisheye::undistortImage(...) to undistort the images using the camera info obtained from calibration. While the resulting image does appear to look good (straight lines and…
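The cropping typically comes from undistorting with the original K. A minimal sketch of the usual fix (helper name mine): estimate a new camera matrix with a balance between 0 (crop to the valid region) and 1 (keep the full field of view, with black borders) and pass it to undistortImage.

    #include <opencv2/opencv.hpp>

    // Undistort a fisheye image without cropping away most of the field of view.
    cv::Mat undistortFisheye(const cv::Mat& distorted, const cv::Mat& K, const cv::Mat& D,
                             double balance = 1.0)
    {
        cv::Size size = distorted.size();

        cv::Mat newK;
        cv::fisheye::estimateNewCameraMatrixForUndistortRectify(K, D, size,
                                                                cv::Matx33d::eye(), newK, balance);

        cv::Mat undistorted;
        cv::fisheye::undistortImage(distorted, undistorted, K, D, newK, size);
        return undistorted;
    }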

Camera pose estimation (OpenCV PnP)

只愿长相守 submitted on 2019-11-27 19:54:07
I am trying to get a global pose estimate from an image of four fiducials with known global positions using my webcam. I have checked many stackexchange questions and a few papers and I cannot seem to get a correct solution. The position numbers I do get out are repeatable but in no way linearly proportional to camera movement. FYI I am using C++ OpenCV 2.1. At this link my coordinate systems and the test data used below are pictured.

% Input to solvePnP():
imagePoints =  [ 481, 831;     % [x, y] format
                 520, 504;
                1114, 828;
                1106, 507]
objectPoints = [0.11, 1.15, 0;   % [x, y, z] format
                0.11, 1.37, 0…
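Not the asker's code, but a sketch of the usual call against the modern C++ API (OpenCV 2.1's interface differs slightly), including the conversion that often causes confusion: tvec is not the camera position.

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Estimate camera pose from fiducials with known world positions.
    void estimatePose(const std::vector<cv::Point3f>& objectPoints,   // world units
                      const std::vector<cv::Point2f>& imagePoints,    // pixels, same order
                      const cv::Mat& K, const cv::Mat& distCoeffs)
    {
        cv::Mat rvec, tvec;
        cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);

        // solvePnP returns the world-to-camera transform: X_cam = R * X_world + t.
        // The camera centre in world coordinates is therefore C = -R^T * t.
        cv::Mat R;
        cv::Rodrigues(rvec, R);
        cv::Mat camWorld = -R.t() * tvec;
        std::cout << "camera position (world): " << camWorld.t() << std::endl;
    }

Mismatched point ordering between objectPoints and imagePoints, or mixed-up axis conventions, are the usual reasons the recovered position does not track camera motion.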

How to understand the KITTI camera calibration files?

拈花ヽ惹草 submitted on 2019-11-27 16:05:56
Question: I am working on the KITTI dataset. I have downloaded the object dataset (left and right) and the camera calibration matrices of the object set. I want to use the stereo information, but I don't know how to obtain the intrinsic matrix and the R|T matrix of the two cameras, and I don't understand what the calibration files mean. The contents of a calibration file:

P0: 7.070493000000e+02 0.000000000000e+00 6.040814000000e+02 0.000000000000e+00
    0.000000000000e+00 7.070493000000e+02 1.805066000000e+02 0…
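Each line of a KITTI calibration file is a key (P0..P3, R0_rect, Tr_velo_to_cam, ...) followed by the row-major entries of a matrix; each Px is a 3x4 projection matrix of a rectified camera. A rough parsing sketch (helper name mine; the KITTI devkit readme is the authoritative reference for what each key means):

    #include <opencv2/opencv.hpp>
    #include <fstream>
    #include <sstream>
    #include <string>

    // Read one "Px:" line of a KITTI calibration file into a 3x4 projection matrix.
    cv::Mat readKittiProjection(const std::string& calibFile, const std::string& key /* e.g. "P2:" */)
    {
        std::ifstream in(calibFile);
        std::string line;
        cv::Mat P = cv::Mat::zeros(3, 4, CV_64F);
        while (std::getline(in, line)) {
            std::istringstream ss(line);
            std::string tag;
            ss >> tag;
            if (tag == key) {
                for (int r = 0; r < 3; ++r)
                    for (int c = 0; c < 4; ++c)
                        ss >> P.at<double>(r, c);
                break;
            }
        }
        // Intrinsics: fx = P(0,0), fy = P(1,1), cx = P(0,2), cy = P(1,2).
        // For the rectified right cameras, P(0,3) is roughly -fx * baseline (metres).
        return P;
    }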

FindChessboardCorners cannot detect chessboard on very large images taken with a long focal length lens

喜夏-厌秋 submitted on 2019-11-27 13:18:35
I can use the findChessboardCorners function for images of less than 15 megapixels, such as 2k x 1.5k. However, when I use it on an image from a DSLR at a resolution of 3700x5300, it doesn't work. I tried using resize() to reduce the image size first, and then it works. Obviously there is something hard-coded, or a bug, in the OpenCV source code. Could you help me figure it out, or point me to a patch for this? I found that someone posted a similar issue in 2006, here, so it looks like the problem still remains. The code I used is like: found = findChessboardCorners(viewGray, boardSize, ptvec, CV_CALIB_CB…
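A common workaround, sketched below (helper name and the 0.25 scale are mine): detect on a downscaled copy, map the corners back to full-resolution coordinates, then refine them with cornerSubPix on the original image so no precision is lost.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Detect chessboard corners in a very large image via a downscaled copy,
    // then refine at full resolution.
    bool findCornersLargeImage(const cv::Mat& viewGray, cv::Size boardSize,
                               std::vector<cv::Point2f>& corners, double scale = 0.25)
    {
        cv::Mat small;
        cv::resize(viewGray, small, cv::Size(), scale, scale, cv::INTER_AREA);

        bool found = cv::findChessboardCorners(small, boardSize, corners,
                                               cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
        if (!found)
            return false;

        for (cv::Point2f& p : corners) {           // map back to full-resolution coordinates
            p.x /= static_cast<float>(scale);
            p.y /= static_cast<float>(scale);
        }
        cv::cornerSubPix(viewGray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        return true;
    }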

Augmented Reality OpenGL+OpenCV

感情迁移 submitted on 2019-11-27 12:52:02
Question: I am very new to OpenCV and have limited experience with OpenGL. I want to overlay a 3D object on a calibrated image of a checkerboard. Any tips or guidance? Answer 1: The basic idea is that you have 2 cameras: one is the physical one (the one from which you are retrieving the images with OpenCV) and one is the OpenGL one. You have to align those two matrices. To do that, you need to calibrate the physical camera. First, you need the distortion parameters (because every lens more or less has some…
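For the intrinsic half of that alignment, here is a rough sketch of how pinhole intrinsics are commonly mapped onto a column-major OpenGL projection matrix. The exact signs depend on your pixel-origin and y-axis conventions, so treat this as a starting point under those assumptions rather than the answerer's exact method:

    // Build a column-major 4x4 OpenGL projection matrix from pinhole intrinsics so the
    // virtual camera projects points like the calibrated physical one. Adjust the signs
    // of the third column if your image origin / y direction differ.
    void projectionFromIntrinsics(double fx, double fy, double cx, double cy,
                                  int width, int height, double znear, double zfar,
                                  float proj[16])
    {
        for (int i = 0; i < 16; ++i) proj[i] = 0.0f;
        proj[0]  = static_cast<float>(2.0 * fx / width);              // focal length, x
        proj[5]  = static_cast<float>(2.0 * fy / height);             // focal length, y
        proj[8]  = static_cast<float>(1.0 - 2.0 * cx / width);        // principal-point shift, x
        proj[9]  = static_cast<float>(2.0 * cy / height - 1.0);       // principal-point shift, y
        proj[10] = static_cast<float>(-(zfar + znear) / (zfar - znear));
        proj[11] = -1.0f;                                             // perspective divide
        proj[14] = static_cast<float>(-2.0 * zfar * znear / (zfar - znear));
    }

The extrinsics from solvePnP (rvec, tvec) then go into the modelview matrix, remembering that OpenCV's camera looks down +z with y pointing down while OpenGL's looks down -z with y pointing up, so an axis flip is needed between the two.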