camera-calibration

OpenCV StereoSGBM gives really bad disparity map

安稳与你 submitted on 2019-12-08 10:08:42
Question: this is my calibration code:

    void calibrate() {
        int numBoards = 10;
        int board_w = 6;
        int board_h = 9;
        Size board_sz = Size(board_w, board_h);
        int board_n = board_w * board_h;
        vector<vector<Point3f> > object_points;
        vector<vector<Point2f> > imagePoints1, imagePoints2;
        vector<Point2f> corners1, corners2;
        vector<Point3f> obj;
        for (int j = 0; j < board_n; j++) {
            obj.push_back(Point3f(j / board_w, j % board_w, 0.0f));
        }
        Mat img1, img2, gray1, gray2;
        VideoCapture cap1(0);
        VideoCapture cap2(1);
        int success =
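A bad disparity map from this kind of pipeline usually traces back to unrectified input or default matcher parameters rather than to SGBM itself. Below is a minimal sketch using the OpenCV 3+ factory API, not the asker's code; rectifiedL and rectifiedR are hypothetical placeholders for grayscale images already rectified via stereoRectify, initUndistortRectifyMap, and remap:

    int blockSize = 5;
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
        0,                            // minDisparity
        96,                           // numDisparities, must be divisible by 16
        blockSize,
        8 * blockSize * blockSize,    // P1: penalty for small disparity changes
        32 * blockSize * blockSize,   // P2: penalty for large disparity changes
        1,                            // disp12MaxDiff
        63,                           // preFilterCap
        10,                           // uniquenessRatio
        100,                          // speckleWindowSize
        32);                          // speckleRange
    cv::Mat disp16, disp;
    sgbm->compute(rectifiedL, rectifiedR, disp16);   // fixed-point, scaled by 16
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);      // true disparity values

Note also that compute() returns disparities scaled by 16, so displaying the raw output directly often looks like noise.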

Back projecting 3D world point to new view image plane

不问归期 submitted on 2019-12-08 09:03:06
Question: EDIT: What I have: camera intrinsics, extrinsics from calibration, a 2D image, and a depth map. What I need: a 2D virtual view image. I am trying to generate a novel view (the right view) for Depth Image Based Rendering. The reason for this is that only the left image and the depth map are available at the receiver, which has to reconstruct the right view (see image). I want to know whether these steps will give me the desired result or what I should be doing instead. First, by using the Camera Calibration Toolbox for MATLAB
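For reference, under pinhole assumptions the right-view position of a left-view pixel (u, v) with depth Z is Kr * (R * Z * Kl^-1 * [u, v, 1]^T + t). A hedged C++ sketch of that warping step, where Kl, Kr, R, and t are hypothetical CV_64F matrices from the calibration described above:

    // Back-project a left-view pixel with its depth, move it into the
    // right camera's frame, and project it again (all matrices CV_64F).
    cv::Point2f warpToRightView(const cv::Mat& Kl, const cv::Mat& Kr,
                                const cv::Mat& R, const cv::Mat& t,
                                float u, float v, double Z) {
        cv::Mat p = (cv::Mat_<double>(3, 1) << u, v, 1.0);
        cv::Mat X  = Kl.inv() * p * Z;   // 3D point in the left-camera frame
        cv::Mat Xr = R * X + t;          // same point in the right-camera frame
        cv::Mat pr = Kr * Xr;            // homogeneous right-view projection
        return cv::Point2f(pr.at<double>(0) / pr.at<double>(2),
                           pr.at<double>(1) / pr.at<double>(2));
    }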

Fisheye distortion rectification with lookup table

感情迁移 submitted on 2019-12-08 02:32:27
Question: I have a fisheye lens: I would like to undistort it. I apply the FOV model:

    rd = (1 / ω) * arctan(2 * ru * tan(ω / 2))    // Equation 13
    ru = tan(rd * ω) / (2 * tan(ω / 2))           // Equation 14

as found in equations (13) and (14) of the INRIA paper "Straight lines have to be straight" (https://hal.inria.fr/inria-00267247/document). The code implementation is the following:

    Point2f distortPoint(float w, float h, float cx, float cy, float omega, Point2f input) {
        // w = width of the image
        // h = height of the
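For reference, a complete implementation of Equation 13 would look something like the sketch below. Whether radii should be measured in pixels or in normalized coordinates depends on how ω was fitted; this sketch keeps pixel units for simplicity and is an illustration, not the asker's missing code:

    #include <cmath>
    #include <opencv2/core.hpp>

    cv::Point2f distortPointSketch(float cx, float cy, float omega,
                                   cv::Point2f input) {
        float dx = input.x - cx, dy = input.y - cy;
        float ru = std::sqrt(dx * dx + dy * dy);        // undistorted radius
        if (ru < 1e-6f || omega < 1e-6f) return input;  // nothing to do at center
        // Equation 13: rd = (1 / omega) * atan(2 * ru * tan(omega / 2))
        float rd = std::atan(2.0f * ru * std::tan(omega / 2.0f)) / omega;
        float s = rd / ru;                              // radial rescaling factor
        return cv::Point2f(cx + dx * s, cy + dy * s);
    }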

CV: Difference between MATLAB and OpenCV camera calibration techniques

依然范特西╮ submitted on 2019-12-07 12:22:45
Question: I calibrated a camera with a checkerboard pattern using OpenCV and MATLAB. I got mean re-projection errors of 0.489 in OpenCV and 0.187 in MATLAB. From the looks of it, MATLAB is more precise. But my adviser feels both MATLAB and OpenCV use the same Bouguet algorithm and should report the same error (or close). Is that so? Can someone explain the difference between the MATLAB and OpenCV camera calibration methods? Thanks! Answer 1: Your adviser is correct in that both MATLAB and OpenCV use

Bad results when undistorting points using OpenCV in Python

爷,独闯天下 submitted on 2019-12-07 08:42:54
Question: I'm having trouble undistorting points on an image taken with a calibrated camera using the Python bindings for OpenCV. The undistorted points have entirely different coordinates than the original points detected in the image. Here's the offending call: undistorted = cv2.undistortPoints(image_points, camera_matrix, distortion_coefficients), where image_points is a numpy array of detected chessboard corners returned by cv2.findChessboardCorners and reshaped to match the dimensional requirements
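This particular symptom is usually the documented default behavior rather than a bug: without the optional P argument, undistortPoints returns normalized coordinates (the camera matrix is divided out), not pixels. Passing the camera matrix again as P maps the result back into pixel coordinates; a sketch with the C++ API, and the Python binding accepts the same P= keyword:

    // image_points, cameraMatrix, distCoeffs as in the question.
    std::vector<cv::Point2f> undistorted;
    cv::undistortPoints(image_points, undistorted, cameraMatrix, distCoeffs,
                        cv::noArray(),   // R: no rectification transform
                        cameraMatrix);   // P: re-project into pixel coordinates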

Camera calibration (OpenCV 2.3) - how to use the distortion parameters?

…衆ロ難τιáo~ submitted on 2019-12-07 03:18:17
Question: I have a set of images of a rigid body with some attached markers. I defined a coordinate system with its origin at one of these markers, and I want to get the rotation and translation between this coordinate system and the one defined at the camera's origin. I tried POSIT for some time (following this) without ever getting acceptable results, until I realized that I had to calibrate the camera in the first place. Based on this, and using some images acquired with a calibration body, I got the camera
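The standard way to consume the distortion coefficients here is to undistort the image (or the detected marker points) before running a pinhole-only algorithm such as POSIT. A sketch, where image, cameraMatrix, and distCoeffs stand in for the calibration results described above:

    cv::Mat undistorted;
    cv::undistort(image, undistorted, cameraMatrix, distCoeffs);

    // Or precompute the mapping once and reuse it for every frame:
    cv::Mat map1, map2;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                cameraMatrix, image.size(), CV_32FC1,
                                map1, map2);
    cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);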

how to calculate field of view of the camera from camera intrinsic matrix?

天大地大妈咪最大 submitted on 2019-12-07 00:47:01
Question: I got the camera intrinsic matrix and distortion parameters using camera calibration. The unit of the focal length is pixels, I guess. Then, how can I calculate the field of view (along y)? Is this formula right? double fov_y = 2*atan(height/2/fy)*180/CV_PI; I'll use it for the parameters of gluPerspective(). Answer 1: OpenCV has a function that does this. Looking at the implementation (available on GitHub): given an image with dimensions w x h and a camera matrix, the equations for the field of view
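The formula in the question is right when the principal point sits at the image center; OpenCV's helper also accounts for an off-center principal point in current versions. A sketch using cv::calibrationMatrixValues, where width and height are placeholders for the calibration image size (aperture sizes may be passed as 0 when the physical sensor dimensions are unknown):

    double fovx, fovy, focalLength, aspectRatio;
    cv::Point2d principalPoint;
    cv::calibrationMatrixValues(cameraMatrix, cv::Size(width, height),
                                0.0, 0.0,   // aperture width/height in mm
                                fovx, fovy, focalLength,
                                principalPoint, aspectRatio);
    // By hand, assuming a centered principal point:
    // fov_y = 2 * atan(height / (2 * fy)) * 180 / CV_PI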

Difference between undistortPoints() and projectPoints() in OpenCV

霸气de小男生 submitted on 2019-12-06 15:58:46
From my understanding, undistortPoints takes a set of points on a distorted image and calculates where their coordinates would be on an undistorted version of the same image. Likewise, projectPoints maps a set of object coordinates to their corresponding image coordinates. However, I am unsure whether projectPoints maps the object coordinates to a set of image points on the distorted image (i.e. the original image) or one that has been undistorted (straight lines). Furthermore, the OpenCV documentation for undistortPoints states that 'the function performs a reverse transformation to projectPoints(
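As a concrete way to see the distinction: projectPoints applies the distortion model whenever distCoeffs is supplied, so its output lies on the distorted (original) image, and undistortPoints inverts exactly that distortion step. A round-trip sketch, with objectPoints, rvec, tvec, cameraMatrix, and distCoeffs assumed to come from a prior calibration:

    std::vector<cv::Point2f> distortedPx, idealPx, normalized;
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs,
                      distortedPx);                // pixels on the distorted image
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, cv::noArray(),
                      idealPx);                    // ideal (straight-line) pixels
    cv::undistortPoints(distortedPx, normalized,
                        cameraMatrix, distCoeffs); // normalized, undistorted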

Laser Projector Calibration in 3D Space

亡梦爱人 submitted on 2019-12-06 15:54:46
I am working on a solution for calibrating a laser projector in the real world. There are a few goals for this project (a sketch for step 4 follows this entry):

1. Take in a minimum of four points measured in the real world in 3D space that represent the projection surface.
2. Take in coordinates from the laser that are equivalent to the points received in step 1.
3. Determine whether the calibration file matches the real-world captured points and show the deviation between the coordinate spaces.
4. Using the data from the previous steps, take coordinates in 3D real-world space and translate them to laser coordinates.

Example: A rectangular
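If the projection surface is planar, step 4 reduces to a 2D-to-2D projective mapping. A hedged sketch under that planarity assumption, where surfacePts2d holds the measured points expressed in a 2D frame on the surface plane and laserPts holds the matching laser coordinates (both hypothetical names):

    // Fit the surface-to-laser mapping from the correspondences (steps 1-2).
    cv::Mat H = cv::findHomography(surfacePts2d, laserPts, cv::RANSAC);

    // Step 4: send a new point on the surface into laser space.
    std::vector<cv::Point2f> in = { newSurfacePoint }, out;
    cv::perspectiveTransform(in, out, H);

    // Step 3: the reprojection residuals of the calibration points under H
    // quantify the deviation between the two coordinate spaces.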