camera-calibration

Depth map from calibrated image and triangular mesh using OpenCV and Matlab

我是研究僧i, submitted on 2019-12-06 15:37:11
I want to extract a depth map from a calibrated image and triangular mesh using OpenCV called from Matlab 2014b (using the OpenCV bindings). I am a regular user of Matlab but am new to OpenCV. I have the following inputs:

im - an undistorted RGB image of the scene
or - camera position vector
R - rotation matrix describing camera pose
points - nx3 triangular mesh vertex list representing the scene
faces - mx3 triangular mesh face list
EFL - image effective focal length in pixels

I have written a native Matlab ray tracing engine to extract a depth map from these inputs, but this is quite slow
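A common faster alternative to per-pixel ray tracing is to rasterize the projected mesh with a z-buffer. A rough sketch in Python with OpenCV, assuming the intrinsic matrix K is built from EFL and the image centre, and that t = -R @ or converts the camera position to a translation vector (our notation, not from the question):

import numpy as np
import cv2

def mesh_depth_map(points, faces, R, t, K, w, h):
    # Transform vertices into the camera frame and project with the pinhole model.
    cam = (R @ points.T + t.reshape(3, 1)).T        # n x 3 camera-frame vertices
    z = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # n x 2 pixel coordinates

    depth = np.full((h, w), np.inf)
    for f in faces:                                 # z-buffer, one face at a time
        tri = np.round(uv[f]).astype(np.int32)
        zf = z[f].mean()                            # coarse: one depth per face
        mask = np.zeros((h, w), np.uint8)
        cv2.fillConvexPoly(mask, tri, 1)
        nearer = (mask == 1) & (zf < depth)         # keep only closer surfaces
        depth[nearer] = zf
    depth[np.isinf(depth)] = 0                      # background
    return depth

The per-face constant depth is a coarse approximation; barycentric interpolation of the three vertex depths gives smoother results, and a vectorized rasterizer would remove the Python loop. Faces behind the camera are not handled here.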

OpenCV calibrateCamera assertion failed

妖精的绣舞, submitted on 2019-12-06 14:27:44
Question: I am having essentially the same issue described in this post from last year, and am not getting anywhere with solving it. I am calling calibrateCamera and am getting the error "Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total())) in cv::collectCalibrationData". The line of code that triggers this error is: double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, s
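This assertion fires when objectPoints and imagePoints do not contain the same (non-zero) number of views. A minimal sketch of correctly shaped inputs for the Python binding, assuming a 9 x 6 chessboard; all_corners (a list of detected corner arrays, one per successful view) and image_size are hypothetical names:

import numpy as np
import cv2

pattern = (9, 6)                                  # inner corners per row and column
# One 3-D grid of board coordinates, reused for every view (z = 0 on the board).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

object_points, image_points = [], []
for corners in all_corners:       # only views where corner detection succeeded
    object_points.append(objp)    # the two lists must stay the same length
    image_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)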

How to project Velodyne point clouds onto an image? (KITTI Dataset)

只愿长相守, submitted on 2019-12-06 08:16:11
Here is my code to project Velodyne points into the images:

cam = 2; frame = 20;

% compute projection matrix velodyne->image plane
R_cam_to_rect = eye(4);
[P, Tr_velo_to_cam, R] = readCalibration('D:/Shared/training/calib/',frame,cam)
R_cam_to_rect(1:3,1:3) = R;
P_velo_to_img = P*R_cam_to_rect*Tr_velo_to_cam;

% load and display image
img = imread(sprintf('D:/Shared/training/image_2/%06d.png',frame));
fig = figure('Position',[20 100 size(img,2) size(img,1)]);
axes('Position',[0 0 1 1]);
imshow(img); hold on;

% load velodyne points
fid = fopen(sprintf('D:/Shared/training/velodyne/%06d.bin',frame
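For reference, the standard KITTI projection chain is x_img = P * R_cam_to_rect * Tr_velo_to_cam * x_velo with the LiDAR point in homogeneous coordinates. A minimal sketch in Python, assuming P is the 3 x 4 camera projection matrix and the other two matrices have been padded to 4 x 4 (names mirror the MATLAB code above):

import numpy as np

def project_velo_to_image(velo_xyz, P, R_cam_to_rect, Tr_velo_to_cam):
    # velo_xyz: n x 3 LiDAR points.
    pts = np.hstack([velo_xyz, np.ones((len(velo_xyz), 1))])      # homogeneous, n x 4
    proj = (P @ R_cam_to_rect @ Tr_velo_to_cam @ pts.T).T         # n x 3
    in_front = proj[:, 2] > 0                 # drop points behind the image plane
    proj = proj[in_front]
    uv = proj[:, :2] / proj[:, 2:3]           # perspective divide
    return uv, proj[:, 2]                     # pixel coordinates and depth

Points behind the camera must be removed before the perspective divide; forgetting this is a common cause of mirrored "ghost" projections with this dataset.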

Doubts about camera calibration

瘦欲@, submitted on 2019-12-06 00:31:58
Question: I am working on a machine vision project. A wide-angle lens with a high-resolution pinhole camera is being used. Working distance: the distance between the camera and the object. The resolution will be nearly 10 MP; the image size may be 3656 pixels wide and 2740 pixels high. The project requirements are as follows: the working distance must be nearly 5 metres, and the camera needs to be tilted at an angle of 13 degrees. To avoid lens distortion in the camera I do camera calibration using OpenCV.
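A minimal sketch of that undistortion step, assuming calibration has already produced a camera matrix K and distortion coefficients dist (hypothetical names) and an illustrative image path:

import cv2

img = cv2.imread('scene.png')                     # illustrative path
h, w = img.shape[:2]

# Refine the camera matrix for this frame size (alpha=1 keeps all source pixels),
# then remove the wide-angle lens distortion.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1)
undistorted = cv2.undistort(img, K, dist, None, new_K)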

CV: Difference between MATLAB and OpenCV camera calibration techniques

笑着哭i, submitted on 2019-12-05 21:31:53
I calibrated a camera with a checkerboard pattern using OpenCV and MATLAB. I got 0.489 and 0.187 for the mean re-projection errors in OpenCV and MATLAB respectively. From the looks of it, MATLAB is more precise, but my adviser feels both MATLAB and OpenCV use the same Bouguet algorithm and should report the same error (or close). Is that so? Can someone explain the difference between the MATLAB and OpenCV camera calibration methods? Thanks! Your adviser is correct in that both MATLAB and OpenCV use essentially the same calibration algorithm. However, MATLAB uses the Levenberg-Marquardt non-linear least squares
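One way to compare the two reports on equal footing is to recompute the error yourself from the calibration outputs. A minimal sketch, using the kinds of lists cv2.calibrateCamera returns (names are ours); note that OpenCV's calibrateCamera returns an RMS error while MATLAB's Camera Calibrator reports a mean error, so the two figures are not directly comparable in the first place:

import numpy as np
import cv2

def mean_reprojection_error(object_points, image_points, K, dist, rvecs, tvecs):
    total_err, total_pts = 0.0, 0
    for objp, imgp, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
        # Re-project the board points with the estimated pose and intrinsics.
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        err = np.linalg.norm(imgp.reshape(-1, 2) - proj.reshape(-1, 2), axis=1)
        total_err += err.sum()
        total_pts += len(err)
    return total_err / total_pts              # mean per-point pixel error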

Bad results when undistorting points using OpenCV in Python

北战南征, submitted on 2019-12-05 13:28:05
I'm having trouble undistorting points on an image taken with a calibrated camera using the Python bindings for OpenCV. The undistorted points have entirely different coordinates from the original points detected in the image. Here's the offending call: undistorted = cv2.undistortPoints(image_points, camera_matrix, distortion_coefficients) where image_points is a numpy array of detected chessboard corners returned by cv2.findChessboardCorners and reshaped to match the dimensional requirements of cv2.undistortPoints, and camera_matrix and distortion_coefficients were returned by cv2
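By default cv2.undistortPoints returns normalized coordinates (the intrinsics are divided out), which is why the output looks nothing like the input pixels. Passing the camera matrix as the optional P argument maps the result back into pixel coordinates. A minimal sketch, reusing the variable names from the question:

import cv2

# Without P, results are in normalized image coordinates (order of magnitude 1).
normalized = cv2.undistortPoints(image_points, camera_matrix, distortion_coefficients)

# With P=camera_matrix, the undistorted points are re-projected into pixels,
# directly comparable to the detected corners.
undistorted = cv2.undistortPoints(image_points, camera_matrix, distortion_coefficients,
                                  P=camera_matrix)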

Camera calibration, reverse projection of pixel to direction

末鹿安然, submitted on 2019-12-05 10:53:55
I am using OpenCV to estimate a webcam's intrinsic matrix from a series of chessboard images, as detailed in this tutorial, and to reverse-project a pixel to a direction (in terms of azimuth/elevation angles). The final goal is to let the user select a point on the image, estimate the direction of this point in relation to the center of the webcam, and use this as the DOA for a beam-forming algorithm. So once I have estimated the intrinsic matrix, I reverse-project the user-selected pixel (see code below) and display it as azimuth/elevation angles. result = [0, 0, 0] # reverse projected point, in
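A minimal sketch of the back-projection step, assuming an intrinsic matrix K from the calibration (lens distortion ignored for brevity):

import numpy as np

def pixel_to_angles(u, v, K):
    # Back-project the pixel through the pinhole model: ray = K^-1 [u, v, 1]^T.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    x, y, z = ray / np.linalg.norm(ray)        # unit direction in the camera frame
    azimuth = np.degrees(np.arctan2(x, z))     # left/right of the optical axis
    elevation = np.degrees(np.arctan2(-y, z))  # up/down (image y grows downward)
    return azimuth, elevation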

How to calculate the field of view of the camera from the camera intrinsic matrix?

笑着哭i, submitted on 2019-12-05 06:56:47
I got the camera intrinsic matrix and distortion parameters using camera calibration. The unit of the focal length is pixels, I guess. How can I calculate the field of view (along y)? Is this formula right? double fov_y = 2*atan(height/2/fy)*180/CV_PI; I'll use it for the parameters of gluPerspective(). OpenCV has a function that does this. Looking at the implementation (available on GitHub), given an image with dimensions w x h and a camera matrix, the equations for the field of view are:

fov_x = (atan(cx / fx) + atan((w - cx) / fx)) * 180 / pi
fov_y = (atan(cy / fy) + atan((h - cy) / fy)) * 180 / pi

Source: https://stackoverflow.com/questions/39992968/how-to-calculate-field-of-view-of-the-camera-from
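A minimal sketch computing both forms, with illustrative intrinsics and image size; the sensor aperture sizes passed to cv2.calibrationMatrixValues are also illustrative and are only used to convert outputs to physical units:

import math
import numpy as np
import cv2

fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0        # illustrative intrinsics
w, h = 640, 480                                     # image size in pixels
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

# Hand-rolled formula (assumes the principal point is at the image centre).
fov_y = 2 * math.atan(h / (2 * fy)) * 180 / math.pi

# OpenCV's version, which accounts for the actual principal point.
fovx, fovy, focal_mm, principal, aspect = cv2.calibrationMatrixValues(
    K, (w, h), 6.4, 4.8)                            # aperture in mm, illustrative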

OpenCV calibrateCamera() Assertion failed

空扰寡人, submitted on 2019-12-05 05:50:02
Question: I have been trying to calibrate my camera for quite a while using OpenCV's calibrateCamera() function, following the same procedure as described in the OpenCV sample program. I first load ten 9 x 6 chessboard images and then find the chessboard corners. If corners are found, their pixel locations are stored in vector<vector<Point2f>> ImagePoints. After doing this for all images, the runCalibrationAndSave part is executed. In runCalibrationAndSave, first the runCalibration part is
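The usual pitfall in this loop is pushing an entry for every image rather than only for images where detection succeeded, which leaves the point lists mismatched. A minimal sketch of the guarded corner-collection step, in Python for consistency with the sketches above (the file path is illustrative):

import glob
import cv2

pattern = (9, 6)
all_corners = []
for path in glob.glob('calib_images/*.png'):        # illustrative path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue                                    # skip; never append an empty entry
    # Refine to sub-pixel accuracy before storing.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    all_corners.append(corners)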

Image coordinate to world coordinate in OpenCV

狂风中的少年, submitted on 2019-12-05 05:38:22
I calibrated my mono camera using OpenCV, so I know the camera intrinsic matrix and the distortion coefficients [K1, K2, P1, P2, K3, K4, K5, K6]. Assuming the camera is placed at [x, y, z] with [Roll, Pitch, Yaw] rotations, how can I get each pixel in world coordinates when the camera is looking at the floor [z=0]? You say that you calibrated your camera, which gives you:

Intrinsic parameters
Extrinsic parameters (rotation, translation)
Distortion coefficients

First, to compensate for the distortion, you can use the undistort function and get an undistorted image. Now, what you are left with
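A minimal sketch of the ray-plane intersection that follows, assuming an already-undistorted pixel, intrinsics K, and a world-to-camera pose (R, t), so the camera centre is C = -R^T t (our notation):

import numpy as np

def pixel_to_floor(u, v, K, R, t):
    # Back-project the pixel to a ray direction in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam                 # rotate the ray into the world frame
    C = -R.T @ t                              # camera centre in world coordinates
    # Intersect the ray C + s * ray_world with the floor plane z = 0.
    s = -C[2] / ray_world[2]
    return C + s * ray_world                  # world point; its z component is 0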