camera-calibration

How can I transform an image using the matrices R and T (extrinsic parameter matrices) in OpenCV?

笑着哭i submitted on 2019-12-18 17:28:06

Question: I have a rotation-translation matrix [R T] (3x4). Is there a function in OpenCV that performs the rotation-translation described by [R T]?

Answer 1: A lot of solutions to this question make hidden assumptions. I will try to give you a quick summary of how I think about this problem (I have had to think about it a lot in the past). Warping between two images is a two-dimensional process accomplished by a 3x3 matrix called a homography. What you have is a 3x4 matrix, which defines a transform…
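The answer's point can be made concrete: a 3x4 pose [R|T] only collapses to a 3x3 homography when the scene is restricted to a plane. A minimal numpy sketch (intrinsics K, rotation, and translation are all made-up values) showing that for points on the world plane Z = 0, the homography K·[r1 r2 T] reproduces the full projection:

```python
import numpy as np

# A 3x4 pose [R|T] maps homogeneous world points into the camera frame.
# Restricted to the world plane Z = 0 it collapses to a 3x3 homography
# built from the first two columns of R and the translation T.

K = np.array([[800.0, 0.0, 320.0],       # hypothetical intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

theta = np.deg2rad(10.0)                 # small rotation about the y axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([0.5, -0.2, 5.0])

H = K @ np.column_stack((R[:, 0], R[:, 1], T))   # plane-induced homography

# Sanity check: project a point on the plane both ways.
Xw = np.array([1.0, 2.0, 0.0])                   # lies on Z = 0
cam = R @ Xw + T
uv_full = (K @ cam)[:2] / (K @ cam)[2]           # full [R|T] projection
h = H @ np.array([Xw[0], Xw[1], 1.0])
uv_homog = h[:2] / h[2]                          # homography projection
print(np.allclose(uv_full, uv_homog))            # True
```

For general (non-planar) scenes there is no such homography, which is the hidden assumption the answer warns about.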

Input arguments of Python's cv2.calibrateCamera

廉价感情. submitted on 2019-12-18 13:31:52

Question: I get the following error when I try to calibrate a camera using cv2.calibrateCamera:

rms, camera_matrix, dist_coefs, rvecs, tvecs = cv2.calibrateCamera(pts3d, pts2d, self.imgsize, None, None)
cv2.error: /home/sarkar/opencv/opencv/modules/calib3d/src/calibration.cpp:2976: error: (-210) objectPoints should contain vector of vectors of points of type Point3f in function collectCalibrationData

I initially had nx3 and nx2 arrays for pts3d and pts2d. I then tried to reshape pts3d and pts2d in the…

Python: How to detect vertical and horizontal lines in an image with HoughLines in OpenCV?

寵の児 submitted on 2019-12-17 23:23:36

Question: I'm trying to obtain a threshold of the calibration chessboard. I can't detect the chessboard corners directly because there is some dust, as I am observing a micro chessboard. I tried several methods, and HoughLinesP seems to be the easiest approach. But the results are not good; how can I improve them?

import numpy as np
import cv2
img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
print img.shape[1]
print img.shape
minLineLength…

How to correctly calibrate my camera with a wide angle lens using OpenCV?

雨燕双飞 submitted on 2019-12-17 21:32:33

Question: I am trying to calibrate a camera with a fisheye lens. I therefore used the fisheye lens module, but I keep getting strange results no matter which distortion parameters I fix. This is the input image I use: https://i.imgur.com/apBuAwF.png, where the red circles indicate the corners I use to calibrate my camera. This is the best output I could get: https://imgur.com/a/XeXk5. I currently don't know the camera sensor dimensions by heart, but based on the focal length in pixels that is being…

Camera pose estimation (OpenCV PnP)

扶醉桌前 submitted on 2019-12-17 15:37:12

Question: I am trying to get a global pose estimate from an image of four fiducials with known global positions using my webcam. I have checked many Stack Exchange questions and a few papers, and I cannot seem to get a correct solution. The position numbers I do get out are repeatable, but in no way linearly proportional to camera movement. FYI, I am using C++ OpenCV 2.1. At this link my coordinate systems and the test data used below are pictured.

% Input to solvePnP():
imagePoints = [ 481, 831; % [x, y]…

Get 3D coordinates from 2D image pixel if extrinsic and intrinsic parameters are known

丶灬走出姿态 submitted on 2019-12-17 03:51:18

Question: I am doing camera calibration with the Tsai algorithm. I have the intrinsic and extrinsic matrices, but how can I reconstruct the 3D coordinates from that information?

1) I can use Gaussian elimination to find X, Y, Z, W, and then the points will be X/W, Y/W, Z/W as a homogeneous system.
2) I can use the OpenCV documentation approach: since I know u, v, R, t, I can compute X, Y, Z.

However, both methods end up with different results that are not correct. What am I doing wrong?

Answer 1: If you got the extrinsic parameters…
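The underlying issue is that a single pixel does not determine a 3D point: s·[u, v, 1]ᵀ = K(RX + t) leaves the scale s free, so the pixel only fixes a ray. The point is recoverable when an extra constraint is known, e.g. that it lies on a plane. A numpy sketch (K, R, t are made-up values; the plane Z = 0 is an assumption for the example):

```python
import numpy as np

# Intrinsics and extrinsics assumed known. A pixel fixes only a ray;
# intersecting that ray with a known world plane (here Z = 0) pins
# down the 3D point.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
theta = np.deg2rad(15.0)
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta), np.cos(theta)]])
t = np.array([0.2, -0.1, 6.0])

def pixel_to_plane(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                           # rotate into world frame
    origin = -R.T @ t                                   # camera centre in world
    s = -origin[2] / ray_world[2]                       # choose s so that Z = 0
    return origin + s * ray_world

# Round trip: project a plane point, then recover it from its pixel.
Xw = np.array([1.5, -0.8, 0.0])
cam = R @ Xw + t
uvw = K @ cam
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(np.allclose(pixel_to_plane(u, v), Xw))   # True
```

Without such a constraint (a plane, a known depth, or a second view for triangulation), no amount of Gaussian elimination will give a unique answer, which is why both methods in the question disagree.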

Creating a stereoParameters class in Matlab: what coordinate system should be used for the relative camera rotation parameter?

徘徊边缘 submitted on 2019-12-14 03:41:42

Question: stereoParameters takes two extrinsic parameters: RotationOfCamera2 and TranslationOfCamera2. The problem is that the documentation is not very detailed about what RotationOfCamera2 really means; it only says: "Rotation of camera 2 relative to camera 1, specified as a 3-by-3 matrix." What is the coordinate system in this case? A rotation matrix can be specified in any coordinate system. What exactly does "the coordinate system of camera 1" mean? What are its x, y, z axes? In other words…
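The convention can be checked numerically. In OpenCV's column-vector convention (Xc = R·Xw + t), the pose of camera 2 in camera 1's frame, whose axes are camera 1's image x, image y, and optical axis z, composes as below; the two absolute poses are made-up values for the sketch:

```python
import numpy as np

# Two absolute camera poses in the column-vector convention Xc = R @ Xw + t.
def rot_y(a):
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

R1, t1 = rot_y(0.1), np.array([0.0, 0.0, 5.0])
R2, t2 = rot_y(0.3), np.array([0.4, 0.0, 5.2])

# Pose of camera 2 expressed in camera 1's frame (x = camera 1's image x,
# y = image y, z = camera 1's optical axis):
R_rel = R2 @ R1.T
t_rel = t2 - R_rel @ t1

# Check: mapping a world point into camera 1 and then applying the
# relative transform matches mapping it straight into camera 2.
Xw = np.array([0.7, -0.3, 1.0])
x1 = R1 @ Xw + t1
x2_direct = R2 @ Xw + t2
x2_via_1 = R_rel @ x1 + t_rel
print(np.allclose(x2_direct, x2_via_1))   # True

# Note: MATLAB's documentation uses the row-vector convention
# (x2 = x1 * R + t), so its RotationOfCamera2 corresponds to the
# transpose of R_rel above.
```

The transpose between the two conventions is the classic trap when moving calibration results between OpenCV and MATLAB.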

MATLAB - What are the units of the Matlab Camera Calibration Toolbox?

故事扮演 submitted on 2019-12-14 03:36:00

Question: When showing the extrinsic parameters of a calibration (the 3D model including the camera position and the positions of the calibration checkerboards), the toolbox does not include units for the axes. It seemed logical to assume that they are in mm, but the z values displayed cannot possibly be correct if they are indeed in mm. I'm assuming that there is some transformation going on, perhaps having to do with optical coordinates and units, but I can't figure it out from the documentation. Has…

Rectifying images in OpenCV with intrinsic and extrinsic parameters already found

空扰寡人 submitted on 2019-12-13 21:17:21

Question: I ran Bouguet's calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html) in Matlab and have the parameters from the calibration (intrinsic [focal lengths and principal point offsets] and extrinsic [rotation and translation of the checkerboard with respect to the camera]). The feature coordinates of the checkerboard corners in my images are also known. I want to obtain rectified images so that I can build a disparity map (for which I have the code) from each pair…