camera-calibration

In a calibrated stereo-vision rig, how does one obtain the “camera matrices” needed for implementing a 3D triangulation algorithm?

Posted by 六月ゝ 毕业季﹏ on 2019-11-30 07:32:26
I am trying to implement the (relatively simple) linear homogeneous (DLT) 3D triangulation method from Hartley & Zisserman's "Multiple View Geometry" (sec. 12.2), with the aim of implementing their full "optimal algorithm" in the future. Right now, based on this question, I'm trying to get it to work in MATLAB, and will later port it to C++ and OpenCV, testing for conformity along the way. The problem is that I'm unsure how to use the data I have. I have calibrated my stereo rig and obtained the two intrinsic camera matrices, two vectors of distortion coefficients, the rotation matrix and
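For reference, the homogeneous DLT method from H&Z sec. 12.2 can be sketched in a few lines of NumPy. This is only a sketch with made-up intrinsics and a synthetic point; the function names `triangulate_dlt` and `project` are invented here, not part of any library:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear homogeneous (DLT) triangulation of one point pair
    (H&Z sec. 12.2): stack two rows per view, take the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vh = np.linalg.svd(A)
    X = Vh[-1]
    return X[:3] / X[3]   # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix, returning pixels."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check with made-up intrinsics and a 0.5-unit baseline.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])
X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the null vector of A recovers the point exactly (up to scale), which makes a good conformity test before porting.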

findChessboardCorners fails for calibration image

Posted by 蓝咒 on 2019-11-30 06:53:29
I am trying to get OpenCV 2.4.5 to recognize a checkerboard pattern from my webcam. I couldn't get that working, so I decided to try to get it working just using a "perfect" image, but it still won't work: patternFound returns false every time. Does anyone have any idea what I'm doing wrong?

#include <stdio.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main(){
    Size patternsize(8,8); //number of centers
    Mat frame = imread("perfect.png"); /

Determining extrinsic camera parameters with OpenCV for OpenGL, with a world-space object

Posted by 风流意气都作罢 on 2019-11-30 02:28:50
I'm using OpenCV and openFrameworks (i.e. OpenGL) to calculate a camera (world transform and projection matrices) from an image (and later, several images for triangulation). For the purposes of OpenCV, the "floor plan" becomes the object (i.e. the chessboard), with (0,0,0) the center of the world. The world/floor positions are known, so I need to get the projection information (distortion coefficients, FOV, etc.) and the extrinsic coordinates of the camera. I have mapped the view-positions of these floor-plan points onto my 2D image in normalised view-space ([0,0] is top-left, [1,1] is bottom-right).
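Two conversions usually come up in this setup: mapping the normalised view-space points back to pixels (OpenCV's calibration functions expect pixel coordinates), and turning the resulting OpenCV extrinsic into an OpenGL modelview matrix. A sketch, where the function names and the [0,1]-to-pixel convention are my own assumptions, not OpenCV API:

```python
import numpy as np

def normalized_to_pixels(pts_norm, width, height):
    """Map [0,1]x[0,1] view-space points (origin top-left) to pixel coords.
    Assumes [1,1] maps to the last pixel, (width-1, height-1)."""
    return np.asarray(pts_norm, dtype=float) * np.array([width - 1, height - 1])

def opencv_to_opengl_modelview(R, t):
    """Convert an OpenCV extrinsic (R, t) into a 4x4 OpenGL modelview matrix.
    OpenCV looks down +z with +y down; OpenGL looks down -z with +y up,
    so the y and z rows are negated."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.ravel(t)
    return np.diag([1., -1., -1., 1.]) @ M
```

The axis-flip convention varies between renderers, so it is worth verifying against a known object position before relying on it.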

Undistorting/rectifying images with OpenCV

Posted by 不羁岁月 on 2019-11-30 00:08:56
Question: I took the example code for calibrating a camera and undistorting images from this book: shop.oreilly.com/product/9780596516130.do As far as I understand, the usual camera calibration methods of OpenCV work perfectly for "normal" cameras. When it comes to fisheye lenses, though, we have to use a vector of 8 calibration parameters instead of 5, and also the flag CV_CALIB_RATIONAL_MODEL in the method cvCalibrateCamera2. At least, that's what it says in the OpenCV documentation. So, when I use this
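For reference, the 8-coefficient rational model that CV_CALIB_RATIONAL_MODEL enables can be written out directly. This sketch applies it in the forward direction to normalized pinhole coordinates, following OpenCV's coefficient ordering (k1, k2, p1, p2, k3, k4, k5, k6); the function name is mine:

```python
def distort_rational(xn, yn, k1, k2, p1, p2, k3, k4, k5, k6):
    """Apply the 8-coefficient rational distortion model to normalized
    (pinhole) coordinates: a rational radial term plus tangential terms."""
    r2 = xn * xn + yn * yn
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) \
           / (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    return xd, yd
```

With all eight coefficients zero the model reduces to the identity, which is an easy sanity check.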

Why does the focal length in the camera intrinsics matrix have two dimensions?

Posted by 我的梦境 on 2019-11-29 23:06:31
In the pinhole camera model there is only one focal length, between the principal point and the camera center. However, after calculating the camera's intrinsic parameters, the matrix contains (fx, 0, offsetx, 0; 0, fy, offsety, 0; 0, 0, 1, 0). Is this because the pixels of the image sensor are not square in x and y? Thank you. FvD: In short: yes. In order to make a mathematical model that can describe a camera with rectangular pixels, you have to introduce two separate focal lengths. I'll quote from the often-recommended "Learning OpenCV" (p. 373), which covers that section pretty well
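The two focal lengths are the one physical focal length expressed in two different units: horizontal pixels and vertical pixels. A sketch with made-up sensor numbers (a hypothetical 4 mm lens with non-square 6 um x 5 um pixels):

```python
import numpy as np

# Hypothetical sensor: 4 mm lens, 6 um x 5 um (non-square) pixels.
F_mm = 4.0
pixel_w_mm, pixel_h_mm = 0.006, 0.005
fx = F_mm / pixel_w_mm    # focal length in units of horizontal pixels
fy = F_mm / pixel_h_mm    # focal length in units of vertical pixels
cx, cy = 320.0, 240.0     # principal point (assumed near the image centre)
K = np.array([[fx, 0., cx],
              [0., fy, cy],
              [0., 0., 1.]])
```

Note that fx/fy equals the inverse of the pixel aspect ratio, so square pixels give fx = fy.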

Understanding of openCV undistortion

Posted by 牧云@^-^@ on 2019-11-29 21:16:33
Question: I'm receiving depth images from a ToF camera via MATLAB. The drivers delivered with the ToF camera to compute x,y,z coordinates out of the depth image use OpenCV functions, which are implemented in MATLAB via MEX files. But later on I can't use those drivers anymore, nor use OpenCV functions, therefore I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I already got hold of the camera parameters, and the computation of the x,y,z coordinates of
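A minimal sketch of such a mapping, assuming a simple two-coefficient radial model; the fixed-point inversion and the function names are my own, not the camera vendor's drivers:

```python
def undistort_normalized(xd, yd, k1, k2, iters=10):
    """Invert simple radial distortion (k1, k2) by fixed-point iteration:
    repeatedly divide the distorted point by the radial scale."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        scale = 1 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / scale, yd / scale
    return xn, yn

def depth_to_xyz(u, v, z, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Map a depth-image pixel (u, v) with depth z to camera-frame (x, y, z):
    normalize, undo radial distortion, then scale by the depth."""
    xn, yn = undistort_normalized((u - cx) / fx, (v - cy) / fy, k1, k2)
    return xn * z, yn * z, z
```

For mild distortion the fixed-point loop converges very quickly; a few iterations are usually enough.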

To calculate world coordinates from screen coordinates with OpenCV

Posted by 為{幸葍}努か on 2019-11-29 21:08:41
I have calculated the intrinsic and extrinsic parameters of the camera with OpenCV. Now I want to calculate world coordinates (x,y,z) from screen coordinates (u,v). How do I do this? N.B.: since I use the Kinect, I already know the z coordinate. Any help is much appreciated. Thanks! First, to understand how you calculate it, it would help if you read some things about the pinhole camera model and simple perspective projection. For a quick glimpse, check this. I'll try to update with more. So, let's start with the opposite, which describes how a camera works: project a 3D point in the world

Projection matrix from Fundamental matrix

Posted by 不羁的心 on 2019-11-29 15:39:13
Question: I have obtained the fundamental matrix between two cameras. I also have their internal parameters in a 3x3 matrix, which I had obtained earlier through a chessboard. Using the fundamental matrix, I have obtained P1 and P2 by P1 = [I | 0] and P2 = [[e']x * F | e']. These projection matrices are not really useful in getting the exact 3D location. Since I have the internal parameters K1 and K2, I changed P1 and P2 as P1 = K1 * [I | 0] and P2 = K2 * [[e']x * F | e']. Is this the right way to get
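For reference, the canonical pair P1 = [I | 0], P2 = [[e']x F | e'] can be computed by taking e' as the left null vector of F. A sketch; the helper names are mine:

```python
import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x such that skew(v) @ w == cross(v, w)."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def projections_from_F(F):
    """Canonical camera pair from a fundamental matrix:
    P1 = [I | 0], P2 = [[e']_x F | e'], where e' is the left epipole
    (the null vector of F^T, i.e. e'^T F = 0)."""
    _, _, Vh = np.linalg.svd(F.T)
    e2 = Vh[-1]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2
```

Note this pair is only determined up to a projective ambiguity; F alone cannot give a metric reconstruction.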

How to calculate extrinsic parameters of one camera relative to the second camera?

Posted by 痴心易碎 on 2019-11-29 10:14:24
I have calibrated 2 cameras with respect to some world coordinate system. I know the rotation matrix and translation vector for each of them relative to the world frame. From these matrices, how do I calculate the rotation matrix and translation vector of one camera with respect to the other? Any help or suggestion please. Thanks! First convert your rotation matrix into a rotation vector. Now you have two 3D vectors for each camera; call them A1, A2, B1, B2. You have all 4 of them with respect to some origin O. The rule you need is: A relative to B = (A relative to O) - (B relative to O). Apply that rule to
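An alternative formulation stays entirely in matrix form: if x_i = R_i @ X + t_i for each camera, then the transform from camera-1 coordinates to camera-2 coordinates is R = R2 @ R1.T and t = t2 - R @ t1. A sketch (function name mine):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Given world-to-camera extrinsics x_i = R_i @ X + t_i, return (R, t)
    mapping camera-1 coordinates into camera-2: x2 = R @ x1 + t."""
    R = R2 @ R1.T
    return R, t2 - R @ t1
```

This follows by substituting X = R1.T @ (x1 - t1) into x2 = R2 @ X + t2.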

OpenCV findChessboardCorners function is failing in a (apparently) simple scenario

Posted by こ雲淡風輕ζ on 2019-11-29 07:33:59
I'm trying to find the corners of a chessboard using OpenCV. The image I'm using contains two chessboards, but I'm interested only in a sub-region of one of those. The following image shows the original image. Using GIMP, I then selected the area of interest and set all the other pixels to a default value. I haven't actually cropped the image, because I've already calibrated the camera using this image size and I didn't want to change it. The operation should be equivalent to changing the values in the image matrix, but I preferred to do it with GIMP. It is a one-time experiment and it is