camera-calibration

findChessboardCorners fails for calibration image

China☆狼群 submitted on 2019-11-29 07:26:56
Question: I am trying to get OpenCV 2.4.5 to recognize a checkerboard pattern from my webcam. I couldn't get that working, so I decided to try to get it working just using a "perfect" image, but it still won't work: patternFound returns false every time. Does anyone have any idea what I'm doing wrong?

    #include <stdio.h>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/highgui/highgui.hpp>

    using namespace cv;
    using namespace …
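A common cause of this failure (a hedged guess, not the asker's confirmed bug) is passing the number of squares instead of the number of inner corners as the pattern size. A minimal Python sketch of the call, with the filename and pattern size as assumptions:

    import cv2

    img = cv2.imread('board.png')  # hypothetical "perfect" calibration image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # The pattern size counts INNER corners (cols, rows), not squares:
    # an 8x8-square checkerboard has a 7x7 corner grid.
    pattern_size = (7, 7)
    found, corners = cv2.findChessboardCorners(
        gray, pattern_size,
        flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)
    print('found:', found)

A related documented requirement: the detector expects a white border around the board, so a synthetic image cropped exactly to the squares can also fail.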

OpenCV fisheye calibration cuts too much of the resulting image

为君一笑 submitted on 2019-11-29 06:26:48
I am using OpenCV to calibrate images taken with cameras that have fish-eye lenses. The functions I am using are: findChessboardCorners(...) to find the corners of the calibration pattern; cornerSubPix(...) to refine the found corners; fisheye::calibrate(...) to estimate the camera matrix and the distortion coefficients; and fisheye::undistortImage(...) to undistort the images using the camera info obtained from calibration. While the resulting image does appear to look good (straight lines and so on), my issue is that the function cuts away too much of the image. This is a real problem, as I am …
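The cropping typically comes from the new camera matrix used during undistortion. A hedged sketch (K, D, and the filename below are placeholders, not the asker's values) of keeping more of the source image via the balance parameter of fisheye::estimateNewCameraMatrixForUndistortRectify:

    import cv2
    import numpy as np

    # K, D as returned by cv2.fisheye.calibrate; the values below are placeholders.
    K = np.array([[400.0, 0.0, 640.0], [0.0, 400.0, 360.0], [0.0, 0.0, 1.0]])
    D = np.array([[-0.05], [0.01], [0.0], [0.0]])
    img = cv2.imread('fisheye.jpg')          # hypothetical input image
    h, w = img.shape[:2]

    # balance in [0, 1]: 0 crops to the all-valid region, 1 keeps every source pixel.
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.8)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)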

In a calibrated stereo-vision rig, how does one obtain the “camera matrices” needed for implementing a 3D triangulation algorithm?

瘦欲@ submitted on 2019-11-29 03:50:31
Question: I am trying to implement the (relatively simple) linear homogeneous (DLT) 3D triangulation method from Hartley & Zisserman's "Multiple View Geometry" (sec. 12.2), with the aim of implementing their full "optimal algorithm" in the future. Right now, based on this question, I'm trying to get it to work in Matlab; I will later port it to C++ and OpenCV, testing for conformity along the way. The problem is that I'm unsure how to use the data I have. I have calibrated my stereo rig, and …
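Under the usual reading of OpenCV's stereoCalibrate output (R, T is the pose of the second camera relative to the first), the H&Z "camera matrices" are P1 = K1[I|0] and P2 = K2[R|T]. A sketch with placeholder calibration values:

    import cv2
    import numpy as np

    # K1, K2, R, T as returned by cv2.stereoCalibrate; placeholders below.
    K1 = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
    K2 = K1.copy()
    R = np.eye(3)
    T = np.array([[-60.0], [0.0], [0.0]])    # baseline, in calibration units

    # Camera matrices in the H&Z sense, with camera 1 at the world origin.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, T])

    # Homogeneous DLT triangulation of matched pixels (2xN arrays).
    pts1 = np.array([[320.0], [240.0]])
    pts2 = np.array([[310.0], [240.0]])
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    X = (X_h[:3] / X_h[3]).T                 # Euclidean 3D point(s)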

How can I estimate the camera pose from 3D-to-2D point correspondences (using OpenCV)?

杀马特。学长 韩版系。学妹 submitted on 2019-11-29 03:12:24
Question: Hello, my goal is to develop head-tracking functionality to be used in an aircraft (simulator) cockpit, in order to provide AR that supports civilian pilots in landing and flying in bad visual conditions. My approach is to detect characteristic points (LEDs, in the dark simulator) whose 3D coordinates I know, and then compute the estimated pose [R|t] (rotation concatenated with translation) of the head-worn camera. The problem I have is that the estimated pose always seems to be wrong, and a …
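The standard OpenCV tool for 3D-to-2D pose estimation is solvePnP. A minimal sketch, with the LED coordinates, image detections, and intrinsics all invented for illustration:

    import cv2
    import numpy as np

    # Known 3D LED positions (object frame, e.g. in mm) and their detected
    # 2D image positions; every value below is illustrative only.
    object_points = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]],
                             dtype=np.float64)
    image_points = np.array([[320, 240], [420, 238], [322, 140], [310, 230]],
                            dtype=np.float64)
    K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                       # assuming negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)

    # Convention note: [R|t] maps OBJECT coordinates into the CAMERA frame;
    # the camera's position in the object frame is -R.T @ tvec. Mixing up
    # these two conventions is a frequent cause of "always wrong" poses.
    camera_position = -R.T @ tvec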

Determine extrinsic camera parameters with OpenCV, for OpenGL, with a world-space object

99封情书 submitted on 2019-11-28 23:25:43
Question: I'm using OpenCV and openFrameworks (i.e. OpenGL) to calculate a camera (world transform and projection matrices) from an image (and later, several images for triangulation). For the purposes of OpenCV, the "floor plan" becomes the object (i.e. the chessboard), with (0,0,0) the center of the world. The world/floor positions are known, so I need to get the projection information (distortion coefficients, FOV, etc.) and the extrinsic coordinates of the camera. I have mapped the view positions of these …
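OpenCV and OpenGL use different camera conventions (OpenCV looks down +Z with +Y down; OpenGL looks down -Z with +Y up), so the extrinsics need an axis flip before use as a modelview matrix. A sketch, with rvec/tvec as placeholders for solvePnP or calibrateCamera output:

    import cv2
    import numpy as np

    # rvec, tvec from cv2.solvePnP / cv2.calibrateCamera; placeholders here.
    rvec = np.array([0.1, -0.2, 0.05])
    tvec = np.array([[10.0], [5.0], [300.0]])

    R, _ = cv2.Rodrigues(rvec)
    view = np.eye(4)
    view[:3, :3] = R
    view[:3, 3:] = tvec

    # Flip the Y and Z rows to convert from OpenCV's to OpenGL's camera frame.
    modelview = np.diag([1.0, -1.0, -1.0, 1.0]) @ view
    modelview_gl = modelview.T.flatten()     # OpenGL expects column-major order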

Augmented Reality OpenGL+OpenCV

陌路散爱 submitted on 2019-11-28 20:57:21
I am very new to OpenCV, with limited experience of OpenGL. I want to overlay a 3D object on a calibrated image of a checkerboard. Any tips or guidance? nkint: The basic idea is that you have two cameras: the physical one (the one you are retrieving images from with OpenCV) and the OpenGL one. You have to align those two. To do that, you need to calibrate the physical camera. First, you need the distortion parameters (because every lens has more or less some optical distortion), and with those parameters you build the so-called intrinsic parameters. You do this with …
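For the OpenGL side, the calibrated intrinsics have to be turned into a projection matrix. One commonly used mapping is sketched below; sign and row conventions vary between sources, so treat this as an assumption to check against your setup, not as the answerer's exact method:

    import numpy as np

    # Intrinsics from cv2.calibrateCamera; fx, fy, cx, cy, w, h are placeholders.
    fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0
    w, h = 640, 480
    near, far = 0.1, 1000.0

    # Pinhole intrinsics -> OpenGL-style projection (y flipped so the image
    # origin moves from top-left to OpenGL's bottom-left).
    proj = np.array([
        [2*fx/w, 0.0,     1 - 2*cx/w,                   0.0],
        [0.0,    2*fy/h,  2*cy/h - 1,                   0.0],
        [0.0,    0.0,    -(far + near) / (far - near), -2*far*near / (far - near)],
        [0.0,    0.0,    -1.0,                          0.0],
    ])
    proj_gl = proj.T.flatten()               # column-major for glLoadMatrix-style APIs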

How to verify that the camera calibration is correct? (or how to estimate the error of reprojection)

自闭症网瘾萝莉.ら submitted on 2019-11-28 20:39:23
Question: The quality of a calibration is measured by the reprojection error (is there an alternative?), which requires knowledge of the world coordinates of some 3D point(s). Is there a simple way to produce such known points? Is there a way to verify the calibration in some other way? (For example, Zhang's calibration method only requires that the calibration object be planar, and the geometry of the system need not be known.) Answer 1: You can verify the accuracy of the estimated nonlinear lens distortion …
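For the reprojection-error side, OpenCV's calibrateCamera already returns everything needed: reproject the model points with the estimated poses and intrinsics and compare against the detected corners. A sketch of the standard RMS computation (the variable names are assumptions about how the calibration data is stored):

    import cv2
    import numpy as np

    # obj_points / img_points are the per-view lists fed to cv2.calibrateCamera;
    # K, dist, rvecs, tvecs are its outputs.
    def mean_reprojection_error(obj_points, img_points, rvecs, tvecs, K, dist):
        total_sq_err, total_pts = 0.0, 0
        for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
            proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
            err = cv2.norm(imgp, proj, cv2.NORM_L2)
            total_sq_err += err * err
            total_pts += len(objp)
        return np.sqrt(total_sq_err / total_pts)  # RMS error in pixels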

Kinect intrinsic parameters from field of view

こ雲淡風輕ζ submitted on 2019-11-28 19:42:05
Microsoft states that the field-of-view angles for the Kinect are 43 degrees vertical and 57 degrees horizontal (stated here). Given these, can we calculate the intrinsic parameters, i.e. the focal length and the centre of projection? I assume the centre of projection can be given as (0,0,0)? Thanks. EDIT: some more information on what I'm trying to do. I have a dataset of images recorded with a Kinect, and I am trying to convert pixel positions (x_screen, y_screen and z_world (in mm)) to real-world coordinates. If I know the camera is placed at point (x',y',z') in the real-world coordinate system, is it sufficient to …
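Under a pinhole model, half the image width subtends half the horizontal FOV, so fx = (w/2)/tan(FOV_h/2), and likewise for fy. A sketch (the 640x480 resolution and the centred principal point are assumptions):

    import numpy as np

    w, h = 640, 480                          # assumed Kinect image resolution
    fov_h, fov_v = np.radians(57.0), np.radians(43.0)

    fx = (w / 2) / np.tan(fov_h / 2)         # ~589 px for these numbers
    fy = (h / 2) / np.tan(fov_v / 2)         # ~609 px
    cx, cy = w / 2, h / 2                    # principal point assumed central

    K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])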

OpenCV extrinsic camera from feature points

浪尽此生 submitted on 2019-11-28 19:41:54
How do I retrieve the rotation matrix, the translation vector and maybe some scaling factors of each camera using OpenCV, when I have pictures of an object from the view of each of these cameras? For every picture I have the image coordinates of several feature points; not all feature points are visible in all of the pictures. I want to map the computed 3D coordinates of the feature points of the object onto a slightly different object, to align the shape of the second object with the first. I heard it is possible using cv::calibrateCamera(...), but I can't quite get through it... Does someone …
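With only feature-point matches and no calibration target, one common route (a different technique from the cv::calibrateCamera the asker mentions) is the essential matrix: it yields R and a translation known only up to scale, matching the "maybe some scaling factors" above. A synthetic sketch:

    import cv2
    import numpy as np

    K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

    # Synthetic scene: 3D points in front of camera 1; camera 2 is shifted +x.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (30, 3)) + np.array([0.0, 0.0, 5.0])
    t_true = np.array([0.5, 0.0, 0.0])

    def project(points, K):                  # pinhole projection, no distortion
        uvw = (K @ points.T).T
        return uvw[:, :2] / uvw[:, 2:]

    pts1 = project(X, K)
    pts2 = project(X - t_true, K)            # same points in camera 2's frame

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # R should be ~identity and t ~[-1, 0, 0]: unit length, since the true
    # scale of the translation is unrecoverable from image matches alone.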

Python: How to detect vertical and horizontal lines in an image with HoughLines in OpenCV?

送分小仙女□ submitted on 2019-11-28 19:41:16
I'm trying to obtain a threshold of the calibration chessboard. I can't detect the chessboard corners directly because there is some dust (I am observing a micro-chessboard). I tried several methods, and HoughLinesP seems to be the easiest approach, but the results are not good. How can I improve them?

    import numpy as np
    import cv2

    img = cv2.imread('lines.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)
    print img.shape[1]
    print img.shape
    minLineLength = 100
    lines = cv2.HoughLinesP(image=edges, rho=0.02, theta=np.pi/500,
                            threshold=10, lines=np.array([]), …
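One thing that stands out in the snippet: rho=0.02 pixels and theta=pi/500 make the accumulator far finer than usual, spreading votes too thinly. A sketch using the conventional 1-pixel / 1-degree resolution, then filtering segments by angle to keep only the near-horizontal and near-vertical ones (the thresholds are assumptions to tune):

    import numpy as np
    import cv2

    img = cv2.imread('lines.jpg')            # same input as in the question
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)

    # Conventional accumulator resolution: 1 pixel and 1 degree.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
            # Keep only segments within 10 degrees of horizontal or vertical.
            if angle < 10 or angle > 170 or abs(angle - 90) < 10:
                cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

    cv2.imwrite('lines_detected.jpg', img)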