camera-calibration

OpenCV calibrateCamera() Assertion failed

荒凉一梦 submitted on 2019-12-03 21:40:57
I have been trying to calibrate my camera for quite a while using the OpenCV calibrateCamera() function. I have followed the same procedure as described in the OpenCV sample program. I first load 10 images of a 9 x 6 chessboard, then find the chessboard corners. If corners are found, their pixel locations are stored in vector< vector < Point2f>> ImagePoints. After doing this for all images, the runCalibrationAndSave part is executed. In runCalibrationAndSave, the runCalibration part is executed first, where ObjectPoints (of type vector< vector < Point3f>>) are filled with the corners' real coordinate
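
Below is a minimal sketch of the workflow described in the question, assuming 9 x 6 inner corners and a placeholder list `image_files` of chessboard photos. A common cause of the assertion is that the object-point and image-point lists end up with different lengths or the wrong element types.

```python
import cv2
import numpy as np

board_size = (9, 6)          # inner corners per row/column, as in the question
square_size = 1.0            # arbitrary units; only scales the extrinsics

# One template of the board's real-world corner coordinates (z = 0 plane).
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

object_points = []   # 3-D points, one entry per successfully detected image
image_points = []    # 2-D pixel corners, kept in lockstep with object_points

for fname in image_files:                      # image_files: placeholder list of the 10 photos
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        object_points.append(objp)             # appended only when corners were found
        image_points.append(corners)

# calibrateCamera asserts if the two lists are empty, differ in length,
# or the per-view arrays are not float32 of matching sizes.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
```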

Find distorted rectangle in image (OpenCV)

≯℡__Kan透↙ submitted on 2019-12-03 17:39:43
Question: I am looking for the right set of algorithms to solve this image processing problem: I have a distorted binary image containing a distorted rectangle, and I need to find a good approximation of its 4 corner points. I can calculate the contour using OpenCV, but as the image is distorted it will often contain more than 4 corner points. Is there a good approximation algorithm (preferably using OpenCV operations) to find the rectangle corner points using the binary image or the
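
One common approximation (not necessarily what the asker ended up with) is to take the largest contour and either simplify it to four vertices with approxPolyDP or fall back to the minimum-area bounding rectangle. A sketch, with `mask.png` as a placeholder input path:

```python
import cv2
import numpy as np

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # placeholder path to the binary image
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]  # [-2] works on OpenCV 3 and 4
contour = max(contours, key=cv2.contourArea)              # assume the rectangle is the largest blob

# Option 1: simplify the contour until roughly 4 vertices remain.
peri = cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, 0.02 * peri, True)     # 2% tolerance is a common starting point
if len(approx) == 4:
    corners = approx.reshape(4, 2)
else:
    # Option 2: fall back to the minimum-area bounding rectangle.
    rect = cv2.minAreaRect(contour)
    corners = cv2.boxPoints(rect)                         # 4x2 array of corner coordinates
```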

Calibrate camera with OpenCV: how does it work and how do I have to move my chessboard

百般思念 submitted on 2019-12-03 15:20:46
I'm using the OpenCV calibrateCamera function to calibrate my camera. I started from the tutorial implementation, but something seems wrong. The camera is looking down on a table and I use a chessboard with an area that covers about 1/2 or 1/4 of my total image. Since I aim to track a flat object that slides over this table, I also slide my chessboard over the table. So my first question is: is it OK that I move my chessboard over this table? Or do I have to make some 3D movements in order to get a good result? Because I was wondering: how does the function guess the distance

Camera homography

眉间皱痕 submitted on 2019-12-03 15:13:53
I am learning camera matrix stuff. I already know that I can get the homography of the camera (a 3*3 matrix) by using four points in a plane in object space. I want to know if we can get the homography with four points that are not in a plane? If yes, how can I get the matrix? What formulas should I look at? I am also confusing homography with another concept: I only need three points if I want to convert points from one coordinate system to another. So why do we need four points to compute a homography? A homography maps points 1. on a plane to points on another plane 2. projections of
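
For reference, a homography is only defined between two planes, which is why four coplanar correspondences (two equations each, against 8 degrees of freedom) are the minimum. A small sketch with made-up coordinates:

```python
import cv2
import numpy as np

# Four coplanar points in the object plane (e.g. a board measured in cm) and
# their pixel locations in the image; the numbers are made up for illustration.
obj_pts = np.float32([[0, 0], [30, 0], [30, 20], [0, 20]])
img_pts = np.float32([[112, 85], [410, 98], [398, 312], [105, 297]])

# A 3x3 homography has 8 degrees of freedom (9 entries up to scale), and each
# point correspondence gives 2 equations, hence at least 4 points are needed.
H, _ = cv2.findHomography(obj_pts, img_pts)

# Map a new point from the object plane into the image.
p = np.float32([[[15, 10]]])                 # shape (1, 1, 2), as perspectiveTransform expects
p_img = cv2.perspectiveTransform(p, H)
```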

Extrinsic Calibration With cv::SolvePnP

你说的曾经没有我的故事 submitted on 2019-12-03 15:05:30
I'm currently trying to implement an alternate method to webcam-based AR using an external tracking system. I have everything in my environment configured save for the extrinsic calibration. I decided to use cv::solvePnP() as it supposedly does pretty much exactly what I want, but after two weeks I am pulling my hair out trying to get it to work. A diagram below shows my configuration: c1 is my camera, c2 is the optical tracker I'm using, M is the tracked marker attached to the camera, and ch is the checkerboard. As it stands I pass in my image pixel coordinates acquired with cv:
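
For context, a typical solvePnP call with checkerboard corners looks like the sketch below; `object_pts`, `image_pts`, `K` and `dist` are placeholders for the asker's own data, and the resulting [R|t] is the board-to-camera transform that the extrinsic chain would build on:

```python
import cv2
import numpy as np

# object_pts: Nx3 checkerboard corner coordinates in the board frame (z = 0)
# image_pts:  Nx2 pixel coordinates of the same corners, in the same order
# K, dist:    intrinsics and distortion from a previous calibrateCamera run
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation: board frame -> camera frame

# [R | t] maps board coordinates into camera c1's frame; chaining this with the
# tracker-reported pose of marker M is what the extrinsic calibration needs.
T_board_to_cam = np.eye(4)
T_board_to_cam[:3, :3] = R
T_board_to_cam[:3, 3] = tvec.ravel()
```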

Meaning of the retval return value in cv2.calibrateCamera

陌路散爱 submitted on 2019-12-03 14:12:59
Question: As the title says, my question is about a return value given by the calibrateCamera function from OpenCV. http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html I have a functional implementation in Python to find the intrinsic parameters and the distortion coefficients of a camera using a black-and-white grid. The question is more about the retval returned by the function. If I understood correctly, it is "the average re-projection error. This number gives a
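
The returned value is the overall RMS re-projection error in pixels. The sketch below re-derives it with projectPoints, assuming the usual `object_points`, `image_points` and `image_size` inputs from the calibration run are available:

```python
import cv2
import numpy as np

# retval of cv2.calibrateCamera is the root-mean-square reprojection error over
# all corners in all views, in pixels; we can reproduce it by hand.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(object_points, image_points,
                                                 image_size, None, None)

sq_err_sum, n_points = 0.0, 0
for objp, imgp, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
    projected, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
    sq_err_sum += np.sum((imgp.reshape(-1, 2) - projected.reshape(-1, 2)) ** 2)
    n_points += len(objp)

manual_rms = np.sqrt(sq_err_sum / n_points)   # should closely match `rms`
```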

How can I get the camera projection matrix out of calibrateCamera() return values

99封情书 submitted on 2019-12-03 12:42:03
I am trying to get a 3x4 camera matrix for the triangulation process, but calibrateCamera() returns only 3x3 and 4x1 matrices. How can I get the 3x4 matrix out of those? Thanks in advance!! calibrateCamera() returns a 3x3 matrix as cameraMatrix, a 4x1 matrix as distCoeffs, and rvecs and tvecs, which are vectors of 3x1 rotation (R) and 3x1 translation (t) vectors. What you want is the projection matrix, which is [cameraMatrix] multiplied by [R|t]; that gives you a 3x4 ProjectionMatrix. You can read the OpenCV documentation for more info. If you are using calibrateCamera(), you must be getting
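
In code, building P = K [R|t] from the calibrateCamera outputs looks roughly like this (index 0 is just an example view):

```python
import cv2
import numpy as np

# K, rvecs, tvecs are assumed to be the calibrateCamera outputs mentioned above.
i = 0
R, _ = cv2.Rodrigues(rvecs[i])               # 3x1 rotation vector -> 3x3 matrix
Rt = np.hstack((R, tvecs[i].reshape(3, 1)))  # 3x4 [R | t]
P = K @ Rt                                   # 3x4 projection matrix for view i

# P projects homogeneous points (in that view's board frame) to pixels:
X = np.array([0.0, 0.0, 0.0, 1.0])           # the board corner at the origin
x = P @ X
pixel = x[:2] / x[2]
```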

Calibration of images to obtain a top-view for points that lie on a same plane

半腔热情 submitted on 2019-12-03 12:28:17
Calibration: I have calibrated the camera using this vision toolbox in Matlab. I used checkerboard images to do so. After calibration I get the cameraParams, which contain the camera extrinsics (RotationMatrices: [3x3x18 double], TranslationVectors: [18x3 double]) and the camera intrinsics (IntrinsicMatrix: [3x3 double], FocalLength: [1.0446e+03 1.0428e+03], PrincipalPoint: [604.1474 359.7477], Skew: 3.5436). Aim: I have recorded trajectories of some objects in motion using this camera. Each object corresponds to a single point in a frame. Now, I want to project the points such that I get a top view. Note all
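
One way to get the top view, sketched here in Python/NumPy rather than Matlab but using the same math: for points on the table plane (z = 0 in the board frame), H = K [r1 r2 t] maps plane coordinates to pixels, so its inverse sends the recorded pixel trajectories back onto the plane. `K`, `R` and `t` are assumed to come from one calibration view whose board lay flat on the table:

```python
import numpy as np

# K: 3x3 intrinsic matrix, R: 3x3 rotation, t: length-3 translation for a view
# whose checkerboard lies on the table plane (placeholders for the toolbox output).
H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # plane -> pixel homography
H_inv = np.linalg.inv(H)                         # pixel -> plane (top view)

def pixel_to_plane(u, v):
    p = H_inv @ np.array([u, v, 1.0])
    return p[:2] / p[2]      # (X, Y) on the table plane, in board units
```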

Fisheye/Wide-Angle lens Calibration in OpenCV

▼魔方 西西 submitted on 2019-12-03 12:18:27
Question: I know the default OpenCV calibration routines model a pinhole camera, but I'm working with a system using an extremely wide-FOV lens (187 degrees). Is there any existing way to do this in OpenCV, or to work with such wide lenses? Or will I have to rewrite all the calibration/undistortion for my system? Answer 1: There seems to be no good OpenCV way to do this. I wound up using OCamLib to do the actual calibration, then writing my own "undistortPoints" function (using Scaramuzza's algorithms) to undistort 2D
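
Newer OpenCV releases (3.x and later) also ship a cv2.fisheye module implementing the Kannala-Brandt model, which this answer does not mention and which may still be inadequate beyond a 180-degree FOV. A rough sketch of how it is typically called, with `object_points`, `image_points`, `image_size` and `points` as placeholders:

```python
import cv2
import numpy as np

# object_points / image_points gathered as in a normal chessboard calibration,
# but the fisheye module expects each view's arrays shaped (1, N, 3) and (1, N, 2).
K = np.zeros((3, 3))
D = np.zeros((4, 1))          # the fisheye model uses 4 distortion coefficients
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW

rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    object_points, image_points, image_size, K, D, flags=flags,
    criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6))

# Undistort pixel coordinates with the fitted model.
# points: (N, 1, 2) float32 pixel coordinates; P=K keeps the result in pixel units.
undistorted = cv2.fisheye.undistortPoints(points, K, D, P=K)
```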

Camera Calibration: How to do it right

别说谁变了你拦得住时间么 submitted on 2019-12-03 10:08:50
Question: I am trying to calibrate a camera using a checkerboard by the well-known Zhang's method followed by bundle adjustment, which is available in both Matlab and OpenCV. There are a lot of empirical guidelines, but from my personal experience the accuracy is pretty random. It can sometimes be really good but also sometimes really bad. The result can actually vary quite a bit just by placing the checkerboard at different locations. Suppose the target camera is rectilinear with a 110-degree
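
One sanity check that often explains the "sometimes really bad" runs is to look at how much the board orientation actually varies across the views, since Zhang's method needs well-spread tilts. A sketch assuming `rvecs` from a calibrateCamera run:

```python
import cv2
import numpy as np

# If all tilts cluster near the same value (e.g. always fronto-parallel), the
# intrinsics, especially focal length versus distance, are poorly constrained
# and the calibration result will swing from run to run.
for i, rvec in enumerate(rvecs):
    R, _ = cv2.Rodrigues(rvec)
    board_normal_cam = R[:, 2]                 # board z-axis expressed in the camera frame
    tilt = np.degrees(np.arccos(abs(board_normal_cam[2])))
    print(f"view {i}: board tilted {tilt:.1f} deg from fronto-parallel")
```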