camera-calibration

Re-distort points with camera intrinsics/extrinsics

情到浓时终转凉″ submitted on 2019-12-02 23:53:16
Given a set of 2D points, how can I apply the opposite of undistortPoints? I have the camera intrinsics and distCoeffs and would like to (for example) create a square and distort it as if the camera had viewed it through the lens. I have found a 'distort' patch here: http://code.opencv.org/issues/1387 but it would seem this is only good for images, whereas I want to work on sparse points.

morotspaj: This question is rather old, but since I ended up here from a Google search without seeing a neat answer I decided to answer it anyway. There is a function called projectPoints that does exactly this. …
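
A minimal sketch of that approach with made-up intrinsics: invert K by hand to lift the ideal pixels onto the z = 1 plane, then cv::projectPoints with a zero rotation and translation applies the distortion model on the way back to pixels.

    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/core/core.hpp>
    #include <vector>

    int main()
    {
        // Hypothetical intrinsics; substitute your calibration results.
        double fx = 800, fy = 800, cx = 320, cy = 240;
        cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fy, cy, 0, 0, 1);
        cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) << -0.2, 0.05, 0, 0, 0);

        // A square in ideal (undistorted) pixel coordinates.
        std::vector<cv::Point2f> square = { {100, 100}, {540, 100},
                                            {540, 380}, {100, 380} };

        // Lift to normalized camera coordinates on the z = 1 plane.
        std::vector<cv::Point3f> rays;
        for (const cv::Point2f& p : square)
            rays.push_back(cv::Point3f((p.x - cx) / fx, (p.y - cy) / fy, 1.0));

        // Zero pose: the points are already expressed in the camera frame,
        // so projectPoints only applies distortion and the camera matrix.
        cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
        cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
        std::vector<cv::Point2f> distorted;
        cv::projectPoints(rays, rvec, tvec, K, distCoeffs, distorted);
        return 0;
    }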

3D reconstruction from two calibrated cameras - where is the error in this pipeline?

雨燕双飞 submitted on 2019-12-02 19:42:18
There are many posts about 3D reconstruction from stereo views of known internal calibration, some of which are excellent. I have read a lot of them, and based on what I have read I am trying to compute my own 3D scene reconstruction with the pipeline/algorithm below. I'll set out the method, then ask specific questions at the bottom. 0. Calibrate your cameras: This means retrieving the camera calibration matrices K1 and K2 for Camera 1 and Camera 2. These are 3x3 matrices encapsulating each camera's internal parameters: focal length and principal point offset / image centre. These don't change …
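
For concreteness, a minimal sketch (hypothetical values) of what step 0 produces; cv::calibrateCamera returns each camera's matrix in exactly this layout.

    #include <opencv2/core/core.hpp>

    int main()
    {
        // Hypothetical internal parameters for Camera 1: focal lengths
        // fx, fy in pixels, principal point (cx, cy). cv::calibrateCamera
        // returns K in exactly this 3x3 layout.
        double fx = 1200, fy = 1200, cx = 640, cy = 360;
        cv::Mat K1 = (cv::Mat_<double>(3, 3) << fx, 0,  cx,
                                                0,  fy, cy,
                                                0,  0,  1);
        // K2 is assembled the same way from Camera 2's calibration.
        return 0;
    }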

create opencv camera matrix for iPhone 5 solvepnp

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-02 17:46:11
I am developing an application for the iPhone using OpenCV. I have to use the method solvePnPRansac: http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html For this method I need to provide a camera matrix:

    | fx   0  cx |
    |  0  fy  cy |
    |  0   0   1 |

where cx and cy represent the pixel position of the image centre and fx and fy represent the focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is. I checked another …
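
A common conversion, sketched with assumed numbers: a focal length in millimetres becomes pixels via fx = f_mm * image_width_px / sensor_width_mm. The sensor width below (~4.54 mm for the iPhone 5's 1/3.2" sensor) is an assumption to verify against the device specs.

    #include <opencv2/core/core.hpp>

    int main()
    {
        // Spec-sheet focal length and assumed sensor/image geometry for
        // an iPhone 5 (8 MP, 3264 x 2448; sensor width ~4.54 mm assumed).
        double f_mm = 4.1;
        double sensorWidthMm = 4.54;   // assumption: verify for your device
        double imageWidthPx = 3264, imageHeightPx = 2448;

        double fx = f_mm * imageWidthPx / sensorWidthMm;  // ~2950 px
        double fy = fx;                           // square pixels assumed
        double cx = imageWidthPx / 2.0, cy = imageHeightPx / 2.0;

        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << fx, 0,  cx,
                                                          0,  fy, cy,
                                                          0,  0,  1);
        return 0;
    }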

What are the main references to the fish-eye camera model in OpenCV3.0.0dev?

。_饼干妹妹 submitted on 2019-12-02 17:39:38
I am wrestling with the fish-eye camera model used in OpenCV 3.0.0.dev. I have read the documentation at this link several times, especially the "Detailed Description" part and the formulas modeling fish-eye distortion. By now I have two concerns: Based on the projection models listed here and their conceptual explanations in "Accuracy of Fish-Eye Lens Models" by Hughes, I can't figure out which projection model has been used in the OpenCV implementation. Since the description is so concise, I need to know the main reference papers used by the OpenCV developers for implementing the fish-eye namespace, …
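
Not a substitute for the reference papers, but for orientation while reading the docs: with all distortion coefficients at zero, the documented mapping reduces to the equidistant projection r = f * theta. A sketch of the radial polynomial as the fisheye documentation writes it:

    #include <cmath>

    // Radial part of the cv::fisheye model per the documentation; theta is
    // the angle between the incoming ray and the optical axis:
    //   theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
    double fisheyeTheta(double theta, double k1, double k2, double k3, double k4)
    {
        double t2 = theta * theta;
        return theta * (1 + t2 * (k1 + t2 * (k2 + t2 * (k3 + t2 * k4))));
    }
    // With k1..k4 = 0 this collapses to theta_d = theta, i.e. the
    // equidistant projection r = f * theta once the focal length is applied.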

Does a smaller reprojection error always mean better calibration?

試著忘記壹切 submitted on 2019-12-02 16:48:59
During camera calibration, the usual advice is to use many images (>10) with variations in pose, depth, etc. However, I notice that usually the fewer images I use, the smaller the reprojection error. For example, with 27 images cv::calibrateCamera returns 0.23, and with just 3 I get 0.11. This may be due to the fact that during calibration we are solving a least-squares problem for an overdetermined system. QUESTIONS: Do we actually use the reprojection error as an absolute measure of how good a calibration is? For example, if I calibrate with 3 images and get 0.11, and then calibrate with 27 …
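
For what it's worth, the value cv::calibrateCamera returns is the RMS reprojection error pooled over all views. A sketch that recomputes it per view, so runs with different image counts can be compared (container names are placeholders):

    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/core/core.hpp>
    #include <cmath>
    #include <vector>

    // RMS reprojection error for a single view, from the containers passed
    // to / returned by cv::calibrateCamera.
    double viewRms(const std::vector<cv::Point3f>& objPts,
                   const std::vector<cv::Point2f>& imgPts,
                   const cv::Mat& rvec, const cv::Mat& tvec,
                   const cv::Mat& K, const cv::Mat& distCoeffs)
    {
        std::vector<cv::Point2f> reproj;
        cv::projectPoints(objPts, rvec, tvec, K, distCoeffs, reproj);
        double err = cv::norm(imgPts, reproj, cv::NORM_L2); // sqrt of summed squares
        return std::sqrt(err * err / reproj.size());
    }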

OpenCV get topdown view of planar pattern by using intrinsic and extrinsic from cameraCalibrate

筅森魡賤 submitted on 2019-12-02 11:19:21
Originally I have an image with a perfect circle grid, denoted as A. I add some lens distortion and a perspective transformation to it, and it becomes B. In camera calibration, A would be my destination image and B would be my source image. Let's say I have all the circle center coordinates in both images, stored in stdPts and disPts.

    // 25 center pts in A (note: the vector must be sized before indexing)
    vector<Point2f> stdPts(25);
    for (int i = 0; i <= 4; ++i) {
        for (int j = 0; j <= 4; ++j) {
            stdPts[i * 5 + j].x = 250 + i * 500;
            stdPts[i * 5 + j].y = 200 + j * 400;
        }
    }
    // 25 center pts in B
    vector<Point2f> disPts = FindCircleCenter();

I want to …
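
One way to get from B back to a top-down view matching A, sketched under the assumption that intrinsics and distortion coefficients are already available from cv::calibrateCamera: undistort first, then a plain homography handles the perspective.

    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // From distorted B to a top-down view matching A: undistort, then warp.
    // K and distCoeffs are assumed to come from a prior calibration.
    cv::Mat topDownView(const cv::Mat& B,
                        const std::vector<cv::Point2f>& disPts,
                        const std::vector<cv::Point2f>& stdPts,
                        const cv::Mat& K, const cv::Mat& distCoeffs,
                        const cv::Size& outSize)
    {
        // Undistort the sparse centers; passing K as P keeps pixel units.
        std::vector<cv::Point2f> undistortedPts;
        cv::undistortPoints(disPts, undistortedPts, K, distCoeffs,
                            cv::noArray(), K);

        // Homography from the undistorted centers to the ideal grid in A.
        cv::Mat H = cv::findHomography(undistortedPts, stdPts, cv::RANSAC);

        // Apply the same two steps to the whole image.
        cv::Mat undistortedImg, topdown;
        cv::undistort(B, undistortedImg, K, distCoeffs);
        cv::warpPerspective(undistortedImg, topdown, H, outSize);
        return topdown;
    }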

Improve cvFindChessboardCorners

廉价感情. submitted on 2019-12-02 07:46:55
Unfortunately, I was not able to find any solution to my question. What I'm trying to do is to improve the results of the OpenCV method cvFindChessboardCorners in order to achieve a better camera calibration, because I think that this is the reason why I get poor results undistorting/rectifying images, as in my question before (Question: Undistorting/rectify images with OpenCV). So, what I want to know is how I can improve the algorithm so that it detects the chessboard corners in all images at this link: http://abload.de/img/cvfindchessboardcorneoxs73.jpg
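
A sketch of the usual remedies, using the C++ wrapper cv::findChessboardCorners (same algorithm as the C function): enable adaptive thresholding and image normalization, then refine whatever is found with cornerSubPix. The 9x6 inner-corner count is a placeholder for the actual pattern.

    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // boardSize is the inner-corner count of the pattern (9x6 is a
    // placeholder). Returns true and fills 'corners' on success.
    bool detectBoard(const cv::Mat& gray, std::vector<cv::Point2f>& corners)
    {
        cv::Size boardSize(9, 6);
        bool found = cv::findChessboardCorners(
            gray, boardSize, corners,
            cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
        if (found)
            // Sub-pixel refinement noticeably improves the calibration.
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS +
                                              cv::TermCriteria::COUNT, 30, 0.01));
        return found;
    }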

OpenCV OpenNI calibrate kinect

萝らか妹 submitted on 2019-12-02 01:20:47
I capture from the Kinect like this:

    capture.retrieve( depthMap, CV_CAP_OPENNI_DEPTH_MAP );
    capture.retrieve( bgrImage, CV_CAP_OPENNI_BGR_IMAGE );

Now I don't know whether I have to calibrate the Kinect to get correct depth pixel values. That is, if I take a pixel (u, v) from the RGB image, do I get the correct depth value by taking the pixel (u, v) from the depth image?

    depthMap.at<uchar>(u,v)

Any help is much appreciated. Thanks!

You can check if registration is on like so:

    cout << "REGISTRATION " << capture.get( CV_CAP_PROP_OPENNI_REGISTRATION ) << endl;

and if it's not, set it like so:

    capture.set(CV_CAP…
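
The excerpt breaks off mid-call. A sketch of the presumable completion, under the assumption that the standard OpenNI registration property is meant, plus a note on reading the 16-bit depth map:

    #include <opencv2/highgui/highgui.hpp>
    #include <iostream>

    int main()
    {
        cv::VideoCapture capture(CV_CAP_OPENNI);

        // Report whether depth-to-color registration is currently on.
        std::cout << "REGISTRATION "
                  << capture.get(CV_CAP_PROP_OPENNI_REGISTRATION) << std::endl;

        // Presumably the truncated line enables it like this:
        capture.set(CV_CAP_PROP_OPENNI_REGISTRATION, 1);

        // Note: the OpenNI depth map is CV_16UC1 (depth in millimetres),
        // so read it with ushort and (row, col) = (v, u) ordering rather
        // than at<uchar>(u, v) as in the question.
        cv::Mat depthMap;
        if (capture.grab() && capture.retrieve(depthMap, CV_CAP_OPENNI_DEPTH_MAP)) {
            int u = 320, v = 240;  // example pixel
            std::cout << "depth at (u,v): "
                      << depthMap.at<unsigned short>(v, u) << " mm" << std::endl;
        }
        return 0;
    }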