Question
My semester project is to calibrate stereo cameras with a large baseline (~2 m). My approach is to work without an exactly defined calibration pattern like a chessboard, because the pattern would have to be huge and hard to handle.
My problem is similar to this: 3d reconstruction from 2 images without info about the camera
Program so far (a rough code sketch of this pipeline follows the list):
- Corner detection in the left image: goodFeaturesToTrack
- Corner refinement: cornerSubPix
- Find the corresponding corner locations in the right image: calcOpticalFlowPyrLK
- Calculate the fundamental matrix F: findFundamentalMat
- Calculate the rectification homographies H1, H2: stereoRectifyUncalibrated
- Rectify the images: warpPerspective
- Calculate the disparity map: SGBM
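Roughly, in code (a Python sketch; the file names and the SGBM/feature parameters are placeholders I made up, the original project is presumably C++):

```python
import cv2
import numpy as np

# Load the stereo pair as grayscale (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
h, w = left.shape

# 1. Corner detection in the left image.
corners = cv2.goodFeaturesToTrack(left, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)

# 2. Refine the corners to sub-pixel accuracy.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
corners = cv2.cornerSubPix(left, corners, (5, 5), (-1, -1), criteria)

# 3. Track the corners into the right image.
pts_right, status, _ = cv2.calcOpticalFlowPyrLK(left, right, corners, None)
good = status.ravel() == 1
pts_l = corners.reshape(-1, 2)[good]
pts_r = pts_right.reshape(-1, 2)[good]

# 4. Fundamental matrix with RANSAC; keeping only the inliers also makes the
#    result less sensitive to how many corners were detected.
F, mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
inliers = mask.ravel() == 1
pts_l, pts_r = pts_l[inliers], pts_r[inliers]

# 5. Rectification homographies from the point matches alone.
_, H1, H2 = cv2.stereoRectifyUncalibrated(pts_l, pts_r, F, (w, h))

# 6. Warp both images into the rectified frames.
rect_l = cv2.warpPerspective(left, H1, (w, h))
rect_r = cv2.warpPerspective(right, H2, (w, h))

# 7. Disparity via semi-global block matching (parameter values are guesses).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0  # fixed-point -> pixels
```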
So far, so good: it works passably, but the rectified images "jump" in perspective when I change the number of corners.
I don't know whether this comes from imprecision or mistakes I made, or whether it simply can't be computed stably with no known camera parameters and no lens distortion compensation (but it also happens on the Tsukuba pictures..). Suggestions are welcome :)
That is not my main problem, though: now I want to reconstruct the 3D points.
However, reprojectImageTo3D needs the Q matrix, which I don't have so far. So my question is: how do I calculate it? I have the baseline, i.e. the distance between the two cameras. My feeling is that if I convert the disparity map into a 3D point cloud, the only thing missing is the scale, right? So if I plug in the baseline, I get the metric 3D reconstruction? If so, how?
I am also planning to compensate lens distortion as a first step, for each camera separately with a chessboard (small and close to one camera at a time, so I don't have to stand 10-15 m away with a huge pattern in the overlapping area of both cameras). So if it helps, I could also use the camera intrinsics from that.
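For that per-camera step, a minimal single-camera calibration sketch could look like this (the chessboard size, square size, and file pattern are assumptions; the point is just to get each camera's intrinsic matrix and distortion coefficients):

```python
import cv2
import numpy as np
import glob

pattern = (9, 6)   # inner-corner count of the chessboard (assumption)
square = 0.025     # square size in meters (assumption)

# 3D coordinates of the board corners in the board's own coordinate system.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for name in glob.glob("left_calib_*.png"):  # placeholder file pattern
    img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    img_size = img.shape[::-1]
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics K (fx, fy, cx, cy) and distortion coefficients for this camera.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         img_size, None, None)

# Undistort each input image before running the uncalibrated pipeline above.
undistorted_left = cv2.undistort(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE), K, dist)
```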
Is there any documentation besides http://docs.opencv.org where I can see and understand what the Q matrix is and how it is calculated? Or can I open the source code (probably hard for me to understand ^^)? If I press F2 in Qt, I only see the function declaration with the parameter types.. (Sorry, I'm really new to all of this.)
- Left: input image with detected corners
- Top, H1/H2: rectified images (looks good with this corner count ^^)
- SGBM: disparity map
Answer 1:
So I found out what the Q matrix contains here: Using OpenCV to generate 3d points (assuming frontal parallel configuration)
All of these parameters are given by the single-camera calibration: c_x, c_y, f.
The baseline T_x is what I have measured.
So this works for now; only the units are not entirely clear to me. I used c_x, c_y, f from the single-camera calibration, which are in pixels, set the baseline in meters, and divided the disparity map by 16 (SGBM outputs fixed-point disparities), but the scale still doesn't seem right..
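A sketch of how this can be wired up (the Q layout is the frontal-parallel form from the linked answer and the OpenCV stereoRectify documentation; the numeric values are placeholders, and disparity_raw stands for the SGBM output from the pipeline above):

```python
import cv2
import numpy as np

# Intrinsics in pixels (from the single-camera calibration) and the measured
# baseline -- the numbers here are placeholders.
f, cx, cy = 1200.0, 640.0, 360.0   # focal length and principal point, in pixels
Tx = 2.0                           # baseline in meters

# Q in the frontal-parallel form from the stereoRectify docs, assuming both
# rectified views share the same principal point (c_x' == c_x, so the
# bottom-right entry (c_x - c_x')/T_x is 0).
Q = np.float32([[1, 0, 0,         -cx],
                [0, 1, 0,         -cy],
                [0, 0, 0,           f],
                [0, 0, -1.0 / Tx,   0]])

# disparity_raw: the int16 output of StereoSGBM.compute(); it has 4 fractional
# bits, hence the division by 16 to get disparity in pixels.
disparity_px = disparity_raw.astype(np.float32) / 16.0

# With f, c_x, c_y and the disparity in pixels and T_x in meters,
# Z = f * T_x / d comes out in meters. Note: depending on the sign convention
# for T_x (stereoRectify usually yields a negative T_x for a left/right pair),
# Z may come out negated; if so, flip the sign of Q[3, 2].
points_3d = cv2.reprojectImageTo3D(disparity_px, Q)
```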
By the way, the disparity map above was wrong ^^ and it looks better now. You have to apply an anti-shearing transform, because stereoRectifyUncalibrated shears your images (not documented?). This is described in Section 7, "Shearing Transform", of the paper by Charles Loop and Zhengyou Zhang: http://research.microsoft.com/en-us/um/people/Zhang/Papers/TR99-21.pdf
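For reference, here is a sketch of that shearing correction as I read Section 7 of the Loop & Zhang report (treat the closed-form coefficients as an assumption and check them against the paper): the edge midpoints of the image are mapped through the rectifying homography, and a shear is solved for that restores their perpendicularity and aspect ratio.

```python
import numpy as np

def shearing_correction(H, w, h):
    """Shear matrix S such that S @ H keeps the mapped image-edge midpoint
    vectors perpendicular and with the original aspect ratio
    (my reading of Loop & Zhang, Section 7)."""
    # Midpoints of the four image edges in homogeneous coordinates.
    a = np.array([(w - 1) / 2.0, 0.0, 1.0])
    b = np.array([w - 1.0, (h - 1) / 2.0, 1.0])
    c = np.array([(w - 1) / 2.0, h - 1.0, 1.0])
    d = np.array([0.0, (h - 1) / 2.0, 1.0])

    # Map them through the rectifying homography and de-homogenize.
    a, b, c, d = (H @ p for p in (a, b, c, d))
    a, b, c, d = a / a[2], b / b[2], c / c[2], d / d[2]

    x = b - d   # horizontal midpoint vector after rectification
    y = c - a   # vertical midpoint vector after rectification

    # Closed-form shear coefficients (assumed from the paper).
    k1 = (h * h * x[1] ** 2 + w * w * y[1] ** 2) / (h * w * (x[1] * y[0] - x[0] * y[1]))
    k2 = (h * h * x[0] * x[1] + w * w * y[0] * y[1]) / (h * w * (x[0] * y[1] - x[1] * y[0]))
    if k1 < 0:  # pick the solution that does not mirror the image
        k1, k2 = -k1, -k2

    return np.float32([[k1, k2, 0],
                       [0,  1, 0],
                       [0,  0, 1]])

# Usage idea: warp with the corrected homographies instead of H1, H2 directly,
# e.g. H1s = shearing_correction(H1, w, h) @ H1 (and likewise for H2) before
# calling warpPerspective.
```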
Result: http://i.stack.imgur.com/UkuJi.jpg
Source: https://stackoverflow.com/questions/24852151/3d-reconstruction-from-2-images-with-baseline-and-single-camera-calibration