Question
EDIT:
What I have: camera intrinsics, extrinsics from calibration, a 2D image, and a depth map
What I need: 2D virtual view image
I am trying to generate a novel view (the right view) for Depth-Image-Based Rendering. The reason is that only the left image and its depth map are available at the receiver, which has to reconstruct the right view (see image).
I want to know whether these steps will give me the desired result, or what I should be doing instead:
First, using the Camera Calibration Toolbox for MATLAB from Caltech, the intrinsic and extrinsic matrices can be obtained.
Then, the pixels can be mapped to 3D world points using the calibration parameters, following the method described at http://nicolas.burrus.name/index.php/Research/KinectCalibration#tocLink2
Now, I want to back-project these 3D points onto a new image plane (the right view). Because of the setup, the right view is simply a translation of the left view, with no rotation. How do I do this reconstruction?
Also, can I estimate R and T with the MATLAB stereo calibration tool and transform every point in the original left view to the right view using P2 = R*P1 + T, where P1 and P2 are the points corresponding to the 3D world point P in the respective views?
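The pipeline described above (back-project with depth, apply P2 = R*P1 + T, re-project) can be sketched as follows. All numeric values here (K, the baseline T, the test pixel and its depth) are illustrative assumptions, not values from the actual calibration:

```python
import numpy as np

# Sketch of the proposed DIBR pipeline, with assumed values:
# 1. back-project a left-view pixel to 3D using its depth,
# 2. move it into the right camera frame with P2 = R*P1 + T,
# 3. re-project into the right view.
K = np.array([[500.0,   0.0, 320.0],   # hypothetical intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # pure-translation setup: no rotation
T = np.array([-0.1, 0.0, 0.0])      # hypothetical horizontal baseline (metres)

u, v, depth = 400.0, 300.0, 2.0     # a left-view pixel and its depth (assumed)

# Back-project (as in the Burrus method): X = (u-cx)*Z/fx, Y = (v-cy)*Z/fy
P1 = np.array([(u - K[0, 2]) * depth / K[0, 0],
               (v - K[1, 2]) * depth / K[1, 1],
               depth])

P2 = R @ P1 + T                     # point in the right camera frame
m = K @ P2                          # project into the right view (homogeneous)
u2, v2 = m[0] / m[2], m[1] / m[2]   # dehomogenize
print(u2, v2)                       # 375.0 300.0
```

As expected for a horizontal baseline with no rotation, the pixel shifts only horizontally (by the disparity), and the row coordinate is unchanged.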
Any ideas and help are highly appreciated; I will rephrase/add details if the question is not clear.
Answer 1:
(Theoretical answer*)
You have to define what R and T mean. If I understand correctly, [R|T] is the roto-translation of your (main) left camera. If you have a point P (like your P1 or P2) in 3D space, its correspondence with a point m (I do not call it p to avoid confusion) in your left camera is, unless you use a different convention (pseudocode):
m = K[R|t]*P
where
P = (X, Y, Z, 1)
m = (u', v', w)
but you want 2D coordinates, so the coordinates in your left camera are:
u = u'/w
v = v'/w
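The projection m = K[R|t]*P followed by the division by w can be sketched as below. The intrinsics and the test point are assumed values, and the pose is taken as the identity for simplicity:

```python
import numpy as np

# Minimal sketch of the pinhole projection m = K[R|t]*P,
# with hypothetical intrinsics and an identity pose (all values assumed).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)             # no rotation
t = np.zeros((3, 1))      # no translation

P = np.array([1.0, 2.0, 5.0, 1.0])   # homogeneous 3D point (X, Y, Z, 1)
Rt = np.hstack([R, t])               # 3x4 extrinsic matrix [R|t]
m = K @ Rt @ P                       # homogeneous image point (u', v', w)

u, v = m[0] / m[2], m[1] / m[2]      # dehomogenize: u = u'/w, v = v'/w
print(u, v)                          # 420.0 440.0
```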
If you have already roto-translated P1 into P2 (not very useful here), this is equal to (pseudocode):
m = K[I|0]*P2, where [I|0] = [1 0 0 0
                              0 1 0 0
                              0 0 1 0]
This is the theoretical relationship between a 3D point P and its 2D image point m. Now consider your right camera in a different position. If there is only a translation with respect to the left camera, the right camera is translated by T2 with respect to the left camera, and roto-translated by [R|T+T2] with respect to the world origin. So the projected point m' in your right camera should be (assuming the cameras are identical, i.e. they have the same intrinsics K):
m' = K[R|T+T2]*P = K[I|T2]*P2
where I is the identity matrix.
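The equivalence of the two forms above can be checked numerically. The values of K, R, T, T2 and the 3D point below are assumptions chosen only to exercise the equation (both cameras share the same K, as the answer assumes):

```python
import numpy as np

# Numerical check that m' = K[R|T+T2]*P equals K[I|T2]*P2,
# with assumed values for K, R, T, T2 and the 3D point P.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # left-camera rotation (assumed)
T = np.array([0.2, 0.0, 0.0])    # left-camera translation (assumed)
T2 = np.array([-0.1, 0.0, 0.0])  # right-camera offset from the left camera

P = np.array([0.5, 0.3, 3.0])    # 3D world point (assumed)
P2 = R @ P + T                   # P expressed in the left camera frame

m1 = K @ (R @ P + (T + T2))      # K[R|T+T2]*P
m2 = K @ (P2 + T2)               # K[I|T2]*P2
print(np.allclose(m1, m2))       # True
```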
If you want to transform m directly into m' without using 3D points, you have to use epipolar geometry.
* If the cameras are different, with different K, or if the calibration of R and T does not follow the same convention as the calibration of K, this equation may not hold. If the calibration is not well done, it may still work, but with errors.
Source: https://stackoverflow.com/questions/39447624/back-projecting-3d-world-point-to-new-view-image-plane