Question
I'm using "OpenCV for Unity3d" asset (it's the same OpenCV package for Java but translated to C# for Unity3d) in order to create an Augmented Reality application for my MSc Thesis (Computer Science).
So far, I'm able to detect an object from video frames using ORB feature detector and also I can find the 3D-to-2D relation using OpenCV's SolvePnP method (I did the camera calibration as well). From that method I'm getting the Translation and Rotation vectors. The problem occurs at the augmentation stage where I have to show a 3d object as a virtual object and update its position and rotation at each frame. OpenCV returns Rodrigues Rotation matrix, but Unity3d works with Quaternion rotation so I'm updating object's position and rotation wrong and I can't figure it out how to implement the conversion forumla (from Rodrigues to Quaternion).
Getting the rvec and tvec:
Mat rvec = new Mat();           // rotation vector (Rodrigues form)
Mat tvec = new Mat();           // translation vector
Mat rotationMatrix = new Mat(); // 3x3 rotation matrix
Calib3d.solvePnP(object_world_corners, scene_flat_corners, CalibrationMatrix, DistortionCoefficientsMatrix, rvec, tvec);
Calib3d.Rodrigues(rvec, rotationMatrix); // convert the rotation vector into a 3x3 rotation matrix
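Here CalibrationMatrix is the 3x3 intrinsic matrix and DistortionCoefficientsMatrix holds the distortion coefficients from my calibration. As a rough sketch of their shape (the numbers below are illustrative placeholders, not real calibration values):
// Illustrative placeholder values only; real ones come from camera calibration.
double fx = 800.0, fy = 800.0; // focal lengths in pixels
double cx = 320.0, cy = 240.0; // principal point in pixels
Mat CalibrationMatrix = new Mat(3, 3, CvType.CV_64FC1);
CalibrationMatrix.put(0, 0,
    fx, 0, cx,
    0, fy, cy,
    0, 0, 1);
MatOfDouble DistortionCoefficientsMatrix = new MatOfDouble(0, 0, 0, 0, 0); // k1, k2, p1, p2, k3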
Updating the position of the virtual object:
Vector3 objPosition = new Vector3();
objPosition.x = model.transform.position.x + (float)tvec.get(0, 0)[0];
objPosition.y = model.transform.position.y + (float)tvec.get(1, 0)[0];
objPosition.z = model.transform.position.z - (float)tvec.get(2, 0)[0]; // Z inverted, see below
model.transform.position = objPosition;
I have a minus sign for the Z axis because when you convert from OpenCV's coordinate system to Unity3d's you must invert the Z axis (I checked the coordinate systems myself).
Unity3d's Coordinate System (Green is Y, Red is X and Blue is Z):
OpenCV's Coordinate System:
In addition, I did the same thing for the rotation matrix and updated the virtual object's rotation.
P.S. I found a similar question, but the person who asked it did not post the solution clearly.
Thanks!
Answer 1:
You have your 3x3 rotation matrix right after cv::solvePnP (once cv::Rodrigues has converted the rotation vector). That matrix, since it is a rotation, is both orthogonal and normalized. Thus, the columns of that matrix are, in order from left to right:
- Right vector (on X axis);
- Up vector (on Y axis);
- Forward vector (on Z axis).
OpenCV uses a right-handed coordinate system. Sitting at the camera and looking along the optical axis, the X axis goes right, the Y axis goes down and the Z axis goes forward.
You pass forward vector F = (fx, fy, fz) and up vector U = (ux, uy, uz) to Unity. These are the third and second columns respectively. No need to normalize; they are normalized already.
In Unity, you build your quaternion like this:
// forward vector F = third column, up vector U = second column of the rotation matrix
Vector3 f = new Vector3((float)rotationMatrix.get(0, 2)[0], (float)rotationMatrix.get(1, 2)[0], (float)rotationMatrix.get(2, 2)[0]);
Vector3 u = new Vector3((float)rotationMatrix.get(0, 1)[0], (float)rotationMatrix.get(1, 1)[0], (float)rotationMatrix.get(2, 1)[0]);
// notice that the Y coordinates are inverted to pass from OpenCV's right-handed coordinate system to Unity's left-handed one
Quaternion rot = Quaternion.LookRotation(new Vector3(f.x, -f.y, f.z), new Vector3(u.x, -u.y, u.z));
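Applied back to the question's setup, the remaining step would presumably just be:
model.transform.rotation = rot;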
And that is pretty much it. Hope this helps!
EDITED FOR POSITION-RELATED COMMENTS
NOTE: the Z axis in OpenCV is the camera's optical axis, which passes through the image near the center but, in general, not exactly at the center. Among your calibration parameters there are the Cx and Cy parameters. Combined, these give the 2D offset in image space from the image center to where the Z axis crosses the image. That shift must be taken into account to map 3D content exactly over the 2D background.
To get proper positioning in Unity:
// STEP 1 : fetch position from OpenCV + basic transformation
Vector3 pos = new Vector3((float)tvec.get(0, 0)[0], (float)tvec.get(1, 0)[0], (float)tvec.get(2, 0)[0]); // from OpenCV
pos = new Vector3(pos.x, -pos.y, pos.z); // right-handed coordinate system (OpenCV) to left-handed one (Unity)
// STEP 2 : set virtual camera's frustum (Unity) to match physical camera's parameters
Vector2 fparams; // from OpenCV (calibration parameters Fx and Fy = focal lengths in pixels)
Vector2 resolution; // image resolution from OpenCV
float vfov = 2.0f * Mathf.Atan(0.5f * resolution.y / fparams.y) * Mathf.Rad2Deg; // virtual camera (pinhole type) vertical field of view
Camera cam; // TODO get reference one way or another
cam.fieldOfView = vfov;
cam.aspect = resolution.x / resolution.y; // you could set a viewport rect with the proper aspect instead... I would prefer the viewport approach (see the sketch below)
// STEP 3 : shift position to compensate for physical camera's optical axis not going exactly through image center
Vector2 cparams; // from OpenCV (calibration parameters Cx and Cy = optical center shifts from image center in pixels)
Vector3 imageCenter = new Vector3(0.5f, 0.5f, pos.z); // in viewport coordinates
Vector3 opticalCenter = new Vector3(0.5f + cparams.x / resolution.x, 0.5f + cparams.y / resolution.y, pos.z); // in viewport coordinates
pos += cam.ViewportToWorldPoint(imageCenter) - cam.ViewportToWorldPoint(opticalCenter); // position is set as if physical camera's optical axis went exactly through image center
You put the images retrieved from the physical camera right in front of the virtual camera, centered on its forward axis (scaled to fit the frustum), and then you have proper 3D positions mapped over the 2D background!
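For the viewport approach mentioned in STEP 2, here is a minimal sketch, assuming the display's aspect ratio may differ from the image's (cam and resolution as above):
// Letterbox/pillarbox the camera so it keeps the physical camera's aspect ratio.
float targetAspect = resolution.x / resolution.y;
float screenAspect = (float)Screen.width / (float)Screen.height;
if (screenAspect > targetAspect)
{
    // screen wider than image : bars left and right
    float w = targetAspect / screenAspect;
    cam.rect = new Rect((1f - w) * 0.5f, 0f, w, 1f);
}
else
{
    // screen taller than image : bars top and bottom
    float h = screenAspect / targetAspect;
    cam.rect = new Rect(0f, (1f - h) * 0.5f, 1f, h);
}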
Source: https://stackoverflow.com/questions/36561593/opencv-rotation-rodrigues-and-translation-vectors-for-positioning-3d-object-in