OpenCV rotation (Rodrigues) and translation vectors for positioning 3D object in Unity3D

一个人的身影 2021-01-02 23:23

I\'m using \"OpenCV for Unity3d\" asset (it\'s the same OpenCV package for Java but translated to C# for Unity3d) in order to create an Augmented Reality application for my

1 Answer
  • 2021-01-02 23:54

    You get your 3x3 rotation matrix right after cv::solvePnP by passing the returned Rodrigues rotation vector through cv::Rodrigues. Since it is a rotation matrix, it is orthonormal, so its columns are unit vectors. In order from left to right, those columns are:

    1. Right vector (on X axis);
    2. Up vector (on Y axis);
    3. Forward vector (on Z axis).

    OpenCV uses a right-handed coordinate system: sitting behind the camera and looking along its optical axis, the X axis goes right, the Y axis goes down and the Z axis goes forward.

    You pass the forward vector F = (fx, fy, fz) and the up vector U = (ux, uy, uz) to Unity; these are the third and second columns of the matrix respectively. There is no need to normalize them, they are unit vectors already.
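
    If you are using the "OpenCV for Unity" asset from the question, extracting those columns might look like the sketch below. The Calib3d/Mat wrapper names mirror OpenCV's Java API, but the namespaces vary by asset version (older versions put everything directly under OpenCVForUnity), so adjust to yours.

    using UnityEngine;
    using OpenCVForUnity.CoreModule;      // namespaces vary by asset version
    using OpenCVForUnity.Calib3dModule;

    public static class PnPColumns
    {
        // rvec is the 3x1 Rodrigues rotation vector filled by Calib3d.solvePnP.
        public static void GetUpForward(Mat rvec, out Vector3 u, out Vector3 f)
        {
            Mat rot = new Mat(3, 3, CvType.CV_64FC1);
            Calib3d.Rodrigues(rvec, rot); // Rodrigues vector -> 3x3 rotation matrix

            // column 0 = right, column 1 = up, column 2 = forward
            u = new Vector3((float)rot.get(0, 1)[0],
                            (float)rot.get(1, 1)[0],
                            (float)rot.get(2, 1)[0]);
            f = new Vector3((float)rot.get(0, 2)[0],
                            (float)rot.get(1, 2)[0],
                            (float)rot.get(2, 2)[0]);
        }
    }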

    In Unity, you build your quaternion like this:

    Vector3 f; // forward vector from OpenCV (third column of the rotation matrix)
    Vector3 u; // up vector from OpenCV (second column)
    
    // note that the Y coordinates are inverted here to go from OpenCV's right-handed coordinate system to Unity's left-handed one
    Quaternion rot = Quaternion.LookRotation(new Vector3(f.x, -f.y, f.z), new Vector3(u.x, -u.y, u.z));
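
    To apply it, just assign the quaternion to the transform of the object you want to pose:

    Transform obj; // TODO get reference one way or another
    obj.rotation = rot;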
    

    And that is pretty much it. Hope this helps!

    EDITED FOR POSITION-RELATED COMMENTS

    NOTE : the Z axis in OpenCV is the camera's optical axis, which crosses the image near, but in general not exactly at, its center. Among your calibration parameters are Cx and Cy: in OpenCV's convention these are the pixel coordinates of the principal point (where the optical axis crosses the image) measured from the image's top-left corner, so the 2D offset from the image center is (Cx - width/2, Cy - height/2). For example, with a 640x480 image and Cx = 330, Cy = 250, the optical axis crosses the image 10 px right of and 10 px below the center. That shift must be taken into account to map 3D content exactly over the 2D background.

    To get proper positioning in Unity:

    // STEP 1 : fetch position from OpenCV + basic transformation
    Vector3 pos; // from OpenCV
    pos = new Vector3(pos.x, -pos.y, pos.z); // right-handed coordinates system (OpenCV) to left-handed one (Unity)
    
    // STEP 2 : set the virtual camera's frustum (Unity) to match the physical camera's parameters
    Vector2 fparams; // from OpenCV (calibration parameters Fx and Fy = focal lengths in pixels)
    Vector2 resolution; // image resolution from OpenCV
    float vfov =  2.0f * Mathf.Atan(0.5f * resolution.y / fparams.y) * Mathf.Rad2Deg; // virtual camera (pinhole type) vertical field of view
    Camera cam; // TODO get reference one way or another
    cam.fieldOfView = vfov;
    cam.aspect = resolution.x / resolution.y; // you could set a viewport rect with proper aspect as well... I would prefer the viewport approach
    
    // STEP 3 : shift position to compensate for the physical camera's optical axis not going exactly through the image center
    Vector2 cparams; // optical-center shift from image center in pixels, i.e. (Cx - 0.5f * width, Cy - 0.5f * height) from the calibration parameters
    Vector3 imageCenter = new Vector3(0.5f, 0.5f, pos.z); // in viewport coordinates (z = distance from camera)
    Vector3 opticalCenter = new Vector3(0.5f + cparams.x / resolution.x, 0.5f + cparams.y / resolution.y, pos.z); // in viewport coordinates; flip the sign of cparams.y if your shift is expressed with OpenCV's downward Y axis
    pos += cam.ViewportToWorldPoint(imageCenter) - cam.ViewportToWorldPoint(opticalCenter); // position is now set as if the optical axis went exactly through the image center
    

    You put the images retrieved from the physical camera right in front of the virtual camera, centered on its forward axis and scaled to fit the frustum, and you have proper 3D positions mapped over the 2D background!
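
    As a sketch of that last step, assuming the camera image is shown on a quad parented to the virtual camera (bgQuad and bgDistance below are stand-ins for your own setup):

    Transform bgQuad; // quad textured with the physical camera's image, parented to cam
    float bgDistance = 10.0f; // any distance between the near and far clip planes
    
    // center the quad on the camera's forward axis...
    bgQuad.localPosition = new Vector3(0f, 0f, bgDistance);
    bgQuad.localRotation = Quaternion.identity;
    
    // ...and scale it so it exactly fills the frustum at that distance
    // (Unity's built-in Quad primitive is 1x1 unit)
    float h = 2.0f * bgDistance * Mathf.Tan(0.5f * vfov * Mathf.Deg2Rad);
    bgQuad.localScale = new Vector3(h * cam.aspect, h, 1f);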
