OpenCV: get top-down view of planar pattern using the intrinsics and extrinsics from calibrateCamera

故里飘歌 2021-01-28 19:31

Originally I have an image with a perfect circle grid, denoted as A. I add some lens distortion and a perspective transformation to it, and it becomes B.

1 Answer
再見小時候 2021-01-28 20:26

    Added

    OK, guys, simple mistake. I had previously used warpPerspective to warp images rather than to restore them, and since it worked that way I never read the documentation thoroughly. It turns out that for restoring, the flag WARP_INVERSE_MAP must be set. Change the function call to this, and that's it:

    warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000), WARP_INVERSE_MAP);
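
    Incidentally, the flag is equivalent to inverting the homography yourself, since without WARP_INVERSE_MAP warpPerspective inverts the matrix internally; the following one-liner (just an illustration, not needed here) gives the same result:

    warpPerspective(tempImgC, imgC, matPerspective.inv(), Size(2500, 2000));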
    

    Here is the new result image C:

    The only thing that concerns me now is the intermediate image tempImgC, produced after undistort and before warpPerspective. In some tests with different artificial Bs, this image turned out to be a scaled-up version of B with the distortion removed. That means a lot of information in the outer area is lost, leaving little for warpPerspective to work with. I'm thinking of scaling the image down in undistort and scaling it back up in warpPerspective, but I'm not sure yet how to calculate the correct scale so that all the information in B is preserved.

    Added 2

    The last piece of the puzzle is in place: call getOptimalNewCameraMatrix before undistort to generate a new camera matrix that preserves all the information in B, and pass this new camera matrix to both undistort and warpPerspective.

    // alpha = 1 retains all pixels of the original image in the new view
    Mat newIntrinsic = getOptimalNewCameraMatrix(intrinsic, distCoeffs, Size(2500, 2000), 1);
    undistort(imgB, tempImgC, intrinsic, distCoeffs, newIntrinsic);
    Mat matPerspective = newIntrinsic * transRot3x3;
    warpPerspective(tempImgC, imgC, matPerspective, Size(2500, 2000), WARP_INVERSE_MAP);
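
    As an aside, transRot3x3 is used above without being shown; presumably it is the 3x3 plane-to-image matrix [r1 r2 t], i.e. the first two columns of the pattern view's rotation matrix plus its translation. A sketch of that assembly (assuming rvecs and tvecs are the CV_64F vector<Mat> outputs of calibrateCamera):

    // sketch: assemble [r1 r2 t] from the first view's extrinsics
    Mat rot3x3;
    Rodrigues(rvecs[0], rot3x3);               // rotation vector -> 3x3 rotation matrix
    Mat transRot3x3(3, 3, CV_64F);
    rot3x3.col(0).copyTo(transRot3x3.col(0));  // r1
    rot3x3.col(1).copyTo(transRot3x3.col(1));  // r2
    tvecs[0].copyTo(transRot3x3.col(2));       // t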
    

    The result image C is the same in this case, but there is a big difference in other cases. For example, with another distorted image B1, the result image C1 without the new camera matrix looks like this, while the result image C1 with the new camera matrix preserves the information in B1.

    Added 3

    I realized that since each frame captured by the camera needs processing and efficiency matters, I can't afford to run undistort and warpPerspective on every frame. It is only reasonable to build the map once and use remap for each frame.

    Actually, there is a straightforward way to do that: projectPoints. Since it generates the map from the destination image to the source image directly, no intermediate image is needed, and the loss of information is avoided.

    // ....
    //solve calibration
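    
    // --- sketch (not from the original answer): one plausible calibration ---
    // step producing intrinsic, distCoeffs, rvecs, tvecs. The grid size,
    // spacing, and symmetric layout are assumptions.
    vector<Point2f> centers;
    bool found = findCirclesGrid(imgB, Size(11, 8), centers, CALIB_CB_SYMMETRIC_GRID);
    
    // pattern geometry, expressed in the same units as the target top-down
    // image (destination pixels), so the projectPoints map below lines up
    vector<vector<Point3f>> objectPoints(1);
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 11; x++)
            objectPoints[0].push_back(Point3f(x * 200.f, y * 200.f, 0.f));
    
    vector<vector<Point2f>> imagePoints(1, centers);
    Mat intrinsic, distCoeffs;
    vector<Mat> rvecs, tvecs;
    calibrateCamera(objectPoints, imagePoints, imgB.size(), intrinsic, distCoeffs, rvecs, tvecs);
    // --- end sketch ---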
    
    //generate a 3-channel mat with each entry containing its own coordinates
    Mat xyz(2000, 2500, CV_32FC3);
    float *pxyz = (float*)xyz.data;
    for (int y = 0; y < 2000; y++)
        for (int x = 0; x < 2500; x++)
        {
            *pxyz++ = x;
            *pxyz++ = y;
            *pxyz++ = 0;
        }
    
    // project coordinates of destination image,
    // which generates the map from destination image to source image directly
    xyz = xyz.reshape(0, 5000000); // flatten to a 5,000,000 x 1 list of 3D points
    Mat mapToSrc(5000000, 1, CV_32FC2);
    projectPoints(xyz, rvecs[0], tvecs[0], intrinsic, distCoeffs, mapToSrc);
    Mat maps[2];
    mapToSrc = mapToSrc.reshape(0, 2000);
    split(mapToSrc, maps);
    
    //apply map
    remap(imgB, imgC, maps[0], maps[1], INTER_LINEAR); 
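
    As one more per-frame saving (an addition beyond the original answer), remap runs fastest with fixed-point maps, so the float maps can be converted once up front with convertMaps:

    // one-time conversion of the float maps to remap's fast fixed-point format
    Mat fixedMap1, fixedMap2;
    convertMaps(maps[0], maps[1], fixedMap1, fixedMap2, CV_16SC2);
    
    // per frame
    remap(imgB, imgC, fixedMap1, fixedMap2, INTER_LINEAR);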
    
