OpenCV StereoRectifyUncalibrated to 3D Point Cloud

Submitted on 2019-12-11 05:21:45

Question


I have some code that works out all of the parts up to calculating values with cv::stereoRectifyUncalibrated. However, I am not sure where to go from there to get a 3D point cloud.
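Roughly, that step looks like the following (a simplified sketch, not my actual code; pts1 and pts2 are assumed to be matched feature points between the two images):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch only: rectify an uncalibrated pair from matched points pts1/pts2
void rectifyUncalibrated(const cv::Mat& img1, const cv::Mat& img2,
                         const std::vector<cv::Point2f>& pts1,
                         const std::vector<cv::Point2f>& pts2,
                         cv::Mat& rect1, cv::Mat& rect2)
{
    // Fundamental matrix from the point matches
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);

    // Rectifying homographies H1, H2 (no camera intrinsics needed)
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(pts1, pts2, F, img1.size(), H1, H2);

    // Warp both images so their epipolar lines become horizontal
    cv::warpPerspective(img1, rect1, H1, img1.size());
    cv::warpPerspective(img2, rect2, H2, img2.size());
}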

I have code that works for the calibrated version: it gives me a Q matrix, which I then use with reprojectImageTo3D and StereoBM to produce a point cloud.
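For comparison, that calibrated path is roughly the following (a hedged sketch; M1/D1, M2/D2, R and T stand for the intrinsics, distortion coefficients and stereo extrinsics from a previous calibration):

#include <opencv2/opencv.hpp>

// Sketch only: calibrated rectification gives the 4x4 disparity-to-depth matrix Q
cv::Mat reprojectCalibrated(const cv::Mat& M1, const cv::Mat& D1,
                            const cv::Mat& M2, const cv::Mat& D2,
                            const cv::Mat& R,  const cv::Mat& T,
                            cv::Size imageSize, const cv::Mat& disparity)
{
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

    cv::Mat image3D;                                      // CV_32FC3: (X, Y, Z) per pixel
    cv::reprojectImageTo3D(disparity, image3D, Q, true, CV_32F);
    return image3D;
}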

I want to compare the results of the two different methods as sometimes I will not be able to calibrate the camera.


Answer 1:


I think this will be helpful: http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html. It has code that converts a disparity image to a point cloud using PCL and shows it in a 3D viewer.

Since you have Q, the two images, and the other camera parameters (from calibration), you should use reprojectImageTo3D to get a depth map.

Use StereoBM or StereoSGBM to get a disparity map, then use that disparity map and Q to get the depth image:

StereoBM sbm; // OpenCV 2.x block-matching API
sbm.state->SADWindowSize = 9;          // matching block size (odd)
sbm.state->numberOfDisparities = 112;  // must be divisible by 16
sbm.state->preFilterSize = 5;
sbm.state->preFilterCap = 61;
sbm.state->minDisparity = -39;
sbm.state->textureThreshold = 507;
sbm.state->uniquenessRatio = 0;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 8;
sbm.state->disp12MaxDiff = 1;

sbm(g1, g2, disp); // g1 and g2 are the two rectified gray images
reprojectImageTo3D(disp, Image3D, Q, true, CV_32F); // per-pixel (X, Y, Z) from disparity and Q
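With OpenCV 3.x and later, the block matcher is created through a factory method instead of the sbm.state fields above; an approximate equivalent (assuming the same g1, g2 and Q) would be:

// Approximate equivalent for OpenCV 3.x+ (assumes the same g1, g2 and Q as above)
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(112, 9); // numDisparities, blockSize
bm->setPreFilterSize(5);
bm->setPreFilterCap(61);
bm->setMinDisparity(-39);
bm->setTextureThreshold(507);
bm->setUniquenessRatio(0);
bm->setSpeckleWindowSize(0);
bm->setSpeckleRange(8);
bm->setDisp12MaxDiff(1);

cv::Mat disp16, dispF;
bm->compute(g1, g2, disp16);                 // CV_16S disparity, scaled by 16
disp16.convertTo(dispF, CV_32F, 1.0 / 16.0); // back to real disparity values
cv::reprojectImageTo3D(dispF, Image3D, Q, true, CV_32F);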

And the code below basically converts the depth map to a point cloud:

pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
double px, py, pz;
uchar pr, pg, pb;

// Entries of the 4x4 reprojection matrix Q used below
double Q03 = Q.at<double>(0, 3), Q13 = Q.at<double>(1, 3);
double Q23 = Q.at<double>(2, 3), Q32 = Q.at<double>(3, 2), Q33 = Q.at<double>(3, 3);

for (int i = 0; i < img_rgb.rows; i++)
{
    uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
    uchar* disp_ptr = img_disparity.ptr<uchar>(i);
    double* recons_ptr = recons3D.ptr<double>(i); // reprojectImageTo3D output (not used in this loop)
    for (int j = 0; j < img_rgb.cols; j++)
    {
        // Get 3D coordinates
        uchar d = disp_ptr[j];
        if (d == 0) continue; // Discard bad pixels
        double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
        px = static_cast<double>(j) + Q03;
        py = static_cast<double>(i) + Q13;
        pz = Q23;

        // Normalize the points
        px = px / pw;
        py = py / pw;
        pz = pz / pw;

        // Get RGB info (OpenCV stores channels as BGR)
        pb = rgb_ptr[3 * j];
        pg = rgb_ptr[3 * j + 1];
        pr = rgb_ptr[3 * j + 2];

        // Insert info into point cloud structure
        pcl::PointXYZRGB point;
        point.x = px;
        point.y = py;
        point.z = pz;
        uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
                        static_cast<uint32_t>(pg) << 8 |
                        static_cast<uint32_t>(pb));
        point.rgb = *reinterpret_cast<float*>(&rgb);
        point_cloud_ptr->points.push_back(point);
    }
}

point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;

//Create visualizer (createVisualizer is a helper function defined in the linked blog post)
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
viewer = createVisualizer( point_cloud_ptr );
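
Note that reprojectImageTo3D already gives you the (X, Y, Z) of every pixel, so instead of recomputing the coordinates from the Q entries you could read them directly from its output. A small sketch, assuming recons3D holds the CV_32FC3 result of reprojectImageTo3D (the recons_ptr in the loop above is otherwise unused):

// Assumption: recons3D is the CV_32FC3 output of reprojectImageTo3D(..., true, CV_32F)
for (int i = 0; i < recons3D.rows; i++)
{
    for (int j = 0; j < recons3D.cols; j++)
    {
        cv::Vec3f p = recons3D.at<cv::Vec3f>(i, j);
        // handleMissingValues == true marks invalid pixels with a very large Z (10000)
        if (p[2] >= 10000.0f) continue;

        pcl::PointXYZRGB point;
        point.x = p[0];
        point.y = p[1];
        point.z = p[2];
        // ... pack the RGB bytes exactly as in the loop above ...
        point_cloud_ptr->points.push_back(point);
    }
}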


来源:https://stackoverflow.com/questions/13463476/opencv-stereorectifyuncalibrated-to-3d-point-cloud
