How to display a 3D image when we have depth and RGB Mats in OpenCV (captured from a Kinect)

Submitted by 折月煮酒 on 2019-11-29 04:51:55

To improve on antarctician's answer: to display the image in 3D you first need to create a point cloud. The RGB and depth images give you the necessary data to create an organized, colored point cloud. To do so, you need to calculate the x, y, z values for each point. The z value comes from the depth pixel, but x and y must be calculated.

To do that, you can use something like this:

void Viewer::get_pcl(cv::Mat& color_mat, cv::Mat& depth_mat, pcl::PointCloud<pcl::PointXYZRGBA>& cloud) {
    float x, y, z;

    for (int j = 0; j < depth_mat.rows; j++) {
        for (int i = 0; i < depth_mat.cols; i++) {
            // pack the BGRA color data for this pixel
            PCD_BGRA pcd_BGRA;
            pcd_BGRA.B = color_mat.at<cv::Vec3b>(j, i)[0];
            pcd_BGRA.G = color_mat.at<cv::Vec3b>(j, i)[1];
            pcd_BGRA.R = color_mat.at<cv::Vec3b>(j, i)[2];
            pcd_BGRA.A = 0;

            pcl::PointXYZRGBA vertex;
            int depth_value = (int) depth_mat.at<unsigned short>(j, i);
            // find the world coordinates (depth is the openni::VideoStream)
            openni::CoordinateConverter::convertDepthToWorld(depth, i, j,
                (openni::DepthPixel) depth_mat.at<unsigned short>(j, i), &x, &y, &z);

            // the point is created with depth and color data
            if (limitx_min <= i && limitx_max >= i && limity_min <= j && limity_max >= j
                && depth_value != 0 && depth_value <= limitz_max && depth_value >= limitz_min) {
                vertex.x = x;
                vertex.y = y;
                vertex.z = (float) depth_value; // for the Kinect this equals the converter's z (in mm)
            } else {
                // the data is outside the boundaries; bad_point is typically
                // std::numeric_limits<float>::quiet_NaN()
                vertex.x = bad_point;
                vertex.y = bad_point;
                vertex.z = bad_point;
            }
            vertex.rgb = pcd_BGRA.RGB_float;

            // the point is pushed back into the cloud
            cloud.points.push_back(vertex);
        }
    }
    // mark the cloud as organized: one point per pixel
    cloud.width  = depth_mat.cols;
    cloud.height = depth_mat.rows;
}

where PCD_BGRA is:

union PCD_BGRA
{
    struct
    {
        uchar B; // lowest-addressed byte
        uchar G;
        uchar R;
        uchar A; // highest-addressed byte
    };
    float RGB_float;
    uint  RGB_uint;
};

Of course, this is for the case where you want to use PCL, but the calculation of the x, y, z values stands either way. This code relies on openni::CoordinateConverter::convertDepthToWorld to find the position of the point in 3D. You may also do it manually:

const float invFocalLength = 1.f / 525.f;
const float centerX = 319.5f;
const float centerY = 239.5f;
const float factor  = 1.f / 1000.f;   // raw depth (mm) -> metres

// (x, y) is the pixel coordinate, *depthdata the raw depth value, p the output point
float dist = factor * (float)(*depthdata);
p.x = (x - centerX) * dist * invFocalLength;
p.y = (y - centerY) * dist * invFocalLength;
p.z = dist;

Here centerX, centerY, and the focal length come from the camera's intrinsic calibration (these are typical values for the Kinect), and factor converts the raw depth to the unit you need: keep it for metres, or drop it if you want millimetres. Which one you want depends on your program.
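To make the snippet above concrete, here is a minimal self-contained sketch of the same pinhole back-projection (the `Point3f` struct and `backproject` name are mine, not from the original answer; the intrinsics are the typical 640x480 Kinect values quoted above):

```cpp
#include <cmath>
#include <cstdint>

struct Point3f { float x, y, z; };

// Pinhole back-projection: pixel (u, v) with raw depth in millimetres
// -> camera-space coordinates in metres.
Point3f backproject(int u, int v, uint16_t depth_mm) {
    const float invFocalLength = 1.f / 525.f; // 1/f, same focal length for x and y
    const float centerX = 319.5f;             // principal point of a 640x480 image
    const float centerY = 239.5f;
    const float factor  = 1.f / 1000.f;       // mm -> m

    Point3f p;
    const float dist = factor * (float)depth_mm;
    p.x = (u - centerX) * dist * invFocalLength;
    p.y = (v - centerY) * dist * invFocalLength;
    p.z = dist;
    return p;
}
```

As a sanity check, a pixel next to the image center at 1000 mm depth maps to roughly (0, 0, 1): `backproject(320, 240, 1000)` gives z = 1.0 and x = y = 0.5/525.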

For the questions:

  1. Yes, you can display it using the latest OpenCV with the viz module, or with another external library that suits your needs.
  2. OpenGL is nice, but PCL (or OpenCV) is easier to use if you do not already know either of them (I mean for displaying point clouds).
  3. I haven't used it on Windows, but in theory it can be used with Visual Studio 2012. As far as I know, the version that PCL comes packaged with is OpenNI 1, and it won't conflict with OpenNI 2...

I haven't done this with OpenNI and OpenCV, but I hope I can help you. So, first of all, to answer your first two questions:

  1. Probably yes; as far as I understand, you want to visualize a 3D point cloud. OpenCV is only an image-processing library, so you would need a 3D rendering library to do what you want.
  2. I have worked with OpenSceneGraph and would recommend it. However, you can also use OpenGL or DirectX.

If you only want to visualize a point cloud, such as the "3D View" of Kinect Studio, you won't need PCL; it would be overkill for this simple job.

The basic idea of this task is to create as many 3D quads as there are pixels in your images. For example, at 640x480 resolution you would need 640*480 quads. Each quad gets the color of the corresponding pixel from the color image. You then move these quads back and forth along the Z axis according to the values from the depth image. This can be done with modern OpenGL or, if you feel more comfortable with C++, with OpenSceneGraph (which is also based on OpenGL).

You would have to be careful about two things:

  1. Drawing so many quads can be slow even on a modern computer. You would need to read about "instanced rendering" to draw a large number of instances of an object (in our case, a quad) in a single GPU draw call. The per-instance offsets can then be applied in the vertex shader.
  2. Since the RGB and depth cameras of the Kinect are at different physical locations, you would need to calibrate and register them against each other. There are functions for this in the official Kinect SDK; however, I don't know about OpenNI.

If you decide to do this with OpenGL, I would suggest reading about the GPU pipeline if you aren't familiar with it. That will save you a lot of time when working with vertex shaders.
