Kinect: Converting from RGB Coordinates to Depth Coordinates


Thanks to commenter horristic, I got a link to MSDN with some useful information (thanks also to T. Chen over at MSDN for helping out with the API). Extracted from T. Chen's post, here's the code that performs the mapping from RGB to depth coordinate space:

INuiCoordinateMapper* pMapper;

// Get the coordinate mapper from the (already initialized) sensor.
mNuiSensor->NuiGetCoordinateMapper(&pMapper);

// Map all 640x480 colour pixels into depth space. LockedRect.pBits comes from
// the locked depth frame; depthPoints must hold 640 * 480 NUI_DEPTH_IMAGE_POINTs.
pMapper->MapColorFrameToDepthFrame(
        NUI_IMAGE_TYPE_COLOR,
        NUI_IMAGE_RESOLUTION_640x480,
        NUI_IMAGE_RESOLUTION_640x480,
        640 * 480,
        (NUI_DEPTH_IMAGE_PIXEL*)LockedRect.pBits,
        640 * 480,
        depthPoints);

Note: the sensor needs to be initialized and a depth frame locked for this to work.
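
In case it's useful, here's a rough sketch of what that initialization and locking might look like with the Kinect for Windows SDK 1.x (1.6 or later, since the extended depth pixel texture is needed to get NUI_DEPTH_IMAGE_PIXEL data). Variable names are chosen to match the snippet above, and error checking is omitted for brevity:

// Requires NuiApi.h from the Kinect for Windows SDK (1.6 or later).
INuiSensor* mNuiSensor = NULL;
NuiCreateSensorByIndex(0, &mNuiSensor);
mNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR | NUI_INITIALIZE_FLAG_USES_DEPTH);

// Open a 640x480 depth stream.
HANDLE depthStream = NULL;
mNuiSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                               0, 2, NULL, &depthStream);

// Grab a depth frame and lock its extended-pixel texture, so that
// LockedRect.pBits can be interpreted as NUI_DEPTH_IMAGE_PIXEL*.
NUI_IMAGE_FRAME imageFrame;
mNuiSensor->NuiImageStreamGetNextFrame(depthStream, 1000, &imageFrame);

BOOL nearMode = FALSE;
INuiFrameTexture* pTexture = NULL;
mNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(
        depthStream, &imageFrame, &nearMode, &pTexture);

NUI_LOCKED_RECT LockedRect;
pTexture->LockRect(0, &LockedRect, NULL, 0);

// Output buffer for the mapping: one point per colour pixel.
NUI_DEPTH_IMAGE_POINT* depthPoints = new NUI_DEPTH_IMAGE_POINT[640 * 480];

// ... MapColorFrameToDepthFrame call from above goes here ...

pTexture->UnlockRect(0);
mNuiSensor->NuiImageStreamReleaseFrame(depthStream, &imageFrame);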

The transformed coordinates can, e.g., be queried as follows:

/// transform RGB coordinate point to a depth coordinate point 
cv::Point TransformRGBtoDepthCoords(cv::Point rgb_coords, NUI_DEPTH_IMAGE_POINT *depthPoints)
{
    long index = rgb_coords.y * 640 + rgb_coords.x;
    NUI_DEPTH_IMAGE_POINT depthPointAtIndex = depthPoints[index];
    return cv::Point(depthPointAtIndex.x, depthPointAtIndex.y); 
}
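
For example, with the depthPoints array populated as above (pixel coordinates here are just illustrative):

// Where does the colour pixel at (320, 240) land in the depth image?
cv::Point depth_coords = TransformRGBtoDepthCoords(cv::Point(320, 240), depthPoints);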

As far as I can tell, MapColorFrameToDepthFrame effectively runs the coordinate-system conversion on every pixel of your RGB image, storing the resulting depth-image coordinates and the corresponding depth value in the output NUI_DEPTH_IMAGE_POINT array. The definition of that structure is here: http://msdn.microsoft.com/en-us/library/nuiimagecamera.nui_depth_image_point.aspx
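
Since each NUI_DEPTH_IMAGE_POINT carries the depth value (in millimetres) alongside the x/y coordinates, you can also pull the depth behind a given RGB pixel straight out of the same array. A small sketch along the lines of the function above (the helper name is mine):

/// look up the depth (in mm) behind a given RGB coordinate
/// (assumes depthPoints was filled by MapColorFrameToDepthFrame as above)
long DepthAtRGBCoords(cv::Point rgb_coords, NUI_DEPTH_IMAGE_POINT *depthPoints)
{
    long index = rgb_coords.y * 640 + rgb_coords.x;
    return depthPoints[index].depth;   // distance in millimetres
}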

This is possibly overkill for your needs, however, and I've no idea how fast that method is. Xbox Kinect developers have a very fast implementation of that function that runs on the GPU at frame rate; Windows developers might not be quite so lucky!
