Question
I am using a Kinect to find the distance of some markers, and I'm stuck converting the Kinect RGB and depth images, which are in pixels, to the real-world XYZ coordinates that I want in meters.
Answer 1:
You can use the depthToPointCloud function in the Computer Vision System Toolbox for MATLAB.
Answer 2:
Please note that in Kinect SDK 1.8 (Kinect 1), it is not possible to convert from RGB image space to world space: only from depth image space to world space. The other possible conversions are:
- Depth -> RGB
- World -> Depth
- World -> RGB
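Conceptually, the depth-to-world conversion the SDK performs is a pinhole back-projection. Here is a minimal Python sketch of that idea; the intrinsics `fx`, `fy`, `cx`, `cy` below are hypothetical placeholder values, whereas the SDK's coordinate mapper applies the sensor's factory-calibrated ones (which is why you should use the mapper rather than roll your own):

```python
# Conceptual sketch of depth -> world (camera-space) conversion via
# pinhole back-projection. The intrinsics here are PLACEHOLDER values;
# the Kinect SDK's CoordinateMapper uses the real calibrated intrinsics.

def depth_pixel_to_world(u, v, depth_m, fx, fy, cx, cy):
    """Back-project depth pixel (u, v) with depth in meters to camera-space XYZ."""
    x = (u - cx) * depth_m / fx   # horizontal offset scales with depth
    y = (v - cy) * depth_m / fy   # vertical offset scales with depth
    return (x, y, depth_m)        # Z is the measured depth itself

# A pixel at the principal point lies on the optical axis:
print(depth_pixel_to_world(256, 212, 1.0, fx=365.0, fy=365.0, cx=256.0, cy=212.0))
# -> (0.0, 0.0, 1.0)
```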
So, to convert, you use the coordinate mapper included in the SDK (I'm assuming you're using the Microsoft SDK, not OpenNI, AS3NUI, or EuphoriaNI). Here is a sample of how to convert from world space to RGB space, taken from here:
ColorSpacePoint colorPoint = _sensor.CoordinateMapper.MapCameraPointToColorSpace(worldCoordinate);
This sample is in C# for Kinect SDK 2.0. To see another sample for SDK 1.8, as well as a short discussion on the use of the coordinate mapper, you can see this article: Understanding Kinect Coordinate Mapping.
To convert from RGB image space to world coordinate space (only with Kinect 2 and SDK 2.0), you can use this method:
_sensor.CoordinateMapper.MapColorFrameToCameraSpace(depthFrame, resultArray);
You have to pass the entire depth frame (not the color frame!) and an array in which the method returns the world coordinates of every color-frame pixel. The array must, of course, be large enough to hold all points (1920 × 1080 = 2,073,600 entries at full resolution). You then look up the coordinate of a point with a simple formula:
worldCoordinate = resultArray[imageCoordinate.Y * rgbImageWidth + imageCoordinate.X];
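The lookup above is plain row-major indexing into the flattened result array. A small Python sketch (the 1920 × 1080 dimensions are the full-resolution Kinect 2 color frame mentioned above):

```python
# Row-major lookup: flatten a 2D pixel coordinate into an index into the
# 1D array that MapColorFrameToCameraSpace fills with world coordinates.

RGB_IMAGE_WIDTH = 1920    # full-resolution Kinect 2 color width
RGB_IMAGE_HEIGHT = 1080   # full-resolution Kinect 2 color height

def flat_index(x, y, width=RGB_IMAGE_WIDTH):
    """Index of pixel (x, y) in a row-major flattened image array."""
    return y * width + x

print(flat_index(0, 0))        # 0 (top-left pixel)
print(flat_index(1919, 1079))  # 2073599 (last pixel; array length 2,073,600)
```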
Source: https://stackoverflow.com/questions/29521468/how-to-convert-kinect-rgb-and-depth-images-to-real-world-coordinate-xyz