Question
I am making a program with the SDK where, when users are detected, the program draws a skeleton for them to follow. I recently saw a game advertised on my Xbox, Nike+ Kinect, and noticed how it displays a copy of the character doing something else, like:
http://www.swaggerseek.com/wp-content/uploads/2012/06/fcb69__xboxkinect1.jpg
Or
http://www.swaggerseek.com/wp-content/uploads/2012/06/fcb69__xboxkinect.jpg
Can I create a point-cloud representation of only the person detected (without any of the background)? Thanks in advance!
EDIT
Using this site, I can create point clouds, but I still can't crop around the person's body.
Answer 1:
It doesn't look like they are displaying a complete point cloud, but rather a blue-shaded intensity map. This could be done with the depth image from the Kinect for Windows SDK. What you are looking for is the player index, a small bit field stored in each pixel of the depth image. To get the player index bits you must also enable the skeletal stream in your initialization code.
So this is how I would do it. I am modifying one of the Kinect for Windows SDK quickstarts found here; load it up and make the following changes:
// Change the image type to BGRA32
image1.Source =
    BitmapSource.Create(depthFrame.Width, depthFrame.Height,
        96, 96, PixelFormats.Bgra32, null, pixels, stride);

// Hardcoded Blue, Green, Red, Alpha (BGRA) index positions
const int BlueIndex = 0;
const int GreenIndex = 1;
const int RedIndex = 2;
const int AlphaIndex = 3;

// Get the player index and depth at this pixel
int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

// If the pixel belongs to a player, shade it blue by intensity
if (player > 0)
{
    pixels[colorIndex + BlueIndex] = 255;
    pixels[colorIndex + GreenIndex] = intensity;
    pixels[colorIndex + RedIndex] = intensity;
    pixels[colorIndex + AlphaIndex] = 100;
}
else
{
    // Not a player: make the pixel black and fully transparent
    pixels[colorIndex + BlueIndex] = 0;
    pixels[colorIndex + GreenIndex] = 0;
    pixels[colorIndex + RedIndex] = 0;
    pixels[colorIndex + AlphaIndex] = 0;
}
I like using this example for testing the colors since it still provides you with the depth viewer on the right side. I have attached an image of this effect running below:
The image on the left is the intensity map, with slightly colored pixel-level intensity data.
Hope that helps.
David Bates
Answer 2:
You can do a very simple triangulation of the points. Check this tutorial:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/Terrain_basics.php
Check the result:
Answer 3:
This is not possible automatically with the official Kinect SDK, but it is implemented in an alternative SDK called OpenNI, where you can directly get the set of points the user consists of. If you don't want to use it, I can suggest a rather easy method of separating the user from the background: since you know the user's z-position, you can simply take the points whose z lies between 0 and userZ plus some value representing the thickness of the body.
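The z-threshold idea can be sketched in a few lines of Python; user_z and the thickness margin are assumptions you would take from the tracked skeleton, not values the SDK hands you directly:

```python
def crop_user(points, user_z, thickness=300):
    """Keep only (x, y, z) points (in mm) with 0 < z <= user_z + thickness.

    user_z: approximate z of the user's torso from the skeleton.
    thickness: assumed body-depth margin in mm.
    """
    return [p for p in points if 0 < p[2] <= user_z + thickness]

# Two body points around 1.5 m and one background point at 3 m:
cloud = [(0, 0, 1500), (1, 2, 1700), (5, 5, 3000)]
print(crop_user(cloud, user_z=1500, thickness=300))
# keeps the two points within 1800 mm
```

This is crude (anything in front of the user also survives the cut), which is why the smoother region-growing approach below is worth considering.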
Another idea is to walk over the point cloud starting from some joint (or joints) and take points only while the distance changes smoothly: if you step from a body point to a background point, the drop in distance is easily noticeable. The problem here is that you will start counting the floor as part of the body, because the transition there is smooth, so you should validate the result using the lowest (ankle) joint.
Or you can use segmentation in PCL (http://docs.pointclouds.org/trunk/group__segmentation.html), though I don't know whether the feet-floor problem is solved there. Judging by the planar segmentation tutorial, it looks like they handle it well (http://pointclouds.org/documentation/tutorials/planar_segmentation.php).
Answer 4:
Kinect for Windows SDK v1.5 ships with samples that could be modified for this.
Sample names: depth-d3d and depthwithcolor-d3d.
They both render point clouds.
Source: https://stackoverflow.com/questions/11024182/point-cloud-of-body-using-kinect-sdk