kinect-sdk

How to move a Kinect skeleton to another position

北战南征 submitted on 2019-12-03 12:50:16
I am working on an extension method to move a skeleton to a desired position in the Kinect field of view. My code receives the skeleton to be moved and the destination position; I calculate the distance between the received skeleton's HipCenter and the destination position to find how much to move, then I iterate over the joints applying this factor. My code currently looks like this: public static Skeleton MoveTo(this Skeleton skToBeMoved, Vector4 destiny) { Joint newJoint = new Joint(); ///Based on the HipCenter (I don't know if it is reliable; it seems to be.) float howMuchMoveToX = Math.Abs(skToBeMoved
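The translation step described above can be sketched in plain Python, with joints as (x, y, z) tuples rather than the Kinect SDK's Joint type. Note one likely pitfall in the excerpt: taking Math.Abs of the per-axis difference discards direction, so a signed offset is used here instead.

```python
# Illustrative sketch (not the Kinect SDK API): translate a skeleton so that
# its hip center lands on a target position. Every joint is shifted by the
# same signed offset, which preserves the skeleton's shape.

def move_to(joints, hip_center, target):
    """Shift every joint by the signed offset from hip_center to target."""
    dx = target[0] - hip_center[0]
    dy = target[1] - hip_center[1]
    dz = target[2] - hip_center[2]
    return {name: (x + dx, y + dy, z + dz)
            for name, (x, y, z) in joints.items()}

skeleton = {"HipCenter": (0.5, 0.0, 2.0), "Head": (0.5, 0.6, 2.0)}
moved = move_to(skeleton, skeleton["HipCenter"], (0.0, 0.0, 1.5))
print(moved["HipCenter"])  # (0.0, 0.0, 1.5)
print(moved["Head"])       # (0.0, 0.6, 1.5)
```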

How to track eyes using Kinect SDK?

喜夏-厌秋 submitted on 2019-12-03 12:39:06
The requirement is to define a rectangle around each eye in 3D space, and there should be a way to track eyes using the Microsoft Kinect SDK. According to the documentation: "The Face Tracking SDK uses the Kinect coordinate system to output its 3D tracking results. The origin is located at the camera's optical center (sensor), the Z axis points towards the user, and the Y axis points up. The measurement units are meters for translation and degrees for rotation angles." Adding ... Debug3DShape("OuterCornerOfRightEye", faceTrackFrame.Get3DShape()[FeaturePoint.OuterCornerOfRightEye]); Debug3DShape("InnerCornerRightEye"
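Once the inner and outer eye corners are available as 3D points (as the Get3DShape() calls above retrieve), a rectangle around the eye can be derived from them. A minimal sketch in plain Python, not the Face Tracking SDK types, with a hypothetical padding margin:

```python
# Sketch: build an axis-aligned 3D box around one eye from its two corner
# feature points, padded by a margin (metres). The margin value is an
# assumption, not something the SDK prescribes.

def eye_box(inner, outer, margin=0.01):
    """Return (min_corner, max_corner) enclosing both points plus margin."""
    mins = tuple(min(a, b) - margin for a, b in zip(inner, outer))
    maxs = tuple(max(a, b) + margin for a, b in zip(inner, outer))
    return mins, maxs

lo, hi = eye_box((0.03, 0.05, 1.20), (0.06, 0.05, 1.21))
# lo ≈ (0.02, 0.04, 1.19), hi ≈ (0.07, 0.06, 1.22)
```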

Hand over button event in Kinect SDK 1.7

人走茶凉 submitted on 2019-12-02 04:33:49
I am creating a WPF application using Kinect SDK 1.7 and I need to count how many times the user places a hand over a button (not pushes it, just hovers over it). I found only the event responsible for pushing the button in XAML: <k:KinectTileButton Label="Click" Click="PushButtonEvent"></k:KinectTileButton> I can't find which event is responsible for placing a hand over the button (if such an event exists). Maybe you've got some idea which event would do that, or how to resolve this problem another way? The KinectTileButton supports the following events for the hand cursor, which can be subscribed to and acted

How to convert Kinect raw depth info to meters in Matlab?

被刻印的时光 ゝ submitted on 2019-11-30 16:20:36
I have done some research here to understand this topic but have not achieved good results. I'm working with a Kinect for Windows and the Kinect SDK 1.7, and with MATLAB to process the raw depth map info. First, I'm using this method ( https://stackoverflow.com/a/11732251/3416588 ) to store the Kinect raw depth data to a text file. I get a list with 480 × 640 = 307,200 elements, with data like this: 23048 23048 23048 -8 -8 -8 -8 -8 -8 -8 -8 6704 6720 6720 6720 6720 6720 6720 6720 6720 6736 6736 6736 6736 6752 0 0 Then in MATLAB I convert these values to binary, so I get 16-bit numbers. The
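The conversion the question is after can be sketched without MATLAB. In Kinect SDK 1.x, each 16-bit depth pixel packs the player index in the low 3 bits and the depth in millimetres in the upper 13 bits; the negative numbers in the text dump are just the same bit patterns read as signed 16-bit integers.

```python
# Sketch: Kinect SDK 1.x raw depth sample -> metres.
# Low 3 bits = player index, upper 13 bits = depth in millimetres.

def raw_to_meters(raw):
    value = raw & 0xFFFF      # reinterpret signed dump values as unsigned 16-bit
    depth_mm = value >> 3     # drop the 3 player-index bits
    return depth_mm / 1000.0  # millimetres -> metres

print(raw_to_meters(23048))  # 2.881
print(raw_to_meters(6704))   # 0.838
print(raw_to_meters(-8))     # 8.191 (sentinel: unknown / out-of-range depth)
```

The values from the dump above decode to plausible room-scale distances, which is a quick sanity check that the packing assumption holds for that capture mode.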

Kinect skeleton scaling strange behaviour

孤街浪徒 submitted on 2019-11-29 23:33:06
I am trying to scale a skeleton to match the sizes of another skeleton. My algorithm does the following: find the distance between two joints of the source skeleton and of the destination skeleton using the Pythagorean theorem; divide these two distances to find a multiplication factor; multiply each joint by this factor. Here is my current code: public static Skeleton ScaleToMatch(this Skeleton skToBeScaled, Skeleton skDestiny) { Joint newJoint = new Joint(); double distanciaOrigem = 0; double distanciaDestino =
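The algorithm described can be sketched in plain Python with joints as (x, y, z) tuples. One plausible source of the "strange behaviour" (an assumption about this code, since the excerpt is truncated): multiplying each joint's absolute position also scales the skeleton's distance from the sensor; scaling the offsets from a fixed root joint avoids that.

```python
import math

def joint_distance(a, b):
    """Euclidean distance between two (x, y, z) joints (Pythagorean theorem)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def scale_factor(src_a, src_b, dst_a, dst_b):
    """Ratio of the destination bone length to the source bone length."""
    return joint_distance(dst_a, dst_b) / joint_distance(src_a, src_b)

def scale_about_root(joints, root, factor):
    """Scale each joint's offset from the root joint, keeping the root fixed."""
    rx, ry, rz = joints[root]
    return {n: (rx + (x - rx) * factor,
                ry + (y - ry) * factor,
                rz + (z - rz) * factor)
            for n, (x, y, z) in joints.items()}

factor = scale_factor((0, 0, 0), (1, 0, 0), (0, 0, 0), (2, 0, 0))
print(factor)  # 2.0
```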

Kinect mapping color image to depth image in MATLAB

自古美人都是妖i submitted on 2019-11-29 16:47:05
I have collected data using a Kinect v2 sensor, and I have a depth map together with its corresponding RGB image. I also calibrated the sensor and obtained the rotation and translation matrices between the depth camera and the RGB camera, so I was able to reproject the depth values onto the RGB image, and they match. However, since the RGB image and the depth image have different resolutions, there are a lot of holes in the resulting image. So I am trying to go the other way, i.e. mapping the color onto the depth instead of depth onto color. The first problem I am having is that the RGB image has 3
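The color-onto-depth direction described above works per depth pixel, so it leaves no holes: back-project each depth pixel to 3D, transform into the RGB camera frame with the calibrated rotation and translation, project with the RGB intrinsics, and sample the color there. A sketch under assumed calibration values (the intrinsics, R, and t below are hypothetical stand-ins; real ones come from your calibration):

```python
import numpy as np

# Hypothetical calibration values for illustration only.
K_depth = np.array([[365.0, 0.0, 256.0], [0.0, 365.0, 212.0], [0.0, 0.0, 1.0]])
K_rgb   = np.array([[1060.0, 0.0, 960.0], [0.0, 1060.0, 540.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                      # rotation depth -> RGB from calibration
t = np.array([0.052, 0.0, 0.0])    # translation in metres

def color_pixel_for_depth_pixel(u, v, depth_m):
    """Back-project a depth pixel to 3D, move it into the RGB camera frame,
    and project it into RGB image coordinates."""
    p_depth = depth_m * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    p_rgb = R @ p_depth + t
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Depth-image centre pixel at 1 m lands slightly right of the RGB centre,
# consistent with the assumed baseline.
print(color_pixel_for_depth_pixel(256, 212, 1.0))
```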

Kinect for Windows v2 depth to color image misalignment

耗尽温柔 submitted on 2019-11-29 11:49:13
Currently I am developing a tool for the Kinect for Windows v2 (similar to the one for the XBOX ONE). I tried to follow some examples, and have a working example that shows the camera image, the depth image, and an image that maps the depth to the RGB using OpenCV. But I see that it duplicates my hand when doing the mapping, and I think it is due to something wrong in the coordinate mapper part. Here is an example of it: And here is the code snippet that creates the image (RGBD image in the

Kinect: Converting from RGB Coordinates to Depth Coordinates

£可爱£侵袭症+ submitted on 2019-11-29 00:43:57
I am using the Windows Kinect SDK to obtain depth and RGB images from the sensor. Since the depth image and the RGB images do not align, I would like to find a way of converting the coordinates of the RGB image to those of the depth image, since I want to use an image mask on the depth image that I obtained from some processing on the RGB image. There is already a method for converting depth coordinates to color-space coordinates: NuiImageGetColorPixelCoordinatesFromDepthPixel. Unfortunately, the reverse does not exist. There is only an arcane call in INuiCoordinateMapper: HRESULT
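Since the SDK only provides the depth-to-color direction, one common workaround is to build the reverse lookup yourself: map every depth pixel into color space once, record which depth pixel landed on each color pixel, and then query that table with RGB coordinates. A sketch in plain Python (the mapping function here is a toy stand-in, not the SDK call):

```python
# Sketch: invert a depth->color mapping by exhaustively tabulating it.
# depth_to_color is any callable (dx, dy) -> (cx, cy); with the real SDK this
# would wrap the depth-to-color-space mapping call.

def build_color_to_depth_lut(width, height, depth_to_color):
    """Return {(cx, cy): (dx, dy)} for every depth pixel in a width x height map."""
    lut = {}
    for dy in range(height):
        for dx in range(width):
            cx, cy = depth_to_color(dx, dy)
            lut[(int(round(cx)), int(round(cy)))] = (dx, dy)
    return lut

# Toy stand-in mapping: a fixed shift of 4 pixels in x.
lut = build_color_to_depth_lut(8, 8, lambda dx, dy: (dx + 4, dy))
print(lut[(9, 3)])  # (5, 3)
```

Color pixels that no depth pixel lands on stay absent from the table; a nearest-neighbour search over the recorded keys is one way to fill those gaps.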
