kinect-sdk

Saving raw depth data

Deadly submitted on 2019-12-06 13:47:48

I am trying to save my Kinect's raw depth data, and I don't want to use Kinect Studio because I need the raw data for further calculations. I am using the Kinect v2 and the Kinect SDK. My problem is that I only get a low frame rate for the saved data, about 15-17 FPS. Here is my frame reader (in further steps I also want to save the color stream):

frameReader = kinectSensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth);
frameReader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;

Here is the event handler:

void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e) { var reference
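A likely cause of the frame-rate drop is doing disk I/O inside the frame-arrived handler. A minimal sketch of the usual fix, under the assumption that the Kinect SDK 2.0 is in use: copy the depth frame out immediately, then write it on a background thread. The class name, queue, and output path below are my own illustrations, not from the question.

```csharp
// Sketch: copy depth data quickly in the event handler, write on a background thread.
// Assumes Kinect SDK 2.0 (Microsoft.Kinect); the depth frame is 512x424 ushorts.
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Kinect;

class DepthRecorder
{
    private readonly BlockingCollection<ushort[]> depthQueue = new BlockingCollection<ushort[]>();

    public DepthRecorder(string outputPath)
    {
        // Drain the queue on a background task so disk I/O never blocks frame delivery.
        Task.Run(() =>
        {
            using (var writer = new BinaryWriter(File.Open(outputPath, FileMode.Create)))
            {
                foreach (ushort[] frame in depthQueue.GetConsumingEnumerable())
                {
                    foreach (ushort d in frame) writer.Write(d);
                }
            }
        });
    }

    public void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
    {
        var reference = e.FrameReference.AcquireFrame();
        using (DepthFrame depthFrame = reference.DepthFrameReference.AcquireFrame())
        {
            if (depthFrame == null) return;
            // Copy out and dispose the frame immediately; holding frames stalls the pipeline.
            var data = new ushort[depthFrame.FrameDescription.LengthInPixels];
            depthFrame.CopyFrameDataToArray(data);
            depthQueue.Add(data);
        }
    }
}
```

The key point is that the handler only copies a buffer and returns; everything slow happens off the sensor's delivery thread.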

How to do Joint tracking in Kinect with a scaled Image

做~自己de王妃 submitted on 2019-12-06 09:56:59

I am trying to do some joint tracking with the Kinect (just putting an ellipse inside my right hand). Everything works fine for the default 640x480 image; I based my code on this Channel9 video. My code, updated to use the new CoordinateMapper class, is here ...

CoordinateMapper cm = new CoordinateMapper(this.KinectSensorManager.KinectSensor);
ColorImagePoint handColorPoint = cm.MapSkeletonPointToColorPoint(atualSkeleton.Joints[JointType.HandRight].Position, ColorImageFormat.RgbResolution640x480Fps30);
Canvas.SetLeft(elipseHead, (handColorPoint.X) - (elipseHead.Width / 2)); // center of the ellipse in center
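Since MapSkeletonPointToColorPoint returns coordinates in the native 640x480 color frame, drawing on an image displayed at a different size only needs a linear rescale of the returned point. A sketch under that assumption; the helper name and the 1280x960 display size are hypothetical:

```csharp
using System.Windows;

// Hypothetical helper: convert a point in the native 640x480 color frame to a
// point on an image displayed at (displayWidth x displayHeight).
static Point ScaleColorPoint(int x, int y, double displayWidth, double displayHeight)
{
    return new Point(x * displayWidth / 640.0, y * displayHeight / 480.0);
}

// Usage with the question's handColorPoint, drawing on a 1280x960 canvas:
// Point p = ScaleColorPoint(handColorPoint.X, handColorPoint.Y, 1280, 960);
// Canvas.SetLeft(elipseHead, p.X - elipseHead.Width / 2);
// Canvas.SetTop(elipseHead, p.Y - elipseHead.Height / 2);
```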

Replace pannable world map image by Google Earth globe in Kinect sample

人走茶凉 submitted on 2019-12-06 02:18:05

I need basic guidance on how to control Google Earth with Kinect hand gestures instead of mouse/keyboard navigation. I have run the Kinect Developer Toolkit samples, and there is one in C# named "interactive gallery" that allows panning a world map (a static image). Here is the link to its documentation: Kinect interactive gallery. I want to create exactly the same thing, but allowing the world globe to be panned, zoomed, and rotated using hand gestures instead of a simple map image. Currently I run Google Earth in a WPF application using the Google Earth plugin, hosted via a web browser component.
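One way to connect the two pieces is to keep the plugin in the hosted page and have the WPF gesture code call into it through the WebBrowser control's InvokeScript. A sketch, where panCamera is a hypothetical JavaScript function you would write in the hosted page against the Google Earth plugin's camera API (it is not part of any SDK):

```csharp
using System.Windows.Controls;

class EarthGestureBridge
{
    private readonly WebBrowser webBrowser; // hosts the Google Earth plugin page

    public EarthGestureBridge(WebBrowser browser)
    {
        webBrowser = browser;
    }

    // Called by the Kinect gesture recognizer with a hand-movement delta.
    public void OnPanGesture(double dx, double dy)
    {
        // "panCamera" is a hypothetical function defined in the hosted page; it
        // would adjust the plugin camera, e.g. via ge.getView().setAbstractView(...).
        webBrowser.InvokeScript("panCamera", dx, dy);
    }
}
```

The same bridge would carry zoom and rotate gestures to their own page-side functions, so the WPF side never touches the plugin API directly.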

Kinect emulation w/o actual device plugged

耗尽温柔 submitted on 2019-12-05 06:03:00

Is it possible to emulate a Kinect sensor (for use with the Kinect SDK) when the Kinect itself isn't plugged in? At first I thought Kinect Studio did exactly what I wanted, but it now seems that Kinect Studio records data streams and can "feed" them to an application, yet is unable to emulate the connection to the sensor. So at the moment I have a couple of .xed files recorded with Kinect Studio, and I can't launch any Kinect-enabled apps without getting a "Kinect sensor is not plugged in" (or similar) message. Is there any way around this? I have access to a Kinect, but it's not at the same place I intend to

Using KinectColorViewer in SDK1.5

痞子三分冷 submitted on 2019-12-04 19:16:04

I am trying to use a KinectColorViewer in a project using Kinect for Windows (SDK 1.5). In the Kinect Explorer example, the KinectColorViewer component has a KinectSensorManager that is bound to it. In the XAML file we have:

<kt:KinectColorViewer x:Name="ColorViewer" KinectSensorManager="{Binding KinectSensorManager}" CollectFrameRate="True" RetainImageOnSensorChange="True" />

I have a lot of trouble reproducing the same concept in other projects. I have used the Microsoft.Kinect.Toolkit's KinectSensorChooser, KinectSensorChooserUI and the Microsoft.Samples.Kinect.WpfViewers
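The binding above only resolves if a KinectSensorManager is reachable from the DataContext and kept pointed at whichever sensor the chooser selects. A sketch of that wiring, assuming the Microsoft.Kinect.Toolkit and Microsoft.Samples.Kinect.WpfViewers assemblies are referenced as in the Kinect Explorer sample (the window name and stream choice are my own):

```csharp
// Sketch: give the KinectColorViewer the KinectSensorManager its binding expects,
// and keep that manager synchronized with the sensor the chooser picks.
using Microsoft.Kinect.Toolkit;
using Microsoft.Samples.Kinect.WpfViewers;

public partial class MainWindow
{
    private readonly KinectSensorChooser sensorChooser = new KinectSensorChooser();

    // The {Binding KinectSensorManager} in the XAML resolves to this property.
    public KinectSensorManager KinectSensorManager { get; private set; }

    public MainWindow()
    {
        InitializeComponent();
        KinectSensorManager = new KinectSensorManager();
        DataContext = this;

        sensorChooser.KinectChanged += (s, e) =>
        {
            // Hand the newly selected sensor to the manager the viewer is bound to.
            KinectSensorManager.KinectSensor = e.NewSensor;
            if (e.NewSensor != null)
            {
                KinectSensorManager.ColorStreamEnabled = true;
            }
        };
        sensorChooser.Start();
    }
}
```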

Visual Studio 2013 and Kinect SDK 2.0: cannot find or include <NuiApi.h>

十年热恋 submitted on 2019-12-04 16:18:47

I am learning Kinect development using C++ in Visual Studio 2013 (desktop version, on Windows 8.1). I have downloaded the Kinect SDK 2.0 from Microsoft. As I understand it, NuiApi.h is part of the Kinect SDK 2.0. However, I cannot include it (#include reports "Cannot open source file"). I have searched my computer for the file but couldn't find it, and reinstalling the Kinect SDK didn't help. Below is the relevant part of the code:

#include <iostream>
#include <Windows.h>
#include <kinect.h>
#include <NuiApi.h>

A similar header, NuiKinectFusionApi.h, can be included without a problem.

You are mixing the two SDK versions: NuiApi.h belongs to the Kinect SDK 1.x, while the Kinect SDK 2.0 uses Kinect.h instead.

Finger tracking in Kinect

梦想的初衷 submitted on 2019-12-04 15:28:39

I have been exploring development on the Kinect and want to be able to recognize fingers rather than only the entire hand. The skeletal API in the official Kinect SDK only has the hand joint; there is no provision for finger tracking. I have also read that Microsoft very recently added a grip-recognition API to the new SDK and might include finger tracking in future releases. My question is: given the current resources, how do I go about doing finger tracking? Are there external libraries for this? Will it be feasible to actually implement finger tracking using the Kinect, given the fact the UX guidelines

How to get the mesh from Kinect face tracking?

試著忘記壹切 submitted on 2019-12-04 15:09:04

How do I get the Kinect face-tracking mesh? This is the mesh: http://imgur.com/TV6dHBC. I have tried several ways but could not make it work, e.g. following http://msdn.microsoft.com/en-us/library/jj130970.aspx ("Snowman 3D Face Model Provided by IFTModel Interface"). The Face Tracking SDK also tries to fit a 3D mask to the user's face. The 3D model is based on the Candide-3 model (http://www.icg.isy.liu.se/candide/).

Note: this model is not returned directly on each call to the Face Tracking SDK, but can be computed from the AUs and SUs. There is no direct functionality to do that. You have to use the
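With the C# wrapper from Microsoft.Kinect.Toolkit.FaceTracking (used by the SDK's Face Tracking Basics sample), the fitted mesh can be read per frame without computing it from the AUs/SUs by hand. A sketch, assuming a FaceTracker is already set up and Track(...) has produced the frame:

```csharp
using Microsoft.Kinect.Toolkit.FaceTracking;

// Sketch: given a successfully tracked FaceTrackFrame, read the Candide-3
// mesh the Face Tracking SDK fitted to the user's face.
static void ReadMesh(FaceTrackFrame frame)
{
    if (!frame.TrackSuccessful) return;

    // 3D vertex positions of the fitted mask, one per Candide-3 feature point.
    EnumIndexableCollection<FeaturePoint, Vector3DF> shape = frame.Get3DShape();

    // Triangle indices into that vertex list; together they form the mesh
    // shown in the linked screenshot.
    FaceTriangle[] triangles = frame.GetTriangles();
}
```

From the shape and triangle lists, the mesh can be rendered or exported in whatever format the rest of the pipeline needs.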

How can I pass Kinect tracking into another form

北城以北 submitted on 2019-12-04 05:48:02

I have a Kinect project in WPF that uses the skeleton stream to track the user's left and right hands and lets me hover over buttons. I tried making a new form and just copying and pasting everything so I could create a new page, but it didn't work; I think I may have to reference the methods used in the main page, but I am unsure. I want to be able to use the skeleton stream alongside the hovering method in a new window. Any help would be appreciated. I apologize if this does not make
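Rather than copy-pasting the tracking code, one common pattern is to pass the already-running KinectSensor into the second window and let it subscribe to the same skeleton stream. A sketch under the assumption that this is a Kinect SDK 1.x project; SecondWindow is a hypothetical WPF window, not from the question:

```csharp
using System.Windows;
using Microsoft.Kinect;

public partial class SecondWindow : Window
{
    public SecondWindow(KinectSensor sensor)
    {
        InitializeComponent();
        // Reuse the stream the main window already enabled; multiple
        // handlers can listen to the same sensor event.
        sensor.SkeletonFrameReady += Sensor_SkeletonFrameReady;
    }

    private void Sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            var skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
            // ... drive this window's own hover logic with the skeleton data ...
        }
    }
}

// In the main window, when opening the new page:
// var window = new SecondWindow(this.kinectSensor);
// window.Show();
```

This keeps one sensor and one stream, with each window attaching its own handler, instead of duplicating the initialization code per form.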