openni

Workaround to get code to work in MSVC++ 2010 due to “Type name is not allowed”

风格不统一 submitted on 2019-12-06 16:20:24
I am trying to implement the finger detection whose code is linked here. As I go through the code in MSVC 2010, it gives me the error shown in the figure below. Could someone tell me why the following code gives me an error? Is it related to these questions: 1, 2, 3? Is there a possible workaround? I already included:

    #include <cstdint>
    #include <stdint.h>

I also tried:

    unsigned short depth = (unsigned short) (v[2] * 1000.0f); // hand depth
    unsigned short near = (unsigned short) (depth - 100); // near clipping plane
    unsigned short far = (unsigned short) (depth + 100); //
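A likely cause: “Type name is not allowed” in MSVC usually means a macro is colliding with an identifier, and when <windows.h> is in the include chain it defines the legacy macros near and far, which breaks variables with those names. A minimal sketch of the fix, renaming the variables (v is a placeholder hand position standing in for the question's snippet):

    #include <cstdint>

    // <windows.h> (via windef.h) still defines the legacy macros
    // 'near' and 'far'; either avoid the names or #undef them first.
    #ifdef near
    #undef near
    #endif
    #ifdef far
    #undef far
    #endif

    int main()
    {
        float v[3] = {0.0f, 0.0f, 0.85f};  // placeholder hand position (metres)
        unsigned short depth    = (unsigned short)(v[2] * 1000.0f); // hand depth in mm
        unsigned short nearClip = (unsigned short)(depth - 100);    // near clipping plane
        unsigned short farClip  = (unsigned short)(depth + 100);    // far clipping plane
        return nearClip < farClip ? 0 : 1;
    }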

How should binding.gyp be written to build a Node.js addon with OpenNI?

十年热恋 submitted on 2019-12-06 10:41:47
I'm trying to build a Node.js addon that makes use of OpenNI. I haven't used node-gyp before, so I'm trying to set up the binding.gyp file so that it includes the OpenNI library as part of the build. The code I'm actually compiling is just the Hello World example. The binding.gyp file I'm using is based on the one from NUIMotion on GitHub, which does something similar. Here's mine:

    {
      "targets": [
        {
          "target_name": "onijs",
          "sources": [ "src/main.cpp" ],
          "include_dirs": [ "./src/Include" ],
          "libraries": [ "-lOpenNI2", "-Wl,-rpath ./" ]
        }
      ]
    }

Here's what I've done (working in OSX): Created a
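One way the file might be restructured so the linker can find libOpenNI2 at both build time and run time is sketched below. The lib/ directory layout, the library_dirs value, and the macOS-only rpath condition are assumptions, not the confirmed setup from the question:

    {
      "targets": [
        {
          "target_name": "onijs",
          "sources": [ "src/main.cpp" ],
          "include_dirs": [ "./src/Include" ],
          "library_dirs": [ "<(module_root_dir)/lib" ],
          "libraries": [ "-lOpenNI2" ],
          "conditions": [
            [ "OS=='mac'", {
              "xcode_settings": {
                "OTHER_LDFLAGS": [ "-Wl,-rpath,<(module_root_dir)/lib" ]
              }
            } ]
          ]
        }
      ]
    }

The <(module_root_dir) variable is expanded by node-gyp to the addon's root, which keeps the paths independent of the directory node-gyp is invoked from.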

Convert Kinect's depth to RGB

冷暖自知 submitted on 2019-12-05 16:01:36
I'm using OpenNI and OpenCV (but without the latest OpenCV code with OpenNI support). If I just send the depth channel to the screen, it looks dark and it is difficult to make anything out. So I want to show the depth channel to the user in color, but I cannot find how to do that without losing accuracy. Now I do it like this:

    xn::DepthMetaData xDepthMap;
    depthGen.GetMetaData(xDepthMap);
    XnDepthPixel* depthData = const_cast<XnDepthPixel*>(xDepthMap.Data());
    cv::Mat depth(frame_height, frame_width, CV_16U, reinterpret_cast<void*>(depthData));
    cv::Mat depthMat8UC1;
    depth.convertTo(depthMat8UC1, CV
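One common approach is sketched below under assumptions (the function name depthToColor and the 10 m working range are illustrative, and OpenCV 2.4-era headers are assumed): downscale to 8 bits only for display, map the value to hue so the whole visible spectrum spreads over the range, and keep the original 16-bit Mat for any actual measurements, so no accuracy is lost where it matters.

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    cv::Mat depthToColor(const cv::Mat& depth16u, double maxDepthMm = 10000.0)
    {
        // Scale the working range into 8 bits for display only;
        // the caller keeps the 16-bit original for measurements.
        cv::Mat depth8u;
        depth16u.convertTo(depth8u, CV_8UC1, 255.0 / maxDepthMm);

        // Map each 8-bit value to hue: near = red, far = blue.
        cv::Mat hsv(depth8u.size(), CV_8UC3);
        for (int y = 0; y < depth8u.rows; ++y)
            for (int x = 0; x < depth8u.cols; ++x)
            {
                uchar d = depth8u.at<uchar>(y, x);
                // Hue in OpenCV is 0..179; zero depth (no reading) stays black.
                hsv.at<cv::Vec3b>(y, x) = d ? cv::Vec3b(d * 179 / 255, 255, 255)
                                            : cv::Vec3b(0, 0, 0);
            }
        cv::Mat bgr;
        cv::cvtColor(hsv, bgr, CV_HSV2BGR);
        return bgr;
    }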

OpenNI Intrinsic and Extrinsic calibration

穿精又带淫゛_ submitted on 2019-12-05 14:13:05
How would one extract the components of the intrinsic and extrinsic calibration parameters from OpenNI for a device such as the PrimeSense? After some searching I have only found how to do it through ROS; it is not clear how it would be done with just OpenNI. Source: https://stackoverflow.com/questions/41110791/openni-intrinsic-and-extrinsic-calibration
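The question is unanswered in this excerpt, but OpenNI2 does report each stream's field of view, from which approximate pinhole intrinsics can be derived. A minimal sketch, assuming OpenNI2, a centred principal point, and no lens distortion (the fx/fy formulas follow from the pinhole model; OpenNI exposes no extrinsics directly, since depth-to-colour registration is applied internally via setImageRegistrationMode()):

    #include <OpenNI.h>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Error checks omitted for brevity.
        openni::OpenNI::initialize();
        openni::Device device;
        device.open(openni::ANY_DEVICE);

        openni::VideoStream depth;
        depth.create(device, openni::SENSOR_DEPTH);

        openni::VideoMode mode = depth.getVideoMode();
        int   w    = mode.getResolutionX();
        int   h    = mode.getResolutionY();
        float hfov = depth.getHorizontalFieldOfView(); // radians
        float vfov = depth.getVerticalFieldOfView();   // radians

        // Pinhole model: fx = (W/2) / tan(hfov/2), principal point ~ centre.
        float fx = (w / 2.0f) / std::tan(hfov / 2.0f);
        float fy = (h / 2.0f) / std::tan(vfov / 2.0f);
        std::printf("fx=%.1f fy=%.1f cx=%.1f cy=%.1f\n", fx, fy, w / 2.0f, h / 2.0f);

        depth.destroy();
        device.close();
        openni::OpenNI::shutdown();
        return 0;
    }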

Measuring distance between 2 points with OpenCV and OpenNI

狂风中的少年 submitted on 2019-12-04 19:06:43
I'm playing with the built-in OpenNI access within OpenCV 2.4.0 and I'm trying to measure the distance between two points in the depth map. I've tried this so far:

    #include "opencv2/core/core.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include <iostream>

    using namespace cv;
    using namespace std;

    Point startPt(0,0);
    Point endPt(0,0);

    void onMouse( int event, int x, int y, int flags, void* )
    {
        if( event == CV_EVENT_LBUTTONUP) startPt = Point(x,y);
        if( event == CV_EVENT_RBUTTONUP) endPt = Point(x,y);
    }

    int main( int argc, char* argv[] ){
        VideoCapture capture
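A sketch of one way to finish the measurement, continuing the snippet above (capture, startPt and endPt come from the question's code, and <cmath> is additionally required): retrieve OpenCV 2.4's OpenNI point-cloud map, which stores per-pixel XYZ in metres as CV_32FC3, and take the Euclidean distance between the two clicked pixels.

    // Inside the capture loop, once both points have been clicked:
    cv::Mat pointCloud;
    capture.grab();
    capture.retrieve(pointCloud, CV_CAP_OPENNI_POINT_CLOUD_MAP); // XYZ in metres, CV_32FC3

    cv::Point3f p1 = pointCloud.at<cv::Point3f>(startPt.y, startPt.x);
    cv::Point3f p2 = pointCloud.at<cv::Point3f>(endPt.y, endPt.x);
    cv::Point3f d  = p1 - p2;
    double distMetres = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z); // straight-line distance

Note that a pixel with no depth reading comes back as (0,0,0), so it is worth checking both points are valid before trusting the result.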

Does OpenNI 2.2 support Kinect v2?

╄→гoц情女王★ submitted on 2019-12-04 14:35:46
Question: I'm using the new Kinect on Windows 8.1 and have installed OpenNI2 and NiTE2, but they can't find my Kinect. What should I do to make it run? Answer 1: OpenNI doesn't support the Kinect (v1 or v2) directly, but you may install a driver for that. I have successfully used the Kinect v1 with OpenNI on Windows and Linux... On Windows it is easier; you only need to install the Kinect SDK 1.8 for v1... I haven't tested it with v2, though I am almost sure it doesn't work... most probably you will need to wait
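As a first diagnostic it may help to list what OpenNI2 can actually see; if the Kinect's driver is not registered with OpenNI, the list will be empty. A minimal sketch, assuming OpenNI2:

    #include <OpenNI.h>
    #include <cstdio>

    int main()
    {
        if (openni::OpenNI::initialize() != openni::STATUS_OK)
        {
            std::printf("init failed: %s\n", openni::OpenNI::getExtendedError());
            return 1;
        }
        // Enumerate every device OpenNI's installed drivers expose.
        openni::Array<openni::DeviceInfo> devices;
        openni::OpenNI::enumerateDevices(&devices);
        std::printf("%d device(s) found\n", devices.getSize());
        for (int i = 0; i < devices.getSize(); ++i)
            std::printf("  %s (%s)\n", devices[i].getName(), devices[i].getUri());
        openni::OpenNI::shutdown();
        return 0;
    }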

Emgu CV and the official Microsoft Kinect SDK?

这一生的挚爱 submitted on 2019-12-04 14:10:49
Emgu CV currently allows the use of the Kinect with the OpenNI drivers. I've also seen that there exists an mssdk-openni bridge application to allow Kinects running on the official Microsoft SDK to emulate OpenNI-driven Kinects. Has anyone been successful in getting a Kinect running on the Microsoft SDK to work with Emgu CV, either with the mssdk-openni bridge or directly? Are there any tips on getting it running smoothly, or pitfalls to avoid? Yeah. I've simply installed the SDK and could capture and extract bitmaps of the video stream. The MSSDK for Kinect works just fine and is easy. You

SimpleOpenNI Record and Replay User Tracking Data

倾然丶 夕夏残阳落幕 submitted on 2019-12-04 13:33:48
Question: I am able to use SimpleOpenNI to successfully record and replay depth and RGB recordings (.oni files). I would also like to be able to track users from recorded files, in other words to easily extract silhouettes of people from a depth image. This is easy to do with SimpleOpenNI when running connected to a sensor, by calling enableUser() in the setup() method and then obtaining userMap() or userImage() during draw calls. The motivation for this is to be able to easily segment out a