openni

OpenNI and OpenCV: cv2.imshow() crashes with error: (-215:Assertion failed) dst.data == (uchar*)dst_ptr in function 'cvShowImage'

烈酒焚心 submitted on 2019-11-29 14:30:40
I am trying to receive a depth image from an Orbbec Astra Pro camera connected to a Windows 10 machine. I have therefore installed opencv-python 4.0.0.21 and primesense 2.2.0.30.post5, which seem to be the latest stable Python packages available. This is the code snippet I am experimenting with: import numpy as np import cv2 from primesense import openni2 from primesense import _openni2 as c_api openni2.initialize("./OpenNI-Windows-x64-2.3/Redist") if openni2.is_initialized(): print('openni2 ready') else: print('openni2 not ready') dev = openni2.Device.open_any() depth_stream = dev.create_depth …
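For context, a minimal sketch of how such a depth stream is typically read and displayed, assuming a 640x480 depth mode and a useful range of roughly 4 m (the Redist path is the one from the question; the scaling factor is illustrative). The key point is that cv2.imshow() expects a proper contiguous 2-D numpy array, so the raw OpenNI buffer is copied into one before display:

    import numpy as np
    import cv2
    from primesense import openni2

    openni2.initialize("./OpenNI-Windows-x64-2.3/Redist")  # path from the question
    dev = openni2.Device.open_any()
    depth_stream = dev.create_depth_stream()
    depth_stream.start()

    while True:
        frame = depth_stream.read_frame()
        # Copy the raw 16-bit buffer into a contiguous 2-D numpy array (assumed 640x480 mode).
        depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16).reshape(480, 640)
        # Scale the millimetre values to 8 bits so the window shows usable contrast.
        depth_8u = cv2.convertScaleAbs(depth, alpha=255.0 / 4000.0)
        cv2.imshow("depth", depth_8u)
        if cv2.waitKey(1) == 27:  # Esc quits
            break

    depth_stream.stop()
    openni2.unload()

If the stream runs at a different resolution, the reshape dimensions must match it, otherwise OpenCV will reject the array.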

How can I access the Kinect/device via OpenNI?

别等时光非礼了梦想. submitted on 2019-11-29 12:15:34
I was looking over the documentation trying to find anything that will allow me to access the Kinect/device. I'm trying to get accelerometer data, but I am not sure how. So far there are two things I've spotted in the guide and docs: XnModuleDeviceInterface/xn::ModuleDevice and XnModuleLockAwareInterface/xn::ModuleLockAwareInterface. I'm wondering if I can use the ModuleDevice Get/Set methods to talk to the device and ask for accelerometer data. If so, how can I get started? Also, I was thinking whether it would be possible to 'lock' OpenNI functionality temporarily while I try to get accelerometer data via …

How to Display a 3D Image when we have Depth and RGB Mats in OpenCV (captured from Kinect)

折月煮酒 submitted on 2019-11-29 04:51:55
We captured a 3D image using the Kinect with the OpenNI library and got the RGB and depth images in the form of OpenCV Mats using this code: main() { OpenNI::initialize(); puts( "Kinect initialization..." ); Device device; if ( device.open( openni::ANY_DEVICE ) != 0 ) { puts( "Kinect not found !" ); return -1; } puts( "Kinect opened" ); VideoStream depth, color; color.create( device, SENSOR_COLOR ); color.start(); puts( "Camera ok" ); depth.create( device, SENSOR_DEPTH ); depth.start(); puts( "Depth sensor ok" ); VideoMode paramvideo; paramvideo.setResolution( 640, 480 ); paramvideo.setFps( 30 ); …
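Displaying the capture "in 3D" usually means back-projecting each depth pixel to a 3-D point and colouring it with the matching RGB pixel. A rough numpy sketch of that back-projection, assuming registered depth/RGB images, depth in millimetres, and placeholder Kinect-like intrinsics (FX, FY, CX, CY are illustrative values, not calibration data from the question):

    import numpy as np

    FX, FY = 525.0, 525.0      # assumed focal lengths in pixels
    CX, CY = 319.5, 239.5      # assumed principal point for a 640x480 image

    def depth_to_point_cloud(depth_mm, bgr):
        """Back-project a depth map (uint16, millimetres) into an N x 6 array
        of X, Y, Z, R, G, B points using the pinhole camera model."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0       # metres
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        valid = z > 0                                   # drop pixels with no depth reading
        rgb = bgr[..., ::-1]                            # OpenCV stores channels as BGR
        return np.column_stack((x[valid], y[valid], z[valid],
                                rgb[..., 0][valid], rgb[..., 1][valid], rgb[..., 2][valid]))

The resulting points can then be rendered with any point-cloud viewer (PCL, Open3D, or a matplotlib 3-D scatter for a quick look).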

Finger/Hand Gesture Recognition using Kinect

流过昼夜 submitted on 2019-11-28 18:29:35
Let me explain my need before I explain the problem. I am looking to build a hand-controlled application: navigation using the palm and clicks using a grab/fist. Currently I am working with OpenNI, which sounds promising and has a few examples that turned out to be useful in my case, as it has an inbuilt hand tracker in its samples, which serves my purpose for the time being. What I want to ask is: 1) what would be the best approach to build a fist/grab detector? I trained and used AdaBoost fist classifiers on extracted RGB data, which was pretty good, but it has too many false detections to move forward.
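One lightweight alternative to a learned classifier is a geometric heuristic on the segmented hand: count the deep convexity defects of the hand contour, i.e. the gaps between extended fingers. A rough OpenCV sketch, assuming an OpenCV 4.x Python build and that a binary hand mask has already been obtained (for example by thresholding the depth image around the tracked hand position); the 25-pixel depth threshold is an illustrative value that depends on hand distance:

    import cv2

    def classify_hand(mask):
        """Rough open-palm vs. fist heuristic on a binary hand mask."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return "no hand"
        hand = max(contours, key=cv2.contourArea)          # largest blob is the hand
        hull = cv2.convexHull(hand, returnPoints=False)    # hull as contour indices
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return "fist"
        # convexityDefects stores the defect depth as a fixed-point value (pixels * 256);
        # a deep defect usually corresponds to the gap between two extended fingers.
        deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 25)
        return "open palm" if deep >= 3 else "fist"

An open palm typically produces three or four deep defects while a fist produces none, and running the rule on a depth-based mask makes it largely insensitive to lighting.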

OpenNI and OpenCV: cv2.imshow() crashes with error: (-215:Assertion failed) dst.data == (uchar*)dst_ptr in function 'cvShowImage'

六眼飞鱼酱① submitted on 2019-11-28 08:38:53
Question: I am trying to receive a depth image from an Orbbec Astra Pro camera connected to a Windows 10 machine. I have therefore installed opencv-python 4.0.0.21 and primesense 2.2.0.30.post5, which seem to be the latest stable Python packages available. This is the code snippet I am experimenting with: import numpy as np import cv2 from primesense import openni2 from primesense import _openni2 as c_api openni2.initialize("./OpenNI-Windows-x64-2.3/Redist") if openni2.is_initialized(): print('openni2 ready' …

How to initialize multiple OpenNI sensors with OpenCV

自闭症网瘾萝莉.ら submitted on 2019-11-28 06:59:10
Question: I'd like to use two Asus Xtion Pro sensors with OpenCV (2.4.4) and am not sure how to initialize both devices. I can initialize one like so: VideoCapture capture; capture.open(CV_CAP_OPENNI); How can I initialize two VideoCapture instances for two separate sensors? Answer 1: Turns out it's as simple as this: VideoCapture sensor1; sensor1.open(CV_CAP_OPENNI_ASUS); VideoCapture sensor2; sensor2.open(CV_CAP_OPENNI_ASUS+1); A very basic runnable example is: #include "opencv2/core/core.hpp" #include …
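For reference, a sketch of the same idea from Python, assuming a recent OpenCV build compiled with OpenNI support (the CAP_OPENNI_* constants only work in such builds). As in the C++ answer, the second device is addressed by adding 1 to the base constant, and the depth map is fetched per sensor with retrieve():

    import cv2

    sensor1 = cv2.VideoCapture(cv2.CAP_OPENNI_ASUS)        # first Xtion
    sensor2 = cv2.VideoCapture(cv2.CAP_OPENNI_ASUS + 1)    # second Xtion

    while sensor1.grab() and sensor2.grab():
        ok1, depth1 = sensor1.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)
        ok2, depth2 = sensor2.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)
        if ok1:
            cv2.imshow("depth 1", depth1)
        if ok2:
            cv2.imshow("depth 2", depth2)
        if cv2.waitKey(30) == 27:   # Esc quits
            break

    sensor1.release()
    sensor2.release()
    cv2.destroyAllWindows()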

How can I access the Kinect/device via OpenNI?

醉酒当歌 submitted on 2019-11-28 06:20:30
Question: I was looking over the documentation trying to find anything that will allow me to access the Kinect/device. I'm trying to get accelerometer data, but I am not sure how. So far there are two things I've spotted in the guide and docs: XnModuleDeviceInterface/xn::ModuleDevice and XnModuleLockAwareInterface/xn::ModuleLockAwareInterface. I'm wondering if I can use the ModuleDevice Get/Set methods to talk to the device and ask for accelerometer data. If so, how can I get started? Also, I was thinking whether it …

OpenNI 2 and Visual Studio 2012

自闭症网瘾萝莉.ら submitted on 2019-11-27 16:25:42
Question: I just downloaded the OpenNI 2 SDK (www.openni.org) and I am trying to set up a project in Visual Studio 2012. What I did: created a new C++ Win32 Console Application project; went to Project > MyProject Properties and, in Configuration Properties > VC++ Directories, added C:\Program Files (x86)\OpenNI2\Redist\; to Executable Directories, added C:\Program Files (x86)\OpenNI2\Include\; to Include Directories, added C:\Program Files (x86)\OpenNI2\Redist\; to Reference Directories, added C:\Program Files …

Finger/Hand Gesture Recognition using Kinect

落爺英雄遲暮 submitted on 2019-11-27 11:23:21
Question: Let me explain my need before I explain the problem. I am looking to build a hand-controlled application: navigation using the palm and clicks using a grab/fist. Currently I am working with OpenNI, which sounds promising and has a few examples that turned out to be useful in my case, as it has an inbuilt hand tracker in its samples, which serves my purpose for the time being. What I want to ask is: 1) what would be the best approach to build a fist/grab detector? I trained and used AdaBoost fist …