Question
I am working with some data that another person recorded using the OpenNI recorder module. Unfortunately, they accidentally left the mirror capability enabled during the recording, so I am having two problems: 1. un-mirroring the depth using MirrorCap, and 2. aligning the depth with the RGB using AlternateViewPointCap. I tried accessing these capabilities from my depth node as follows:
xn::Context ni_context;
xn::Player player;
xn::DepthGenerator g_depth;
xn::ImageGenerator g_image;
ni_context.Init();
ni_context.OpenFileRecording(oni_filename, player);
ni_context.FindExistingNode(XN_NODE_TYPE_DEPTH, g_depth);
ni_context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_image);
g_depth.GetMirrorCap().SetMirror(false);                    // try to un-mirror the depth
g_depth.GetAlternativeViewPointCap().SetViewPoint(g_image); // try to register depth to RGB
However, this did not work. Even after I set the mirror to false, IsMirrored() on g_depth still returns true, and the AlternateViewPointCap does not change the depth map I receive from the generator.
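In case it helps diagnose this, a check along the following lines (a sketch using the standard OpenNI 1.x capability and status helpers, with everything else as in the code above) should at least reveal whether the playback node is rejecting the calls outright:
if (!g_depth.IsCapabilitySupported(XN_CAPABILITY_MIRROR))
{
    printf("Mirror capability not supported on this (playback) node\n");
}
else
{
    // SetMirror returns an XnStatus that my code above was discarding
    XnStatus rc = g_depth.GetMirrorCap().SetMirror(FALSE);
    if (rc != XN_STATUS_OK)
        printf("SetMirror failed: %s\n", xnGetStatusString(rc));
}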
I also tried doing it through a mock node:
xn::MockDepthGenerator m_depth;
m_depth.CreateBasedOn(g_depth);
m_depth.GetMirrorCap().SetMirror(false);
m_depth.GetAlternativeViewPointCap().SetViewPoint(g_image);
xn::DepthMetaData temp;
g_depth.GetMetaData(temp);
m_depth.SetMetaData(temp);
This also does not affect the depth map I get from m_depth. I'd appreciate any and all suggestions for how to make my color and depth information align, NO MATTER HOW HACKY. This data is difficult to record and I need to use it one way or another.
My current workaround is to create the mock depth node and flip all of the pixels with a routine of my own before setting them with the SetMetaData function. Then I use OpenCV to build a perspective transform from the RGB image to the depth image by having a user click 4 corresponding points, and I apply this transform to the RGB frame to make the values line up; the OpenCV part is sketched below. It's not perfect, but it works. However, for the sake of other people who might need to use the data, I want to make a more proper fix.
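For reference, here is the sketch of that OpenCV step, assuming OpenCV's C++ API; alignRgbToDepth and the point vectors are illustrative names, and the un-mirroring itself is just a horizontal flip, e.g. cv::flip with flipCode 1:
#include <vector>
#include <opencv2/imgproc/imgproc.hpp>

// Map the RGB frame onto the depth image using a homography computed
// from 4 user-clicked point correspondences.
cv::Mat alignRgbToDepth(const cv::Mat& rgb,
                        const std::vector<cv::Point2f>& rgb_pts,   // 4 clicked points in the RGB image
                        const std::vector<cv::Point2f>& depth_pts, // the same 4 points in the depth image
                        cv::Size depth_size)
{
    cv::Mat H = cv::getPerspectiveTransform(rgb_pts, depth_pts);
    cv::Mat aligned;
    cv::warpPerspective(rgb, aligned, H, depth_size);
    return aligned;
}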
Answer 1:
Unfortunately, some design decisions in OpenNI were apparently influenced by the PrimeSense SoC; specifically, the SoC can do depth-to-RGB registration and mirroring in hardware, which means the output of the generators at recording time is all you have. Sorry.
From looking at the code in the PrimeSense driver to see how they do registration (XnDeviceSensorV2/Registration.cpp), it looks like they don't export the lens parameters in a way you can access from OpenNI, which is unfortunate. The only hacky solution I see is modifying and recompiling the driver to export that data (note that this is user-mode code, so it isn't that bad; you'll probably want to fork Avin2's SensorKinect).
Also, FYI: mock generators don't do any processing themselves; the NiRecordSynthetic sample shows how mock nodes are intended to be used, and a sketch of that pattern follows.
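Roughly (a sketch from memory, error checks omitted; "fixed.oni" is a hypothetical output name):
// Replay the recording, rewrite each depth frame, push it into a mock
// node, and record the mock node into a new file.
xn::MockDepthGenerator mockDepth;
mockDepth.CreateBasedOn(g_depth);

xn::Recorder recorder;
recorder.Create(ni_context);
recorder.SetDestination(XN_RECORD_MEDIUM_FILE, "fixed.oni");
recorder.AddNodeToRecording(mockDepth);

player.SetRepeat(FALSE);
XnUInt32 nFrames = 0;
player.GetNumFrames(g_depth.GetName(), nFrames);

xn::DepthMetaData md;
for (XnUInt32 i = 0; i < nFrames; ++i)
{
    player.ReadNext();
    g_depth.GetMetaData(md);

    md.MakeDataWritable();                    // copy the frame so its pixels can be edited
    xn::DepthMap& map = md.WritableDepthMap();
    for (XnUInt32 y = 0; y < map.YRes(); ++y) // horizontal flip = un-mirror
        for (XnUInt32 x = 0; x < map.XRes() / 2; ++x)
        {
            XnDepthPixel tmp = map(x, y);
            map(x, y) = map(map.XRes() - 1 - x, y);
            map(map.XRes() - 1 - x, y) = tmp;
        }

    mockDepth.SetData(md);
    recorder.Record();                        // serialize the rewritten frame
}
The point is that the mock node only republishes whatever frames you push into it with SetData, so any flipping has to happen in your own loop.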
Answer 2:
Adding to Roee's answer: you can access the lens data from the Kinect using OpenNI; it is just a bit tricky, because you have to know the name and type of the property you are looking for. For instance, this code extracts the ZeroPlanePixelSize (ZPPS) and ZeroPlaneDistance (ZPD) of a depth generator, which are later used to transform projective points into real-world points (they vary from device to device).
XnUInt64 zpd;   // "ZPD": ZeroPlaneDistance
XnDouble zpps;  // "ZPPS": ZeroPlanePixelSize
g_DepthGenerator.GetIntProperty("ZPD", zpd);
g_DepthGenerator.GetRealProperty("ZPPS", zpps);
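For what it's worth, the conversion these parameters feed looks roughly like the sketch below. It assumes 640x480 depth output with depth values in millimeters; the factor of 2 reflects the usual assumption that ZPPS is given at the sensor's SXGA resolution, so verify it against your device:
// Projective (u, v, depth) -> real-world (x, y, z), all in mm.
double pixel_size = zpps * 2.0;               // mm per pixel at the zero plane
double z = (double)depthMap[v * 640 + u];     // depth at pixel (u, v)
double x = (u - 320) * z * pixel_size / (double)zpd;
double y = (v - 240) * z * pixel_size / (double)zpd;
In practice xn::DepthGenerator::ConvertProjectiveToRealWorld() does this conversion for you; the raw properties mainly matter if you need the intrinsics themselves, e.g. to redo the depth-to-RGB registration.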
You can probably get the algorithms and Kinect parameters you need by looking through Avin2's SensorKinect sources and finding where that depth-to-RGB viewpoint transformation is actually done.
Source: https://stackoverflow.com/questions/9677612/possible-to-change-alternateviewpointcap-or-mirrorcap-from-playback