I am working on a Kinect project using the infrared view and depth view. In the infrared view, using the CVBlob library, I am able to extract some 2D points of interest. I want to map these 2D points to the depth view and recover their 3D coordinates.
There is no offset between the "IR view" and the "depth view", primarily because they are the same thing.
The Kinect has two cameras: an RGB color camera and a depth camera, which uses an IR blaster to generate a light field that is used when processing the data. These give you a color video stream and a depth data stream; there is no "IR view" separate from the depth data.
UPDATE:
They are actually the same thing. What you are referring to as a "depth view" is simply a colorized version of the "IR view"; the black-and-white image is the "raw" data, while the color image is a processed version of the same data.
In the Kinect for Windows Toolkit, have a look at the KinectWpfViewers project (if you installed the KinectExplorer-WPF example, it should be there). In there are the KinectDepthViewer and DepthColorizer classes. They will demonstrate how the colorized "depth view" is created.
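For illustration only (this is not the toolkit's actual DepthColorizer, just a minimal sketch of the same idea, with assumed clipping constants), colorizing boils down to mapping each raw depth sample to a displayable intensity:

#include <cstdint>
#include <vector>

// Sketch: map 16-bit depth samples (in mm) to grayscale pixels.
// MIN_MM and MAX_MM are assumed clipping bounds, not toolkit values.
std::vector<uint8_t> ColorizeDepth(const uint16_t* depthMM, size_t count)
{
    const uint16_t MIN_MM = 800, MAX_MM = 4000;
    std::vector<uint8_t> gray(count);
    for (size_t i = 0; i < count; ++i)
    {
        uint16_t d = depthMM[i];
        if (d < MIN_MM || d > MAX_MM)
            gray[i] = 0;                        // invalid or out of range
        else                                    // near = bright, far = dark
            gray[i] = (uint8_t)(255 - (d - MIN_MM) * 255 / (MAX_MM - MIN_MM));
    }
    return gray;
}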
UPDATE 2:
Per the comments below, what I've said above is almost entirely junk. I'll likely edit it out or delete my answer in full in the near future; until then it shall stand as a testament to my once-invalid beliefs about what was coming from where.
Anyways... have a look at the CoordinateMapper class as another possible solution. The link will take you to the managed code docs (which is what I'm familiar with); I'm looking around the C++ docs to see if I can find the equivalent.
I've used this to map the standard color and depth views. It may map the IR view just as well (I don't see why not), but I'm not 100% sure of that.
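In the native SDK 1.x, the closest C++ equivalent I'm aware of is NuiImageGetColorPixelCoordinatesFromDepthPixelAtResolution; here is a minimal sketch, assuming 640x480 for both streams (depthX, depthY and packedDepth are placeholder variables):

#include <NuiApi.h>

// Sketch: map one depth pixel to the matching color-image pixel.
// depthX, depthY and packedDepth (the depth value including the
// player-index bits) are placeholders for values you already have.
LONG colorX = 0, colorY = 0;
HRESULT hr = NuiImageGetColorPixelCoordinatesFromDepthPixelAtResolution(
    NUI_IMAGE_RESOLUTION_640x480,   // color stream resolution
    NUI_IMAGE_RESOLUTION_640x480,   // depth stream resolution
    NULL,                           // optional view area
    depthX, depthY, packedDepth,
    &colorX, &colorY);
if (SUCCEEDED(hr))
{
    // (colorX, colorY) is where the depth sample lands in the color image
}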
The depth and color streams are not taken from the same point, so they do not correspond to each other perfectly. Also, their FOV (field of view) is different.
Here are my corrections for 640x480 resolution for both streams:
// map depth-image pixel (x,y) to color-image pixel (ax,ay)
if (valid_depth)
{
    ax = (((x + 10 - xs2) * 241) >> 8) + xs2;   // +10 px x-offset, ~241/256 scale
    ay = (((y + 30 - ys2) * 240) >> 8) + ys2;   // +30 px y-offset, ~240/256 scale
}
x,y are input coordinates in the depth image
ax,ay are output coordinates in the color image
xs,ys = 640,480 (stream resolution)
xs2,ys2 = 320,240 (half resolution, i.e. the image center)
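As a usage sketch, the same correction wrapped into a self-contained helper (the function name is mine):

// Sketch: the correction above as a reusable function.
void DepthToColor(int x, int y, int &ax, int &ay)
{
    const int xs2 = 320, ys2 = 240;             // image center for 640x480
    ax = (((x + 10 - xs2) * 241) >> 8) + xs2;
    ay = (((y + 30 - ys2) * 240) >> 8) + ys2;
}

// usage: int ax, ay; DepthToColor(100, 200, ax, ay);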
As you can see, my Kinect also has a y-offset, which is weird (even bigger than the x-offset). My conversion works well at ranges up to 2 m; I did not measure it further, but it should work beyond that as well.
Also, do not forget to compute the space coordinates from the raw depth and the depth-image coordinates:
pz=0.8+(float(rawdepth-6576)*0.00012115165336374002280501710376283); // raw depth -> meters
px=-sin(58.5*deg*float(x-xs2)/float(xs))*pz;  // 58.5 deg horizontal FOV
py=+sin(45.6*deg*float(y-ys2)/float(ys))*pz;  // 45.6 deg vertical FOV
pz=-pz;                                       // flip Z for my coordinate system
px,py,pz is the point coordinate in [m] in space, relative to the Kinect. I use a coordinate system for the camera with the opposite Z direction, hence the sign negation.
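Put together as a sketch (deg is assumed to be the degree-to-radian factor, i.e. M_PI/180, since it is not defined above):

#include <math.h>

// Sketch: full conversion from depth-image pixel (x,y) and raw depth to a
// 3D point (px,py,pz) in meters, using the calibration constants above.
void DepthToSpace(int x, int y, int rawdepth,
                  float &px, float &py, float &pz)
{
    const float deg = (float)(M_PI / 180.0);    // assumed deg->rad factor
    const int xs = 640, ys = 480, xs2 = 320, ys2 = 240;
    float z = 0.8f + (float)(rawdepth - 6576) * 0.00012115165336374002f;
    px = -sinf(58.5f * deg * (float)(x - xs2) / (float)xs) * z;
    py = +sinf(45.6f * deg * (float)(y - ys2) / (float)ys) * z;
    pz = -z;                                    // opposite-Z camera system
}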
PS: I have the old model 1414, so newer models probably have different calibration parameters.
I created a blog showing the IR and Depth views:
http://aparajithsairamkinect.blogspot.com/2013/06/kinect-infrared-and-depth-views_6.html
This code works for many tracker positions in front of the Kinect:
coordinates3D[0] = coordinates2D[0];
coordinates3D[1] = coordinates2D[1];
// look up the depth at the 2D point; +23 corrects the y-offset between
// views, >> 3 drops the 3 player-index bits of the packed depth value
coordinates3D[2] = ((USHORT*)LockedRect.pBits)
    [(int)(coordinates2D[1] + 23) * Width + (int)coordinates2D[0]] >> 3;
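For context, LockedRect here would come from locking a depth frame; below is a minimal sketch with the native SDK 1.x calls (error handling omitted; hDepthStream is assumed to have been opened with NuiImageStreamOpen):

// Sketch: obtaining LockedRect from the depth stream (SDK 1.x).
const NUI_IMAGE_FRAME *pFrame = NULL;
if (SUCCEEDED(NuiImageStreamGetNextFrame(hDepthStream, 0, &pFrame)))
{
    NUI_LOCKED_RECT LockedRect;
    pFrame->pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
    // ... read the depth pixels from LockedRect.pBits as above ...
    pFrame->pFrameTexture->UnlockRect(0);
    NuiImageStreamReleaseFrame(hDepthStream, pFrame);
}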