Generate and export point cloud from Project Tango

Pado

Yes, you have to use TangoPoseData.

I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.

Here's how I solved this:
I started from java_point_to_point_example. In that example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to the base coordinate frame pair.

First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above).

private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    //IMU to RGB camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    //IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    //IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
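For context, this is roughly where that call sits in my activity (a minimal sketch; mTango, mExtrinsics and setTangoListener() are names from my app, not the Tango API):

private Tango mTango;
private DeviceExtrinsics mExtrinsics;

private void setTangoListener() {
    //...connect your OnTangoUpdateListener here...
    //Extrinsics are fixed for a given device, so querying them once is enough.
    mExtrinsics = setupExtrinsics(mTango);
}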

Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics, this is pretty simple:

public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData devicePose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();

    //Pose of the depth camera wrt the device, built from the extrinsics.
    TangoPoseData device_T_depth = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());

    //Read the buffer three floats at a time (x, y, z) and bring each
    //point into the base frame of reference.
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                device_T_depth,
                devicePose
        );
        normalizedCloud.add(rotatedV);
    }

    return normalizedCloud;
}
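I call normalize() from onXyzIjAvailable, asking for the device pose at the exact timestamp of the cloud. A sketch of the call site (assuming the mTango and mExtrinsics fields from above):

@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    //Device pose wrt start of service at the time the cloud was captured.
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);
    TangoPoseData devicePose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
    ArrayList<Vector3> points = normalize(xyzIj, devicePose, mExtrinsics);
    //...accumulate or export the points...
}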

This should be enough: now you have a point cloud with respect to your base frame of reference. If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
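Since the goal here is to export the cloud, here is a minimal sketch that dumps the accumulated points to an ASCII PLY file (the method is mine, not part of the Tango API; Vector3 is Rajawali's org.rajawali3d.math.vector.Vector3):

//Needs java.io.* and java.util.List.
public void exportToPly(List<Vector3> points, File file) throws IOException {
    PrintWriter out = new PrintWriter(new FileWriter(file));
    //Minimal ASCII PLY header.
    out.println("ply");
    out.println("format ascii 1.0");
    out.println("element vertex " + points.size());
    out.println("property float x");
    out.println("property float y");
    out.println("property float z");
    out.println("end_header");
    //One vertex per line.
    for (Vector3 p : points) {
        out.println((float) p.x + " " + (float) p.y + " " + (float) p.z);
    }
    out.close();
}

You can then open the .ply file in MeshLab or CloudCompare to inspect the result.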

There is another way to do this with rotation matrices, explained here.
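Roughly, that approach chains the transforms yourself instead of going through getPointInEngineFrame. A sketch of the idea, assuming ScenePoseCalculator.tangoPoseToMatrix() and Rajawali's Matrix4/Vector3 from the same utils package (check the method names against your version; this yields the point in Tango's base frame, without the extra OpenGL-frame conversion that getPointInEngineFrame applies):

//base_T_depth = base_T_device * device_T_depth
Vector3 depthPointToBaseFrame(double x, double y, double z,
                              TangoPoseData devicePose, DeviceExtrinsics extrinsics) {
    Matrix4 base_T_device = ScenePoseCalculator.tangoPoseToMatrix(devicePose);
    Matrix4 device_T_depth = extrinsics.getDeviceTDepthCamera();
    //Rajawali's multiply() mutates the receiver, hence the clone().
    Matrix4 base_T_depth = base_T_device.clone().multiply(device_T_depth);
    //multiply() transforms the vector in place by the 4x4 matrix.
    return new Vector3(x, y, z).multiply(base_T_depth);
}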

My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for real-time 3D reconstruction.

At the moment I'm trying to use the Tango 3D Reconstruction Library in C through the NDK and JNI. The library is well documented, but it is very painful to set up the environment and start using JNI (in fact, I'm stuck at the moment).

Drifting

There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.

I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of lots of very small errors in estimating your pose which, added together, cause a big error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet (or whatever you want), you'll notice that the tablet never returns to its starting point, because it is drifting away.
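You can verify this with a couple of lines in your pose callback: log each translation with its timestamp and plot the trajectory afterwards (mCsvWriter is just a hypothetical PrintWriter of mine):

@Override
public void onPoseAvailable(TangoPoseData pose) {
    //translation is [x, y, z] in meters wrt the base frame.
    float[] t = pose.getTranslationAsFloats();
    mCsvWriter.println(pose.timestamp + "," + t[0] + "," + t[1] + "," + t[2]);
}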
The solution to that is Area Learning. If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers a lot of ground and gives you a nice introduction.

Using area learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell Tango to estimate its pose not relative to where it was when you launched the app, but relative to some fixed point in the area. Here's my code:

private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
    new ArrayList<TangoCoordinateFramePair>();
//The initializer must be static, since FRAME_PAIRS is a static field.
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}

Now you can use this FRAME_PAIRS list as usual.
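"As usual" means passing the list when you connect your listener, e.g. (a sketch; implement the remaining callbacks as well):

mTango.connectListener(FRAME_PAIRS, new OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        //pose is now wrt COORDINATE_FRAME_AREA_DESCRIPTION.
    }
    //...other callbacks (onXyzIjAvailable, onTangoEvent, ...)...
});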

Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:

  • TangoConfig.KEY_BOOLEAN_LEARNINGMODE
  • TangoConfig.KEY_STRING_AREADESCRIPTION

Here's how I initialize TangoConfig in my app:

TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
//Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
//Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
//If Tango gets stuck, it tries to recover by itself.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
//Tango tries to store and remember places and rooms;
//this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
//Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.

PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
