Question
As shown in the Project Tango GTC video, local features are extracted and tracked for motion estimation, which is then fused with accelerometer data. Since developers may need to track features in their own apps, I was wondering whether there is a way to get those features through the APIs.
Although it is possible to extract some points and estimate their flow using the 6DOF pose returned by the APIs (see the sketch below), this adds extra overhead. Another issue with this approach is that the pure visual flow (including outliers) is not obtainable, since the pose estimate is already influenced by IMU data.
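For reference, here is a minimal sketch of the pose-based approach I mean, using OpenCV's cv::projectPoints: project a 3D point through two camera poses and take the difference of the projections as the predicted flow. The intrinsics, distortion coefficients, 3D point, and pose values below are all placeholders, not values taken from the Tango APIs:

```cpp
// Predict the image-plane flow of a known 3D point from two 6DOF poses.
// Build with: g++ flow.cpp `pkg-config --cflags --libs opencv4`
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Placeholder pinhole intrinsics (fx, fy, cx, cy) -- not real Tango values.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 500, 0, 320,
                                           0, 500, 240,
                                           0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);  // assume no lens distortion

    // A 3D point in the world frame (placeholder).
    std::vector<cv::Point3d> world = {{0.2, 0.1, 2.0}};

    // Two world-to-camera poses, e.g. from consecutive pose updates.
    cv::Mat rvec0 = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat tvec0 = cv::Mat::zeros(3, 1, CV_64F);
    cv::Mat rvec1 = (cv::Mat_<double>(3, 1) << 0.0, 0.02, 0.0);  // small yaw
    cv::Mat tvec1 = (cv::Mat_<double>(3, 1) << 0.05, 0.0, 0.0);  // small shift

    std::vector<cv::Point2d> px0, px1;
    cv::projectPoints(world, rvec0, tvec0, K, dist, px0);
    cv::projectPoints(world, rvec1, tvec1, K, dist, px1);

    // The predicted flow is the difference of the two projections.
    cv::Point2d flow = px1[0] - px0[0];
    std::cout << "predicted flow: " << flow << std::endl;
    return 0;
}
```

Flow predicted this way follows the fused pose, so it can never expose the raw (outlier-containing) visual tracks that the question is after.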
So my question is: if these features are already tracked using hardware-accelerated algorithms, how can we get them through the APIs without re-implementing the same thing and doing redundant work? Any answers and suggestions would be appreciated.
Answer 1:
It is straightforward to compile OpenCV for the Tango with NVIDIA's TADP package; use version 3.0r4. You may need to merge in some OpenCV4Android bits, but that's easy, and the OpenGL ES examples will fail on the device, but don't sweat it.
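To illustrate what this enables, here is a minimal sketch of a do-it-yourself feature tracker with OpenCV (Shi-Tomasi corners plus pyramidal Lucas-Kanade flow). The frame file names are placeholders; in a real app the frames would come from the Tango camera stream:

```cpp
// Detect corners in one frame and track them into the next with KLT.
// Build with: g++ klt.cpp `pkg-config --cflags --libs opencv4`
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <iostream>

int main() {
    // Placeholder frames; on Tango these would come from the camera feed.
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || next.empty()) {
        std::cerr << "could not load frames" << std::endl;
        return 1;
    }

    // Detect up to 200 Shi-Tomasi corners in the first frame.
    std::vector<cv::Point2f> pts0;
    cv::goodFeaturesToTrack(prev, pts0, 200, 0.01, 10);

    // Track them into the second frame with pyramidal Lucas-Kanade.
    std::vector<cv::Point2f> pts1;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, next, pts0, pts1, status, err);

    // Count the points that were tracked successfully.
    int tracked = 0;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) ++tracked;
    std::cout << tracked << " of " << pts0.size()
              << " features tracked" << std::endl;
    return 0;
}
```

Note that this runs on whatever you compile it for and is independent of the hardware-accelerated tracker inside the Tango stack, so it duplicates work; that is exactly the redundancy the question is asking how to avoid.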
Answer 2:
Google released the "Project Tango ADF Inspector" on the Play Store. I haven't actually had any time to play with it, but it's the first thing to offer any look inside that data. I think Google considers this data sensitive and is cautious in this area, with good reason: if you look for the starred "important" note on this page, you should get a feel for the sensitivity of that issue.
Source: https://stackoverflow.com/questions/30601755/how-is-it-possible-to-get-tracked-features-from-tango-apis-used-for-motion-track