google-project-tango

Accessing Color Frames with Unity and Tango 'Leibniz'

Submitted by 安稳与你 on 2019-12-04 19:14:34
I'm just starting to tinker with Tango and Unity. Unfortunately, there doesn't seem to be any documentation or examples showing how to access color data in Unity with the latest release. I've been using the motion-tracking example from GitHub (https://github.com/googlesamples/tango-examples-unity) as a starting point, trying to read incoming color frames the same way pose and depth data are read. I'm assuming the best way is to go through the "ITangoVideoOverlay" interface and the "OnTangoImageAvailableEventHandler" callback. All I am trying to do right now is to get the…

Error: Execution failed for task ':hello_motion_tracking:transformNative_libsWithStripDebugSymbolForDebug'

Submitted by 痴心易碎 on 2019-12-04 16:53:30
When I try to launch the project from the Tango tutorial, an error like this pops up. Where should I look to fix this problem? Info shown on Android:

As mentioned above, this is indeed a compatibility issue with Android Studio 2.2. A workaround is to set both targetSdkVersion and compileSdkVersion to 22. This seems to be an incompatibility between the current version of the samples and Android Gradle plug-in version 2.2.1, which is the one that Android Studio kindly offers to upgrade the project to when you import it. Could you please try downgrading to Android Gradle plug-in version…
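The workaround described above can be sketched as a module-level build.gradle fragment. This is a minimal sketch, assuming the sample's standard project layout; every other setting in the sample's file stays as it is, and the exact values around it may differ per sample.

```groovy
// app/build.gradle (module level) — workaround sketch: pin both SDK
// versions to 22 as described in the answer above.
android {
    compileSdkVersion 22

    defaultConfig {
        targetSdkVersion 22
        // minSdkVersion and the rest stay as the sample defines them
    }
}
```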

Cannot update Tango Core - “Package file was not signed correctly”

Submitted by 笑着哭i on 2019-12-04 11:37:08
Got my Tango tablet last night and tried to get it going by installing the required packages. Updating the "Project Tango Core" app failed. Here's the error I get: Anybody else seen this and know how to fix it? Here's some info that may be required:

Looking at the build number you have pictured above, this is an old OS build. To fix this issue you need to update the system software (Settings -> About Tablet -> System Updates). Once you have updated to the latest software version, currently Cantor, this issue should resolve itself. Edit: If your tablet does not see the update…

Project Tango Pose data producing drift while stationary AND in motion

Submitted by 六月ゝ 毕业季﹏ on 2019-12-04 10:17:30
I am creating an augmented reality app using the Project Tango. An essential part of this is accurate position tracking. Of course I understand that no inertial tracking system is perfect, but the Tango has seemed to work pretty well so far. However, in the past few days, the translation data (x, y, z) from the Tango appears to be experiencing slight drift, even when the device is held stationary. I have the device writing its X, Y, and Z coordinates to the screen, and when the device is sitting still, with nothing in its field of view changing, the X value slowly rises, and the Y and Z values slowly…

Merging Area Description Files for Project Tango

Submitted by 吃可爱长大的小学妹 on 2019-12-04 02:21:56
Question: May I append one ADF to another? According to the docs: "Depending on your settings, you can learn a new area or append to an existing ADF." Similar to the way Tango is able to learn more about an area that has been localized, I would like Tango to learn more about an area that has been localized by appending an existing, related ADF. Tango would look for overlapping information relating the two files and translate the coordinates so the file being appended would use the…

How do I begin working on the Project Tango?

Submitted by 痞子三分冷 on 2019-12-03 17:10:51
After a couple of weeks I have been unable to get the Android toolset to a functioning level with C++. I have now been given the opportunity to use a Project Tango, and though that sounds awesome and wondrous and would open a world of opportunity for working with VR, I feel like I am stuck at step -4. My understanding is limited, so bear with me. I stumbled upon PCL, a library built for running algorithms on point cloud data; it is open source and appeared to be a wonderful solution. It is written in C++, and I have a mild understanding of both C++ and Java. I have tried using Eclipse and…

Future prospects for improvement of depth data on Project Tango tablet

Submitted by 安稳与你 on 2019-12-03 15:57:02
I am interested in using the Project Tango tablet for 3D reconstruction using arbitrary point features. In the current SDK version, we seem to have access to the following data: a 1280 x 720 RGB image, and a point cloud with 0 to ~10,000 points, depending on the environment (this seems to average between 3,000 and 6,000 points in most environments). What I really want is to be able to identify a 3D point for key points within an image. Therefore, it makes sense to project depth into the image plane. I have done this, and I get something like this: The problem with this process is that the depth points are…
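Projecting a depth point into the image plane, as described above, is the standard pinhole camera model. A minimal sketch follows, assuming the point is already expressed in the color camera frame and ignoring lens distortion; fx, fy, cx, cy stand for the focal lengths and principal point that the device reports (e.g. via its camera intrinsics), and the concrete values below are placeholders, not real calibration data.

```java
// Pinhole projection sketch: maps a 3D point (x, y, z), expressed in the
// color camera frame with z pointing forward, to pixel coordinates (u, v).
// fx, fy, cx, cy come from the camera intrinsics; distortion is ignored.
public class PinholeProjection {
    public static double[] project(double x, double y, double z,
                                   double fx, double fy, double cx, double cy) {
        double u = fx * (x / z) + cx;  // column, in pixels
        double v = fy * (y / z) + cy;  // row, in pixels
        return new double[] { u, v };
    }

    public static void main(String[] args) {
        // A point straight ahead of the camera lands on the principal point.
        double[] uv = project(0.0, 0.0, 2.0, 1042.0, 1042.0, 640.0, 360.0);
        System.out.println(uv[0] + ", " + uv[1]);  // prints 640.0, 360.0
    }
}
```

Points with z <= 0 are behind the camera and should be discarded before projecting; a real implementation would also apply the distortion model the intrinsics specify.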

How to take high-res picture while sensing depth using project tango

Submitted by 纵饮孤独 on 2019-12-02 14:14:44
Question: How do I take a picture using Project Tango? I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API, which works for grabbing a frame, but the picture quality is not great. Is there a takePicture equivalent? Note that the Java API

    public void onFrameAvailable(int cameraId) {
        if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
            mTangoCameraPreview.onFrameAvailable();
        }
    }

does not provide RGB data. If I use the Android camera to take a picture, Tango cannot sense depth. There I will…

Save frame from TangoService_connectOnFrameAvailable

Submitted by 旧城冷巷雨未停 on 2019-12-02 11:58:36
How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this:

    static void onColorFrameAvailable(void* context, TangoCameraId id,
                                      const TangoImageBuffer* buffer) {
        ...
        std::ofstream fp;
        fp.open(imagefile, std::ios::out | std::ios::binary);
        // YV12 is a full-resolution Y plane followed by quarter-resolution
        // V and U planes: width * height * 3 / 2 bytes in total, i.e.
        // height * 3 / 2 rows of width bytes (assuming row stride == width).
        int offset = 0;
        for (int i = 0; i < buffer->height * 3 / 2; i++) {
            fp.write((char*)(buffer->data + offset), buffer->width);
            offset += buffer->width;
        }
        fp.close();
    }

(Project Tango) Rotation and translation of point clouds with area learning

Submitted by 孤街浪徒 on 2019-12-02 10:45:14
I have a Java application that, when I press a button, records point cloud XYZ coordinates together with the corresponding pose. What I want is to pick an object, record a point cloud from the front and one from the back, then merge the two clouds. Obviously, to get a reasonable result, I need to translate and rotate one or both of the clouds I recorded. But I'm new to the Tango project and there are some things I must be missing. I have read about this in this post. There, @Jason Guo talks about these matrices: start_service_T_device, imu_T_device, imu_T_depth. How could I get them? Should I use…
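The matrices named above follow the usual Tango convention that a_T_b maps points expressed in frame b into frame a, so a depth point is carried into the start-of-service frame by chaining start_service_T_device * device_T_imu * imu_T_depth, where device_T_imu is the inverse of imu_T_device. The sketch below shows only that chaining with plain 4x4 row-major matrices; it is a sketch of the convention, not Tango API code, and the frame names simply mirror the post being quoted.

```java
// Sketch of composing the extrinsic chain discussed above. Convention:
// a_T_b maps homogeneous points in frame b into frame a, so
//   start_service_T_depth = start_service_T_device * device_T_imu * imu_T_depth
// with device_T_imu = inverse(imu_T_device). Matrices are 4x4 row-major
// rigid transforms [R|t; 0 0 0 1]; the values used are placeholders.
public class FrameChain {
    // Multiply two 4x4 matrices.
    public static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Invert a rigid transform [R|t]: inverse is [R^T | -R^T t].
    public static double[][] invertRigid(double[][] m) {
        double[][] inv = new double[4][4];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) inv[i][j] = m[j][i];            // R^T
            for (int j = 0; j < 3; j++) inv[i][3] -= m[j][i] * m[j][3]; // -R^T t
        }
        inv[3][3] = 1.0;
        return inv;
    }

    public static double[][] startServiceTDepth(double[][] startService_T_device,
                                                double[][] imu_T_device,
                                                double[][] imu_T_depth) {
        double[][] device_T_imu = invertRigid(imu_T_device);
        return mul(mul(startService_T_device, device_T_imu), imu_T_depth);
    }
}
```

In the Java API these inputs are typically obtained from pose queries between the corresponding coordinate frame pairs (IMU-to-device, IMU-to-depth, start-of-service-to-device); applying the resulting start_service_T_depth to both recorded clouds puts them in the same frame so they can be merged.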