google-project-tango

Mobile App - Using both Project Tango and Ionic and/or Angular

Submitted on 2019-12-11 04:40:44
Question: I am looking for guidance on a mobile app project I am working on. Most of the app can be delivered using Angular.js (and perhaps Ionic) JavaScript technology. For one component, though, I need to integrate with an API, specifically the Google Project Tango API. This API is only available in Java, C, or Unity (https://get.google.com/tango/developers/). I'm aware I am a little out of my depth, but is it possible to use both technologies in the same mobile app, and if so, can you provide some guidance?
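
A common pattern for this is to keep the Ionic/Angular UI in a Cordova WebView and expose the native Tango calls through a custom Cordova plugin. Below is a minimal sketch of that bridge; the plugin class and the "connect" action are hypothetical names, while Tango, TangoConfig, and CordovaPlugin are the real Tango Java API and Cordova types:

import org.apache.cordova.CallbackContext;
import org.apache.cordova.CordovaPlugin;
import org.json.JSONArray;
import org.json.JSONException;

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;

// Hypothetical Cordova plugin bridging the Ionic/Angular WebView to the
// Tango Java API; the class name and the "connect" action are illustrative.
public class TangoBridgePlugin extends CordovaPlugin {
    private Tango mTango;

    @Override
    public boolean execute(String action, JSONArray args,
                           final CallbackContext callbackContext) throws JSONException {
        if ("connect".equals(action)) {
            // Tango's constructor takes a Runnable that is invoked once the
            // Tango service is bound; connect from there.
            mTango = new Tango(cordova.getActivity(), new Runnable() {
                @Override
                public void run() {
                    TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
                    mTango.connect(config);
                    callbackContext.success("Tango connected");
                }
            });
            return true;
        }
        return false; // Unknown action.
    }
}

The Angular side would then reach this with cordova.exec(onOk, onErr, "TangoBridge", "connect", []), where the service name comes from the plugin's config.xml entry (also hypothetical here).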

Issue setting up the development environment for a Project Tango development device

Submitted on 2019-12-10 17:46:28
Question: I have a Project Tango Development Kit. I am interested in working on the depth data from the sensors. I have ADB set up on my machine, but Eclipse's Android tooling doesn't detect the Tango development tablet. Can anyone suggest how to set things up for the device? Thank you in advance. Answer 1: Please ensure that USB debugging is enabled so the device shows up in ADB: go to Settings > About tablet and tap Build number seven times. Then press back and go to Developer options
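
Once Developer options and USB debugging are enabled, it is worth confirming from the command line that the tablet is actually visible to ADB before going back to Eclipse:

# Restart the ADB server and list attached devices; the Tango tablet should
# appear with its serial number and the state "device" (not "unauthorized").
adb kill-server
adb start-server
adb devices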

Future prospects for improvement of depth data on Project Tango tablet

Submitted on 2019-12-09 13:21:02
Question: I am interested in using the Project Tango tablet for 3D reconstruction using arbitrary point features. In the current SDK version, we seem to have access to the following data: a 1280 x 720 RGB image, and a point cloud with 0 to ~10,000 points, depending on the environment; this seems to average between 3,000 and 6,000 points in most environments. What I really want is to be able to identify a 3D point for key points within an image. Therefore, it makes sense to project depth into the image plane. I have
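
Projecting a depth point into the image plane is the standard pinhole projection using the camera intrinsics the SDK exposes. A minimal Java sketch, assuming the point is already expressed in the color camera frame (in practice the depth-to-color extrinsics must be applied first) and ignoring lens distortion:

import com.google.atap.tangoservice.TangoCameraIntrinsics;

// Pinhole projection of a 3D point (camera frame, meters) onto the image
// plane: u = fx * x/z + cx, v = fy * y/z + cy. Intrinsics come from
// Tango#getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR).
public final class DepthProjector {
    public static double[] project(TangoCameraIntrinsics in,
                                   double x, double y, double z) {
        double u = in.fx * (x / z) + in.cx;
        double v = in.fy * (y / z) + in.cy;
        return new double[] { u, v };
    }
}

Points with z <= 0 or projections outside the image bounds should be discarded by the caller.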

Google Tango: Aligning Depth and Color Frames

Submitted on 2019-12-09 00:17:38
Question: I would like to align a (synchronous) depth/color frame pair using the Google Tango tablet, such that, assuming both frames have the same resolution, each pixel in the depth frame corresponds to the same pixel in the color frame, i.e., I would like to achieve a retinotopic mapping. How can this be achieved using the latest C API (Hilbert Release, version 1.6)? Any help on this will be greatly appreciated. Answer 1: Generating simple crude UV coordinates to map Tango point cloud points back
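
The question targets the C API, but the crude UV mapping the answer refers to is the same math in any binding: pinhole-project each point with the color camera intrinsics, then normalize by the image size. A Java sketch of that step, under the same assumption that the point is already in the color camera frame:

import com.google.atap.tangoservice.TangoCameraIntrinsics;

// Crude UV generation: project a point cloud point into the color image
// and normalize to [0, 1]. Points behind the camera (z <= 0) and
// projections outside the image should be discarded by the caller.
public final class UvMapper {
    public static float[] toUv(TangoCameraIntrinsics color,
                               double x, double y, double z) {
        double u = (color.fx * (x / z) + color.cx) / color.width;
        double v = (color.fy * (y / z) + color.cy) / color.height;
        return new float[] { (float) u, (float) v };
    }
}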

Relocalization of an ADF in learning mode not working?

Submitted on 2019-12-08 18:29:23
I see strange behaviour when trying to append to an existing ADF: I'm loading an ADF that was just recorded, and the device can easily relocalize on it. But once I load the same ADF with learning mode on (in order to extend the existing ADF), the device can no longer relocalize on it. It's easy to reproduce (see the link to the video):
- Record an ADF
- Load it, and make sure the device can relocalize
- Load it again with learning mode on; the device can no longer relocalize on it
I tried the Explorer app, the Java area-learning sample, as well as the Unity area-learning sample. In my own application I do check the
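
For reference, the configuration that reproduces this is loading an existing ADF while also enabling learning mode. A minimal Java sketch using the documented TangoConfig keys; adfUuid stands for whatever UUID the earlier recording produced:

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;

public final class AdfAppendConfig {
    // Builds the "append" configuration described above: load an existing
    // ADF *and* turn learning mode on. adfUuid is the UUID returned when
    // the ADF was first saved.
    public static TangoConfig make(Tango tango, String adfUuid) {
        TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
        config.putString(TangoConfig.KEY_STRING_AREADESCRIPTION, adfUuid);
        return config;
    }
}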

Google Project Tango NDK undefined reference on functions

Submitted on 2019-12-08 12:06:51
Question: I am getting a link error: undefined reference to 'TangoService_getConfig' (MoreTeapotsNativeActivity.cpp); ld returned 1 exit status (collect2.exe). I am working with the Tango SDK TangoSDK_Ikariotikos_C.zip in Visual Studio 2015 using VisualGDB. I have also replicated the error in Android Studio, so it isn't IDE-specific. I started with an NDK sample project to test that a native activity deploys correctly and to reduce complexity whilst troubleshooting. I have used VisualGDB
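
An undefined reference to TangoService_getConfig usually means libtango_client_api.so from the SDK archive is not being linked. A hedged Android.mk sketch of the usual prebuilt-library setup; TANGO_SDK_ROOT and the module names are placeholders for your local paths, not part of the SDK:

# Declare the Tango client stub shipped in the SDK zip as a prebuilt
# shared library so the linker can resolve TangoService_* symbols.
include $(CLEAR_VARS)
LOCAL_MODULE := tango_client_api
LOCAL_SRC_FILES := $(TANGO_SDK_ROOT)/lib/$(TARGET_ARCH_ABI)/libtango_client_api.so
LOCAL_EXPORT_C_INCLUDES := $(TANGO_SDK_ROOT)/include
include $(PREBUILT_SHARED_LIBRARY)

# Link the native activity against it.
include $(CLEAR_VARS)
LOCAL_MODULE := more_teapots
LOCAL_SRC_FILES := MoreTeapotsNativeActivity.cpp
LOCAL_SHARED_LIBRARIES := tango_client_api
LOCAL_LDLIBS := -llog -landroid
include $(BUILD_SHARED_LIBRARY)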

Is it possible to acquire raw IMU and RGB-D data with Google Tango?

Submitted on 2019-12-08 10:48:40
Question: In my lab, we are considering buying a Google Tango development kit, but first we would like to make sure that we can get what we need out of it. From what I could check online, it is possible to acquire the pose estimate calculated by the device, along with the point clouds acquired by its RGB-D camera. However, so far I could not find any references to acquiring raw IMU data, meaning the raw values obtained from the device's accelerometers and gyroscopes. Ideally, the raw IMU and RGB-D data
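
The Tango APIs themselves expose the fused pose and point clouds rather than raw inertial samples, but the stock Android sensor stack still runs on the device, so accelerometer and gyroscope values can be read the usual way. A sketch of that fallback; whether these values are truly unfiltered on Tango hardware, and how their timestamps align with the Tango clock, would need to be verified on the device:

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Reads accelerometer/gyroscope samples via the stock Android sensor stack,
// since the Tango API itself only exposes the fused pose.
public final class RawImuReader implements SensorEventListener {
    private final SensorManager mManager;

    public RawImuReader(Context context) {
        mManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        mManager.registerListener(this,
                mManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_FASTEST);
        mManager.registerListener(this,
                mManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE),
                SensorManager.SENSOR_DELAY_FASTEST);
    }

    public void stop() {
        mManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds [x, y, z] in m/s^2 (accelerometer) or rad/s
        // (gyroscope). event.timestamp is in nanoseconds on the Android
        // clock, NOT the Tango clock, so aligning with Tango poses needs care.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}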

Transforming and registering point clouds

Submitted on 2019-12-08 08:37:06
Question: I'm starting to develop with the Project Tango API. I need to save the point cloud data that I get in the OnXyzIjAvailable event; to do this, I started from the "PointCloudJava" example and wrote the point cloud coordinates to individual files (an AsyncTask is started for this purpose), so I have one file with xyz coordinates for each event. On the same event I get the corresponding transformation matrix (mRenderer.getModelMatCalculator().GetPointCloudModelMatrixCopy()). Then I've imported all this data
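
To register the frames into a common frame, each point can be multiplied by the 4x4 model matrix saved with its frame. A minimal Java sketch using android.opengl.Matrix, assuming the matrix is the column-major float[16] that GetPointCloudModelMatrixCopy() returns:

import android.opengl.Matrix;

public final class PointTransformer {
    // Transforms one point by the 4x4 model matrix saved with its frame so
    // that clouds from different frames land in a common world frame.
    // 'model' is a column-major float[16], the layout android.opengl.Matrix uses.
    public static float[] transform(float[] model, float x, float y, float z) {
        float[] in = { x, y, z, 1.0f };  // Homogeneous coordinates.
        float[] out = new float[4];
        Matrix.multiplyMV(out, 0, model, 0, in, 0);
        return new float[] { out[0], out[1], out[2] };
    }
}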

Point Cloud Unity example only renders points for the upper half of the display

Submitted on 2019-12-08 08:21:28
Question: We are trying to get the Point Cloud Unity example to work. We tried both the example from the GitHub repository: https://github.com/googlesamples/tango-examples-unity/tree/master/UnityExamples/Assets/TangoSDK/Examples/PointCloud as well as the Depth Perception tutorial: https://developers.google.com/project-tango/apis/unity/unity-prefab-depth But we only get points rendered on the upper half of the screen for some reason. When we quickly tilt the device up we can see more points, but as soon as the

Generate and export point cloud from Project Tango

Submitted on 2019-12-08 07:18:54
Question: After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's GitHub. Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y, and z values (the points). I then save these frames in a PCLManager in the form of Vector3
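
The .xyz export itself is simple: one whitespace-separated "x y z" line per point. A minimal Java sketch, assuming the TangoXyzIjData buffer layout of consecutive x, y, z floats with xyzCount points:

import com.google.atap.tangoservice.TangoXyzIjData;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.FloatBuffer;
import java.util.Locale;

public final class XyzExporter {
    // Dumps one depth frame to a .xyz text file: one "x y z" line per point.
    public static void export(TangoXyzIjData data, File outFile) throws IOException {
        FloatBuffer points = data.xyz.asReadOnlyBuffer();
        points.rewind();
        try (FileWriter writer = new FileWriter(outFile)) {
            for (int i = 0; i < data.xyzCount; i++) {
                // Arguments are read in order, so this emits x, then y, then z.
                writer.write(String.format(Locale.US, "%f %f %f%n",
                        points.get(), points.get(), points.get()));
            }
        }
    }
}

The resulting files can be loaded directly by most browser-based point cloud viewers that accept plain .xyz text.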