Project Tango provides a point cloud. How can you get the position, in pixels, of a 3D point (in meters) from the point cloud?
I tried using the projection matrix, but I get very small values (0.5, 1.3, etc.) instead of, say, 1234, 324 (in pixels).
I include the code I have tried:
// Get the current projection matrix from the renderer's camera
Matrix4 projMatrix = mRenderer.getCurrentCamera().getProjectionMatrix();

// Get all the points in the point cloud and store them as 3D points
FloatBuffer pointsBuffer = mPointCloudManager.updateAndGetLatestPointCloudRenderBuffer().floatBuffer;
Vector3[] points3D = new Vector3[pointsBuffer.capacity() / 3];
int j = 0;
// Each point is three consecutive floats (x, y, z) in the buffer
for (int i = 0; i <= pointsBuffer.capacity() - 3; i = i + 3) {
    points3D[j] = new Vector3(
            pointsBuffer.get(i),
            pointsBuffer.get(i + 1),
            pointsBuffer.get(i + 2));
    //Log.v("Points3d", "J: " + j + " X: " + points3D[j].x + "\tY: " + points3D[j].y + "\tZ: " + points3D[j].z);
    j++;
}

// Get the projection of the points on the screen
Vector3[] points2D = new Vector3[points3D.length];
for (int i = 0; i < points3D.length; i++) {
    Log.v("Points", "X: " + points3D[i].x + "\tY: " + points3D[i].y + "\tZ: " + points3D[i].z);
    points2D[i] = points3D[i].multiply(projMatrix);
    Log.v("Points", "pX: " + points2D[i].x + "\tpY: " + points2D[i].y + "\tpZ: " + points2D[i].z);
}
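As context for the small values: a projection matrix maps camera-space points into clip space, and after the perspective divide the coordinates are normalized device coordinates (roughly -1 to 1), not pixels. A minimal sketch of the viewport transform from NDC to pixels, assuming a full-screen viewport (ndcToPixels, screenWidth, and screenHeight are placeholder names, not from the Tango sample):

// Hypothetical helper: convert normalized device coordinates
// (x, y in [-1, 1] after the perspective divide) to pixel coordinates.
public static double[] ndcToPixels(double ndcX, double ndcY,
                                   int screenWidth, int screenHeight) {
    double pixelX = (ndcX + 1.0) * 0.5 * screenWidth;
    // Flip Y: NDC +Y points up, while screen +Y points down
    double pixelY = (1.0 - ndcY) * 0.5 * screenHeight;
    return new double[] { pixelX, pixelY };
}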
The example I'm using is the Point Cloud Java sample, which can be found here: https://github.com/googlesamples/tango-examples-java
UPDATE
TangoCameraIntrinsics ccIntrinsics = mTango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
double fx = ccIntrinsics.fx;
double fy = ccIntrinsics.fy;
double cx = ccIntrinsics.cx;
double cy = ccIntrinsics.cy;

double[][] projMatrix = new double[][] {
        {fx, 0,  -cx},
        {0,  fy, -cy},
        {0,  0,  1}
};
Then, to compute the projected point, I use:
for (int i = 0; i < points3D.length; i++) {
    // Column vector for the 3D point
    double[][] point = new double[][] {
            {points3D[i].x},
            {points3D[i].y},
            {points3D[i].z}
    };
    double[][] point2d = CustomMatrix.multiplyByMatrix(projMatrix, point);
    points2D[i] = new Vector2(0, 0);
    // Perspective divide: normalize by the homogeneous (third) component
    if (point2d[2][0] != 0) {
        Log.v("temp point", "pX: " + point2d[0][0] / point2d[2][0] + " pY: " + point2d[1][0] / point2d[2][0]);
        points2D[i] = new Vector2(point2d[0][0] / point2d[2][0], point2d[1][0] / point2d[2][0]);
    }
}
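(CustomMatrix.multiplyByMatrix is not a standard library call; a minimal row-by-column implementation consistent with how it is used above could look like this:)

// Sketch of a plain matrix product: multiplies an m x n matrix
// by an n x p matrix, row by column.
public static double[][] multiplyByMatrix(double[][] a, double[][] b) {
    int m = a.length, n = b.length, p = b[0].length;
    double[][] result = new double[m][p];
    for (int row = 0; row < m; row++) {
        for (int col = 0; col < p; col++) {
            for (int k = 0; k < n; k++) {
                result[row][col] += a[row][k] * b[k][col];
            }
        }
    }
    return result;
}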
But I think the results are still not what is expected; for instance, I get results like:
pX: -175.58042313027244 pY: -92.573740812066
which does not look right to me.
UPDATE
Using the color camera intrinsics as suggested gives better results, but the points are still negative: pX: -1127.8086915171814 pY: -652.5887102192332
Would it be ok to just multiply them by -1?
You have to multiply the 3D point by the RGB camera's intrinsics matrix K to obtain its pixel coordinates; the 3D points are in the depth camera's frame. You get the pixel coordinates by the following method:

p = K * P, where P = (X, Y, Z)^T is the 3D point

and

x = p_1 / p_3, y = p_2 / p_3

x and y are the pixel coordinates, and K is the intrinsics matrix constructed from the parameters returned by the getCameraIntrinsics function.
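A minimal Java sketch of this projection, using the TangoCameraIntrinsics fields already shown in the question (the helper name projectToPixel is mine; note the positive cx and cy, which is the standard pinhole convention):

// Pinhole projection: x = fx * (X / Z) + cx, y = fy * (Y / Z) + cy
// Assumes the point is in the camera frame, with Z pointing forward.
public static double[] projectToPixel(double X, double Y, double Z,
                                      TangoCameraIntrinsics intrinsics) {
    if (Z == 0) {
        return null; // no meaningful projection for a point at Z == 0
    }
    double x = intrinsics.fx * (X / Z) + intrinsics.cx;
    double y = intrinsics.fy * (Y / Z) + intrinsics.cy;
    return new double[] { x, y };
}

With the positive cx and cy terms, points inside the camera's field of view project into the 0..width and 0..height pixel range, so there should be no need to multiply the results by -1.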
Source: https://stackoverflow.com/questions/34565930/projecting-tango-3d-point-to-screen-google-project-tango