point-clouds

Kinect / Processing / SimpleOpenNI - point cloud data not being output properly

空扰寡人 submitted on 2019-12-30 05:29:13
Question: I've created a Processing sketch which saves each frame of point cloud data from the Kinect to a text file, where each line of the file is a point (or vertex) that the Kinect has registered. I plan to pull the data into a 3D program to visualize the animation in 3D space and apply various effects. The problem is, when I do this, the first frame seems proper, and the rest of the frames seem to be spitting out what looks like the first image plus a bunch of random noise. This is my code, in…

Collision Detection between two point clouds using PCL

时光毁灭记忆、已成空白 submitted on 2019-12-25 13:09:41
Question: Given two point clouds, one static and the other a mobile obstacle, we want to move the mobile point cloud obstacle in space and check whether it intersects the static point cloud at each position. Is there a function available in PCL to do this automatically, or do we have to write our own? Answer 1: The fcl (Flexible Collision Library) library can do fast collision detection. Here are the supported object shapes: sphere, box…
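(Not from the original answer.) If FCL is not an option, a binary "touching or not" test can also be written directly against PCL with a kd-tree radius search over the static cloud; the sketch below assumes pcl::PointXYZ clouds and an illustrative tolerance of 1 cm.

// Minimal sketch: brute-force proximity test in PCL. "Collision" here means any point
// of the moved obstacle lying within `tolerance` of some point of the static cloud.
// Cloud names and the tolerance value are illustrative.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/common/transforms.h>

bool cloudsCollide(const pcl::PointCloud<pcl::PointXYZ>::Ptr& staticCloud,
                   const pcl::PointCloud<pcl::PointXYZ>::Ptr& obstacle,
                   const Eigen::Affine3f& obstaclePose,
                   float tolerance = 0.01f)   // metres, assumed
{
    // Move the obstacle cloud to its current pose.
    pcl::PointCloud<pcl::PointXYZ> moved;
    pcl::transformPointCloud(*obstacle, moved, obstaclePose);

    // Index the static cloud once, then query every obstacle point.
    pcl::KdTreeFLANN<pcl::PointXYZ> tree;
    tree.setInputCloud(staticCloud);

    std::vector<int> idx;
    std::vector<float> sqDist;
    for (const auto& p : moved.points)
        if (tree.radiusSearch(p, tolerance, idx, sqDist, 1) > 0)
            return true;   // at least one pair of points closer than the tolerance
    return false;
}

Calling this once per candidate pose of the obstacle gives the yes/no answer the question asks for, at the cost of one radius search per obstacle point.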

Project Tango / ARCore area mapping

北战南征 submitted on 2019-12-25 03:11:09
Question: As we know, ARCore has all but replaced Project Tango, but I have some research projects in mind that involve area mapping, so I have a few questions regarding Tango and ARCore. For area mapping, Tango produces more precise and denser point-cloud information than ARCore, therefore if I want to "area-map", a Tango device would be better for me. Is this right? The SDKs for ARCore and Tango are the same thing, and therefore support for their methods, documentation, etc. is still effectively…

Drawing a cube for each vertex

ぃ、小莉子 submitted on 2019-12-25 00:48:29
Question: I have a list of 3D vertices which I can easily render as a point cloud by passing the whole list to my vertex shader, setting gl_Position = pos, then setting FragColor = vec4(1.0, 1.0, 1.0, 1.0) and using GL_POINTS in the drawing function. I would now like to render an actual cube at each vertex position, with the vertex being the center of the cube and some given width. How can I achieve this in the easiest and most performant way? Looping through all vertices, loading a cube into a buffer and…
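One common approach (a sketch, not taken from an answer in the thread) is instanced rendering: upload one unit cube once, upload the point list as a per-instance attribute, and add the two together in the vertex shader. The snippet assumes an OpenGL 3.3+ context is current, that cubeVbo already holds the 36 corners of a unit cube, and that shader compilation/linking happens elsewhere; all names are illustrative.

// Vertex shader: place the shared unit cube at each point's centre.
// (Compiling and linking this shader is omitted from the sketch.)
#include <GL/glew.h>
#include <vector>

const char* vs =
    "#version 330 core\n"
    "layout(location = 0) in vec3 corner;   // cube geometry, shared by all instances\n"
    "layout(location = 1) in vec3 center;   // one per point (instance)\n"
    "uniform mat4 mvp; uniform float halfWidth;\n"
    "void main() { gl_Position = mvp * vec4(center + corner * halfWidth, 1.0); }\n";

void drawCubes(GLuint vao, GLuint cubeVbo, GLuint instanceVbo,
               const std::vector<float>& centers /* flat x,y,z per point */)
{
    glBindVertexArray(vao);

    // Attribute 0: the 36 cube corners, advancing once per vertex.
    glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Attribute 1: one centre per instance (divisor = 1).
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, centers.size() * sizeof(float), centers.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);

    // One draw call renders every cube.
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36, static_cast<GLsizei>(centers.size() / 3));
}

The cube geometry is uploaded once and never looped over on the CPU, which keeps the draw cost close to that of the original GL_POINTS version.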

Web viewer for point cloud (PLY) file without faces

淺唱寂寞╮ submitted on 2019-12-24 04:06:07
Question: I am trying Three.js for web viewing of PLY files, using this example as a reference. My PLY file is just a point cloud, with only vertices and NO faces. It seems that Three.js needs faces as well to create geometry for rendering. What is the alternative to Three.js, or how do I display these files online? --UPDATE-- Based on this SO answer, I converted the PLY file into a JSON format which looks like var data = [ "-4.3529 -5.92232 21.9669", // x, y, z "108 99 74", // r, g, b "-4.25362 -5.98312 22…

Generate a point cloud from a given depth image - Matlab Computer Vision System Toolbox

我只是一个虾纸丫 submitted on 2019-12-22 12:18:46
Question: I am a beginner in Matlab and I have purchased the Computer Vision System Toolbox. I have been given 400 depth images (.PNG files), and I would like to create a point cloud from each image. I looked at the documentation of the Computer Vision System Toolbox, and there is an example of converting a depth image to a point cloud (http://uk.mathworks.com/help/vision/ref/depthtopointcloud.html): [xyzPoints,flippedDepthImage] = depthToPointCloud(depthImage,depthDevice) depthDevice = imaq.VideoDevice('kinect',2)…
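For reference, depthToPointCloud needs a live depthDevice, which is not available when only saved PNGs exist; the usual workaround is to back-project each depth pixel through the pinhole model with the Kinect intrinsics. A rough C++ sketch of that back-projection follows (the intrinsics fx, fy, cx, cy are placeholders and should be replaced with calibrated values):

// Back-projection sketch: depth map (row-major, millimetres) -> 3D points.
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

std::vector<Point3> depthToPoints(const std::vector<uint16_t>& depth,
                                  int width, int height,
                                  float fx = 525.f, float fy = 525.f,
                                  float cx = 319.5f, float cy = 239.5f)
{
    std::vector<Point3> cloud;
    cloud.reserve(static_cast<size_t>(width) * height);
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u) {
            const uint16_t d = depth[v * width + u];
            if (d == 0) continue;                  // 0 marks "no measurement" on the Kinect
            const float z = d * 0.001f;            // millimetres -> metres
            cloud.push_back({ (u - cx) * z / fx,   // pinhole model: X = (u - cx) * Z / fx
                              (v - cy) * z / fy,   //               Y = (v - cy) * Z / fy
                              z });
        }
    return cloud;
}

The same arithmetic can be written in a few vectorised lines in Matlab once the PNGs are loaded with imread.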

How to mark NULL data in Point Cloud Library (PCL) when using Iterative Closest Point (ICP)

人盡茶涼 submitted on 2019-12-22 10:34:40
Question: I'm trying to align two sets of point clouds using the Iterative Closest Point (ICP) algorithm integrated within the Point Cloud Library (PCL). I'm getting an error report saying that it can't find enough correspondence points. I have already relaxed the conditions for the parameters: setEuclideanFitnessEpsilon(-1.797e+5), setMaximumIterations(40) and setRANSACIterations(2000), and I am still having the same problem. (I haven't found much information about what these parameter values should be for a…
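For context, here is a minimal sketch of how invalid ("NULL", i.e. NaN) points are usually stripped and how the correspondence distance is widened before calling align(); the parameter values are illustrative, not recommendations:

// Sketch: remove NaN points and set a generous max correspondence distance before ICP.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/filter.h>          // pcl::removeNaNFromPointCloud
#include <pcl/registration/icp.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr alignClouds(
    pcl::PointCloud<pcl::PointXYZ>::Ptr source,
    pcl::PointCloud<pcl::PointXYZ>::Ptr target)
{
    // NaN entries have to be removed, otherwise the correspondence search can fail.
    std::vector<int> indices;
    pcl::removeNaNFromPointCloud(*source, *source, indices);
    pcl::removeNaNFromPointCloud(*target, *target, indices);

    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaxCorrespondenceDistance(0.5);   // pairs farther apart than this are rejected
    icp.setMaximumIterations(50);
    icp.setTransformationEpsilon(1e-8);

    pcl::PointCloud<pcl::PointXYZ>::Ptr aligned(new pcl::PointCloud<pcl::PointXYZ>);
    icp.align(*aligned);
    return icp.hasConverged() ? aligned : nullptr;
}

setMaxCorrespondenceDistance is typically the parameter that governs "not enough correspondences"; the fitness epsilon and RANSAC iteration count mentioned in the question do not relax that rejection.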

How to generate a 3D point cloud from a depth image and a color image acquired from Matlab

不羁的心 submitted on 2019-12-21 17:57:36
Question: I have two sets of data acquired from a Kinect: 1) a depth image of size 480*640 (uint16) from a scene, and 2) a color image of the same size (480*640*3, single) from the same scene. The question is how I can merge these data to generate a colored 3D point cloud in PLY format in Matlab. I should say that unfortunately I no longer have access to the Kinect and must use only these data. Answer 1: I've never tried to do that in Matlab, but I think this is what you are looking for: http://es…
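As a rough illustration of the same pipeline outside Matlab (an assumption-laden sketch, not the linked answer), the depth image can be back-projected, the registered RGB values attached per point, and the result written as PLY with PCL:

// Sketch: build a coloured cloud from pixel-aligned depth + RGB images and save as PLY.
// Assumes the two images are already registered; intrinsics are placeholders.
#include <cstdint>
#include <string>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>

void saveColoredPly(const std::vector<uint16_t>& depth,   // 480*640, millimetres
                    const std::vector<uint8_t>& rgb,      // 480*640*3, interleaved
                    int width, int height, const std::string& file)
{
    const float fx = 525.f, fy = 525.f, cx = 319.5f, cy = 239.5f;  // placeholder intrinsics
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u) {
            const uint16_t d = depth[v * width + u];
            if (d == 0) continue;                 // skip pixels with no depth
            pcl::PointXYZRGB p;
            p.z = d * 0.001f;                     // millimetres -> metres
            p.x = (u - cx) * p.z / fx;
            p.y = (v - cy) * p.z / fy;
            const size_t i = (static_cast<size_t>(v) * width + u) * 3;
            p.r = rgb[i]; p.g = rgb[i + 1]; p.b = rgb[i + 2];
            cloud.push_back(p);
        }
    pcl::io::savePLYFileBinary(file, cloud);      // PLY with per-vertex colour
}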

How to display a point cloud with openGL

…衆ロ難τιáo~ submitted on 2019-12-20 14:08:44
Question: I'm trying to display a point cloud with OpenGL. I have followed some tutorials and managed to display some geometric shapes, but when I try to display a point cloud read from a CSV file, it does not work. The file reading works very well. The includes: #include <iostream> #include <fstream> #include <string> #include <cstdlib> #include <stdlib.h> #include <stdio.h> #include <boost/tokenizer.hpp> #include <sstream> #include <vector> #include <SDL/SDL.h> #include <GL/gl.h> #include <GL/glu…
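For comparison, the core of a fixed-function point draw (matching the GL/GLU and SDL includes above) can be as small as the sketch below; it assumes an OpenGL context and the projection/modelview matrices are already set up, and that the CSV parser produced a flat x,y,z float array:

// Sketch: render parsed CSV points as GL_POINTS with a client-side vertex array.
#include <GL/gl.h>
#include <vector>

void drawPoints(const std::vector<float>& xyz)   // flat x,y,z triples from the CSV
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, xyz.data());  // point directly at the parsed data
    glPointSize(2.0f);
    glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(xyz.size() / 3));
    glDisableClientState(GL_VERTEX_ARRAY);
}

If nothing appears, the usual suspects are a camera that does not contain the cloud's coordinate range and data that was never flattened into one contiguous array.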

Point Cloud Library, robust registration of two point clouds

◇◆丶佛笑我妖孽 submitted on 2019-12-20 08:12:27
Question: I need to find the rigid transformation (rotation and translation) between two 3D point clouds. For this I am looking at PCL, as it seems ideal. On clean test data I have Iterative Closest Point working, but it gives strange results (although I may have implemented it incorrectly...). I have pcl::estimateRigidTransformation working, and it seems better, although I assume it will deal worse with noisy data. My question is: the two clouds will be noisy, and although they should contain the same points, there…
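One widely used recipe for noisy clouds (sketched here under assumed point types and illustrative radii, not taken from the thread) is a feature-based coarse alignment with FPFH descriptors and SampleConsensusInitialAlignment, refined afterwards with ICP:

// Sketch: FPFH + SAC-IA coarse alignment; the returned transform is then refined with ICP.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>

Eigen::Matrix4f coarseAlign(pcl::PointCloud<pcl::PointXYZ>::Ptr src,
                            pcl::PointCloud<pcl::PointXYZ>::Ptr tgt)
{
    auto features = [](pcl::PointCloud<pcl::PointXYZ>::Ptr cloud) {
        // Surface normals, then FPFH descriptors built on top of them.
        pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
        pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
        ne.setInputCloud(cloud);
        ne.setRadiusSearch(0.05);                 // illustrative radius
        ne.compute(*normals);

        pcl::PointCloud<pcl::FPFHSignature33>::Ptr fpfh(new pcl::PointCloud<pcl::FPFHSignature33>);
        pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fe;
        fe.setInputCloud(cloud);
        fe.setInputNormals(normals);
        fe.setRadiusSearch(0.1);                  // must exceed the normal radius
        fe.compute(*fpfh);
        return fpfh;
    };

    pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                         pcl::FPFHSignature33> sac;
    sac.setInputSource(src);
    sac.setSourceFeatures(features(src));
    sac.setInputTarget(tgt);
    sac.setTargetFeatures(features(tgt));
    sac.setMaximumIterations(500);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    sac.align(aligned);                           // coarse pose; hand it to ICP as the initial guess
    return sac.getFinalTransformation();
}

The coarse pose absorbs most of the noise sensitivity, so the subsequent ICP refinement only has to correct small residual errors.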