Ubuntu RGB-D SLAM v2 + Kinect2/Kinect1

Submitted by 我们两清 on 2020-01-18 08:03:11

Step 1: Installing and using the Kinect v2 camera

Installing libfreenect2

  1. Install OpenCV.
  2. Install the other dependencies:
$ sudo apt-get install build-essential cmake pkg-config libturbojpeg libjpeg-turbo8-dev mesa-common-dev freeglut3-dev libxrandr-dev libxi-dev
  3. Download the libfreenect2 driver:
$ cd ~
$ git clone https://github.com/OpenKinect/libfreenect2.git
$ cd libfreenect2
$ cd depends; ./download_debs_trusty.sh
  4. Install libusb:
$ sudo dpkg -i debs/libusb*deb
  5. Install TurboJPEG:
$ sudo apt-get install libturbojpeg libjpeg-turbo8-dev
  6. Install GLFW3:
$ sudo apt-get install libglfw3-dev
  7. Install the OpenGL support libraries.
    If the last command reports a conflict, just ignore it:
$ sudo dpkg -i debs/libglfw3*deb; sudo apt-get install -f; sudo apt-get install libgl1-mesa-dri-lts-vivid

  8. Install the OpenCL support libraries:
$ sudo apt-add-repository ppa:floe/beignet; sudo apt-get update; sudo apt-get install beignet-dev; sudo dpkg -i debs/ocl-icd*deb
  9. Build libfreenect2:
$ cd ~/libfreenect2
$ mkdir build && cd build
$ cmake .. 
$ make
$ sudo make install
  10. To install to a custom directory instead:
$ cmake ..  -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
  11. Set up the udev rules so the device can be accessed without root:
$ cd ~/libfreenect2
$ sudo cp platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/
  12. Replug the Kinect2; it should now be recognized.
  13. Run the Kinect2 test program:
    $ cd ~/libfreenect2/build
    $ ./bin/Protonect
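If Protonect fails to start, it is worth first confirming that the sensor actually enumerated on USB 3.0. A small helper sketch; the USB IDs 045e:02c4 and 045e:02d8 are the ones matched by libfreenect2's 90-kinect2.rules, and the function name is ours:

```shell
# Report whether a Kinect v2 appears in `lsusb` output.
# 045e:02c4 / 045e:02d8 are the USB IDs listed in libfreenect2's udev rules.
kinect2_present() {
  printf '%s\n' "$1" | grep -Eq '045e:02(c4|d8)'
}

# Usage: only launch the test program when the device is on the bus:
#   kinect2_present "$(lsusb)" && ./bin/Protonect
```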

Step 2: Installing iai_kinect2

cd ~/catkin_ws/src/
git clone https://github.com/code-iai/iai_kinect2.git
cd iai_kinect2
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release"
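If libfreenect2 was installed to the custom prefix from step 10 rather than the system default, the iai_kinect2 build has to be told where to find it. A sketch, assuming the `$HOME/freenect2` prefix used above (the `freenect2_DIR` hint follows the iai_kinect2 README):

```shell
# Assumed custom prefix from the libfreenect2 step above.
export FREENECT2_PREFIX="$HOME/freenect2"
# Let the dynamic linker find the library at runtime:
export LD_LIBRARY_PATH="$FREENECT2_PREFIX/lib:$LD_LIBRARY_PATH"
# Point CMake's find_package(freenect2) at the install when building:
#   catkin_make -DCMAKE_BUILD_TYPE="Release" -Dfreenect2_DIR="$FREENECT2_PREFIX/lib/cmake/freenect2"
```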

Launching and usage

$ roscore
$ roslaunch kinect2_bridge kinect2_bridge.launch
$ rosrun kinect2_viewer save_seq hd cloud
or
$ rosrun kinect2_viewer kinect2_viewer
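The two invocations above are just particular argument choices: per the iai_kinect2 README, the viewer takes a resolution (hd / qhd / sd) and a visualization mode (image / cloud / both). For example:

```shell
$ rosrun kinect2_viewer kinect2_viewer sd image   # low-res color stream only
$ rosrun kinect2_viewer kinect2_viewer qhd both   # 960x540 image plus point cloud
```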

Step 3: Running RGB-D SLAM with the Kinect2

Through the two steps above we have installed the Kinect2 driver and its ROS bridge.
Now create a new rgbdslam_kinect2.launch:

<launch>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
<!-- Input data settings-->
<param name="config/topic_image_mono"              value="/kinect2/qhd/image_color_rect"/>  
<param name="config/camera_info_topic"             value="/kinect2/qhd/camera_info"/>

<param name="config/topic_image_depth"             value="/kinect2/qhd/image_depth_rect"/>

<param name="config/topic_points"                  value=""/> <!--if empty, pointcloud will be reconstructed from image and depth -->

<!-- These are the default values of some important parameters -->
<param name="config/feature_extractor_type"        value="SIFTGPU"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="SIFTGPU"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->

<param name="config/optimizer_skip_step"           value="15"/><!-- optimize only every n-th frame -->
<param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->

<param name="config/backend_solver"                value="csparse"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->

<param name="config/pose_relative_to"              value="first"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->

<param name="config/maximum_depth"           value="2"/>
<param name="config/subscriber_queue_size"         value="20"/>

<param name="config/min_sampled_candidates"        value="30"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="20"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="20"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="140"/>

<param name="config/g2o_transformation_refinement"           value="1"/>
<param name="config/icp_method"           value="gicp"/>  <!-- icp, gicp ... -->

<!--
<param name="config/max_rotation_degree"           value="20"/>
<param name="config/max_translation_meter"           value="0.5"/>

<param name="config/min_matches"           value="30"/>   

<param name="config/min_translation_meter"           value="0.05"/>
<param name="config/min_rotation_degree"           value="3"/>
<param name="config/g2o_transformation_refinement"           value="2"/>
<param name="config/min_rotation_degree"           value="10"/>

<param name="config/matcher_type"         value="SIFTGPU"/>
 -->
</node>
</launch>
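The launch file above subscribes to the qhd (960×540) topics. On weaker hardware, the sd (512×424) stream that kinect2_bridge also publishes can be substituted by changing only the input section; a sketch of that variant, with all other parameters unchanged:

```xml
<!-- sd-resolution variant of the input topics (512x424 depth-registered stream) -->
<param name="config/topic_image_mono"              value="/kinect2/sd/image_color_rect"/>
<param name="config/camera_info_topic"             value="/kinect2/sd/camera_info"/>
<param name="config/topic_image_depth"             value="/kinect2/sd/image_depth_rect"/>
```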

Open a terminal and run:

roslaunch kinect2_bridge kinect2_bridge.launch
roslaunch rgbdslam rgbdslam_kinect2.launch

Saving the point cloud and trajectory

The author provides a well-designed UI: the point cloud, trajectory, and other data can be saved via "Save" in the menu bar.
The saved point cloud can then be displayed with:

pcl_viewer path-to-pcd

For plotting the trajectory, see: https://blog.csdn.net/Felaim/article/details/80830479

Recording a dataset

roslaunch kinect2_bridge kinect2_bridge.launch
rosrun image_view image_view image:=/kinect2/qhd/image_color_rect
rosbag record -O kinect_file /tf /kinect2/qhd/image_color_rect /kinect2/qhd/camera_info /kinect2/qhd/image_depth_rect
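The recorded bag contains exactly the topics the rgbdslam launch file subscribes to, plus /tf for the transforms, so a session can be replayed later without the sensor attached. A sketch (`rosbag play --clock` republishes the recorded clock so downstream nodes see the original timestamps):

```shell
# Topics needed by the rgbdslam launch file above; /tf carries the transforms.
TOPICS="/tf /kinect2/qhd/image_color_rect /kinect2/qhd/camera_info /kinect2/qhd/image_depth_rect"
# Record (as above) and replay offline, no Kinect required:
#   rosbag record -O kinect_file $TOPICS
#   rosbag play --clock kinect_file.bag
echo "$TOPICS" | tr ' ' '\n'
```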