Aligning captured depth and RGB images

耶瑟儿~ 2021-01-07 10:47

There have been previous questions (here, here and here) related to my question; however, my question has a different aspect to it, which I have not seen in any of the previous ones.

2 Answers
  •  被撕碎了的回忆
    2021-01-07 11:43

    In general what you are trying to do from a pair of RGB and Depth images is non-trivial and ill-defined. As humans we recognise the arm in the RGB image, and are able to relate it to the area of the depth image closer to the camera. However, a computer has no prior knowledge about which parts of the RGB image it expects to correspond to which parts of the depth image.

    The reason most algorithms for this kind of alignment use camera calibration is that calibration turns the ill-posed problem into a well-posed one: the intrinsics and extrinsics pin down exactly how each depth pixel maps into the RGB image.
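
    To make that concrete, below is a minimal sketch of the calibrated case: given the depth camera intrinsics K_d, the RGB camera intrinsics K_rgb, and the extrinsic transform (R, t) between the two cameras, every depth pixel projects deterministically into the RGB image. All of the matrix values below are made-up placeholders, not real Kinect calibration parameters.

```python
import numpy as np

# Placeholder intrinsics and extrinsics (assumed values, not real Kinect data).
K_d = np.array([[585.0,   0.0, 320.0],    # depth camera intrinsics
                [  0.0, 585.0, 240.0],
                [  0.0,   0.0,   1.0]])
K_rgb = np.array([[525.0,   0.0, 320.0],  # RGB camera intrinsics
                  [  0.0, 525.0, 240.0],
                  [  0.0,   0.0,   1.0]])
R = np.eye(3)                             # depth-to-RGB rotation
t = np.array([0.025, 0.0, 0.0])           # depth-to-RGB translation, metres

def depth_pixel_to_rgb(u, v, z):
    """Project one depth pixel (u, v) with depth z (metres) into RGB pixel coords."""
    # Back-project the pixel to a 3D point in the depth camera frame.
    p_d = z * (np.linalg.inv(K_d) @ np.array([u, v, 1.0]))
    # Move the point into the RGB camera frame.
    p_rgb = R @ p_d + t
    # Project it onto the RGB image plane.
    q = K_rgb @ p_rgb
    return q[0] / q[2], q[1] / q[2]

print(depth_pixel_to_rgb(320, 240, 1.5))
```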

    However, there may still be ways to find the correspondences, particularly if you have lots of image pairs from the same Kinect, since you then need only search for one set of transformation parameters. I don't know of any existing algorithms that do exactly this, but, as you note in your question, running edge detection on both images and trying to align the resulting edge maps is a good place to start; a sketch of one way to do that follows.
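
    As a rough sketch of that edge-based idea (not a tested method), the following uses OpenCV's Canny edge detector and ECC image registration (cv2.findTransformECC) to estimate a single Euclidean warp between the two edge maps. The file names, motion model, and Canny thresholds are all assumptions, and ECC can fail to converge on sparse binary edges, so treat this purely as a starting point.

```python
import cv2
import numpy as np

# Hypothetical input files; substitute your own RGB/depth pair.
rgb = cv2.imread("rgb.png", cv2.IMREAD_GRAYSCALE)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)

# Edge maps give the two modalities a comparable appearance.
edges_rgb = cv2.Canny(rgb, 50, 150).astype(np.float32)
edges_depth = cv2.Canny(depth, 50, 150).astype(np.float32)

# Estimate a 2x3 Euclidean warp mapping the depth edges onto the RGB edges.
# The final argument (Gaussian filter size, OpenCV >= 4.1) smooths the
# edge maps, which helps ECC converge on sparse binary images.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
cc, warp = cv2.findTransformECC(edges_rgb, edges_depth, warp,
                                cv2.MOTION_EUCLIDEAN, criteria, None, 5)

# Apply the recovered warp to bring the depth image into the RGB frame.
aligned_depth = cv2.warpAffine(depth, warp,
                               (rgb.shape[1], rgb.shape[0]),
                               flags=cv2.INTER_NEAREST | cv2.WARP_INVERSE_MAP)
```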

    Finally, note that when objects get close to the Kinect, the correspondence between the RGB and depth images can become poor, even after the images have been calibrated. You can see some of this effect in your images: the 'shadow' that the hand makes in your example depth image is somewhat indicative of this.
