Question
I am doing image stitching in OpenCV: I take pictures of a planar scene from different locations and try to compose a panorama. I have modified the stitching example to fit my needs. The problem is that the OpenCV stitching pipeline assumes a pure rotation of the camera, which is not the case for me. When the pictures are taken perfectly orthogonal to the scene (no camera rotation, just translation), the result is quite good, but when there is both camera rotation and translation, the results are not satisfying.
I am able to compute the homographies between the camera positions, which is possible because the scene is planar, but I don't really know what the next step is. My idea is to warp each image with its homography so that the camera appears to face the plane orthogonally, and then apply the stitching. The problem with this is that I do not know the true locations of the feature points. How can I go about doing this? Is there anything else I could try to get better stitching results for a planar scene with arbitrary camera movement?
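As a minimal sketch of the rectification idea above (assumptions: a made-up 3x3 homography `H` and a hypothetical 640x480 image; in practice you would get `H` from `cv2.findHomography` and do the actual warp with `cv2.warpPerspective`), you can map the image corners through `H` with plain numpy to find the output canvas size and the offset translation to compose with `H` before warping:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography: lift to homogeneous
    coordinates, multiply by H, then divide by the last coordinate."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T      # homogeneous points times H^T
    return ph[:, :2] / ph[:, 2:3]          # dehomogenize

# Corners of a hypothetical 640x480 image
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)

# Example rectifying homography: a mild perspective tilt (made-up numbers)
H = np.array([[1.0,  0.1, 5.0],
              [0.0,  1.0, 3.0],
              [1e-4, 0.0, 1.0]])

warped = apply_homography(H, corners)

# The bounding box of the warped corners gives the output canvas size; the
# negative of its minimum is the offset to bake into H (as a translation)
# so the warped image lands inside the canvas.
offset = -warped.min(axis=0)
```

The same corner-mapping step is what the OpenCV stitching pipeline does internally to size its warp buffers; composing a translation with `H` and passing the result to `cv2.warpPerspective` avoids cropping.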
Answer 1:
The OpenCV stitching code works by assuming the translation is zero. So if the translation is small, it will work fine; otherwise it will reject faraway images. If you want to use a translated image set, you need `decomposeHomographyMat`, which recovers nonzero translation and rotation from the homography. But for warping and blending, OpenCV uses only the rotational camera parameters, so you would need to devise a completely new method for non-rotational stitching. You might look into Microsoft's Photosynth work; I am not sure, but it may work for purely translated images.
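To illustrate what `cv2.decomposeHomographyMat` is undoing, here is a numpy-only sketch of the underlying plane-induced homography model (all numbers are made up): a Euclidean homography between two views of a plane with unit normal `n` at distance `d` has the form `H = R + (t / d) * n^T`. In the pure-translation special case (`R = I`) the translation direction can be read straight off `H - I`; the general case has up to four candidate `(R, t, n)` solutions, which is what the OpenCV function enumerates.

```python
import numpy as np

# Plane-induced Euclidean homography model: H = R + (t / d) * n^T,
# where n is the plane's unit normal and d its distance to the first camera.
n = np.array([0.0, 0.0, 1.0])      # fronto-parallel plane (assumption)
d = 2.0                            # plane distance (made-up)
t = np.array([0.4, -0.1, 0.05])    # camera translation (made-up)
R = np.eye(3)                      # pure translation: no rotation

H = R + np.outer(t / d, n)

# In the no-rotation special case, (H - I) has rank one and equals
# outer(t/d, n), so multiplying by the unit normal n recovers t/d:
t_over_d = (H - np.eye(3)) @ n
```

Note that only the ratio `t / d` is observable from `H` alone; recovering metric translation requires knowing the plane distance (or the camera intrinsics plus extra constraints), which is one reason the stitching pipeline sidesteps translation entirely.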
Source: https://stackoverflow.com/questions/17790954/opencv-non-rotational-image-stitching