Thanks in large part to some great answers on stackoverflow (here, here, and here) I've been having some pretty good success in aligning images. There is one issue, though, as you can see below. As I stitch many images together, they get smaller and smaller.
My theory on why this is going on is that the camera wasn't exactly perpendicular to the ground, so the natural perspective of an angled camera caused the farther images to become smaller as I added more and more of them. This could very well be completely incorrect, though.
However, even when I transform the first image so that it's "as if" it was taken perpendicular to the ground (I think) the distortion still occurs.
Does the brilliant stackoverflow community have any ideas on how I can remedy the situation?
This is the process I use to stitch the images:
- Using knowledge of the corner lat/long points of the images, warp such that the first image is perpendicular to the ground. The homography I use to do this is the "base" homography.
- Find common features between each image and the last one using `goodFeaturesToTrack()` and `calcOpticalFlowPyrLK()`.
- Use `findHomography()` to find the homography between the two images. Then compose that homography with all the previous homographies to get the "net" homography.
- Apply the transformation and overlay the image with the net result of what I've done so far (see the sketch after this list).
There is one major constraint: the mosaic must be constructed one image at a time, as the camera moves. I am trying to create a real-time map as a drone is flying, fitting each image to the last, one by one.
> My theory on why this is going on is that the camera wasn't exactly perpendicular to the ground.
This is a good intuition. If the camera is angled, then as it moves towards an object, that object becomes larger in the frame. So if you're stitching that to the previous frame, the current frame needs to shrink to fit to the object in the previous frame.
Full 3x3 homographies include perspective distortion in the x and y directions, but 2x3 affine transformations do not. To stick with your current pipeline, you can try finding an affine or Euclidean (rigid) transformation instead. The difference between them is that an affine warp allows shearing and stretching separately in the x and y directions, while a Euclidean transform only allows translation, rotation, and uniform scaling (strictly speaking, with uniform scaling included it is a similarity transform, which is what OpenCV's "rigid" estimation returns). Both preserve parallel lines, whereas a full homography does not, so you could end up with a square image becoming more trapezoidal, and repeating that will shrink your image. An affine warp can still shrink in just one direction, turning a square into a rectangle, so it still might shrink. Euclidean transformations can only scale the whole square, so it still might shrink, but at least uniformly.
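To make that concrete, here is a toy numeric demo (mine, not from the original answer): the same unit square mapped through an affine matrix and through a homography with a nonzero perspective term. The affine result stays a parallelogram; the homography result does not.

```python
import cv2
import numpy as np

square = np.float32([[[0, 0]], [[1, 0]], [[1, 1]], [[0, 1]]])

# Affine: shear + scale, bottom row fixed at [0, 0, 1] -> parallel sides kept.
A = np.float32([[1.0, 0.2, 0.0],
                [0.0, 0.8, 0.0],
                [0.0, 0.0, 1.0]])

# Homography: nonzero perspective term in the bottom row -> parallelism lost.
H = np.float32([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.3, 0.0, 1.0]])

print(cv2.perspectiveTransform(square, A).reshape(-1, 2))
# [[0. 0.] [1. 0.] [1.2 0.8] [0.2 0.8]]            -- a parallelogram
print(cv2.perspectiveTransform(square, H).reshape(-1, 2))
# [[0. 0.] [0.769 0.] [0.769 0.769] [0. 1.]]       -- a trapezoid
```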
Of course, these won't be as perfect a match as `findHomography()`, but they should get you close without distorting the size as much. There are two options to find Euclidean or affine transformations with OpenCV (a sketch of both follows this list):

- `estimateRigidTransform()` instead of `findHomography()`, which gives either a rigid (Euclidean) warp with the parameter `fullAffine=False` or an affine warp with `fullAffine=True`.
- `findTransformECC()` with the optional parameter `motionType=cv2.MOTION_EUCLIDEAN` or `motionType=cv2.MOTION_AFFINE` (but affine is the default, so it's not necessary to specify).
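A hedged sketch of both options, reusing `good_prev`/`good_curr` and the grayscale frames from the pipeline sketch above. Note that `estimateRigidTransform()` exists in OpenCV 2.x/3.x but was removed in 4.x, where `estimateAffinePartial2D()` / `estimateAffine2D()` are the replacements:

```python
import cv2
import numpy as np

# Option 1: estimateRigidTransform on the matched point sets (OpenCV 2.x/3.x;
# in 4.x use estimateAffinePartial2D / estimateAffine2D instead).
M_rigid = cv2.estimateRigidTransform(good_curr, good_prev, fullAffine=False)
M_affine = cv2.estimateRigidTransform(good_curr, good_prev, fullAffine=True)

# Option 2: findTransformECC refines a 2x3 warp directly on the images
# (some OpenCV 4.x builds also require inputMask and gaussFiltSize arguments).
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
cc, warp = cv2.findTransformECC(prev_gray, curr_gray, warp,
                                cv2.MOTION_EUCLIDEAN, criteria)

# Either way the result is a 2x3 matrix, applied with warpAffine rather
# than warpPerspective.
warped = cv2.warpAffine(curr_gray, warp, (mosaic.shape[1], mosaic.shape[0]))
```

To keep composing per-frame transforms as before, promote each 2x3 matrix to 3x3 by appending the row `[0, 0, 1]` before multiplying.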
You can check out the difference between the algorithms on their documentation pages, or try both to see what works best for you.
If this doesn't work well either, you can try estimating the homography which warps a frame to be completely perpendicular to the ground. If you do that, you can try applying it to all frames first, and then matching the images. Otherwise, you'll probably want to move to more advanced methods than finding just a homography between each frame.
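A minimal sketch of that last idea, assuming `H_base` (the question's "base" homography) and the frame size `w`, `h` are already known:

```python
import cv2

# Rectify both frames into the ground-perpendicular view first, then match.
# H_base, w, h, prev_gray, curr_gray are assumed from the earlier sketches.
rect_prev = cv2.warpPerspective(prev_gray, H_base, (w, h))
rect_curr = cv2.warpPerspective(curr_gray, H_base, (w, h))
# Feature tracking between rect_prev and rect_curr should now need only a
# near-Euclidean/affine residual transform rather than a full homography.
```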
Source: https://stackoverflow.com/questions/45315541/opencv-python-eliminating-eventual-narrowing-when-stitching-images