Multiple camera image stitching

Submitted by 北城余情 on 2019-11-29 03:43:45

Question


I've been working on a project that stitches images from multiple cameras, but I think I've hit a bottleneck... I have some questions about this issue.

I plan to mount the cameras on a vehicle in the future, which means the relative positions and orientations of the cameras are FIXED.

Also, since I'm using multiple cameras and stitching their images with a HOMOGRAPHY, I'll place the cameras as close together as possible so that the errors can be reduced (they arise because the cameras' foci are not at the same position, which is unavoidable since the cameras occupy physical space).

Here's a short experiment video of mine. http://www.youtube.com/watch?v=JEQJZQq3RTY

The stitching result is quite poor, as shown there. Even though the scene captured by the cameras is static, the homography still keeps varying.

The following link contains the code I've written so far; code1.png and code2.png show part of my code in Stitching_refind.cpp.

https://docs.google.com/folder/d/0B2r9FmkcbNwAbHdtVEVkSW1SQW8/edit?pli=1

A few days ago I changed parts of the code, e.g. so that Steps 2, 3, and 4 (please check the two PNG pictures mentioned above) are performed JUST ONCE.


To sum up, my questions are:

1. Is it possible to find the overlapping regions before computing features? I don't want to compute features over the whole images, as that leads to more computation time and more mismatches. Is it possible to compute features ONLY in the overlapping region of two adjacent images?

2. What can I do to make the obtained homography more accurate? Some people mentioned CAMERA CALIBRATION and trying other matching methods. I'm still new to computer vision... I tried to study some material on camera calibration, but I still have no idea what it is for.
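On question 1: if the rig is truly fixed, the homography is known ahead of time, so the overlap can be predicted geometrically instead of discovered by matching. Below is a minimal pure-Python sketch of that idea; the 400 px translation homography and the 640×480 image size are made-up values for illustration. In OpenCV you would pass the resulting rectangle (as a binary mask) to the optional `mask` argument of a detector's `detect()` call.

```python
def apply_h(H, x, y):
    """Apply a 3x3 homography to a point; projective division gives (x', y')."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def overlap_roi(H, width, height):
    """Warp image 2's four corners into image 1's frame with H, then return
    the bounding box (x0, y0, x1, y1) clipped to image 1's bounds. Features
    only need to be detected inside this rectangle (and inside the mirrored
    ROI of image 2, obtained via the inverse homography)."""
    corners = [apply_h(H, x, y) for x, y in
               [(0, 0), (width, 0), (0, height), (width, height)]]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (max(0, min(xs)), max(0, min(ys)),
            min(width, max(xs)), min(height, max(ys)))

# Hypothetical H: image 2 sits 400 px to the right in image 1's frame.
H = [[1, 0, 400], [0, 1, 0], [0, 0, 1]]
print(overlap_roi(H, 640, 480))  # -> (400.0, 0.0, 640, 480)
```

With this ROI, image 1 only needs features in its right-hand 240-pixel strip, which both cuts computation and removes candidate matches that could never be correct.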

About 2 months ago I asked a similar question here: Having some difficulty in image stitching using OpenCV, where one of the answerers, Chris, said:

It sounds like you are going about this sensibly, but if you have access to both of the cameras, and they will remain stationary with respect to each other, then calibrating offline, and simply applying the transformation online will make your application more efficient.
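The offline/online split Chris describes can be sketched as follows. A hypothetical hard-coded matrix stands in for the offline calibration result; the per-frame loop only reuses it, so nothing is detected or matched online (with OpenCV, the online step would be a single `cv2.warpPerspective` call per frame).

```python
OFFLINE_H = [[1.0, 0.0, 400.0],   # hypothetical H, computed ONCE offline
             [0.0, 1.0, 0.0],
             [0.0, 0.0, 1.0]]

def warp_point(H, x, y):
    """Map one image-2 pixel into image 1's frame (projective division)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def stitch_frame(frame_id, H=OFFLINE_H):
    """Online step: just apply the stored H; no feature detection, no
    matching, no re-estimation. One pixel stands in for the whole frame."""
    return warp_point(H, 100, 50)

# Every frame gets the identical mapping, so the seam cannot "keep varying".
results = [stitch_frame(i) for i in range(3)]
print(results[0] == results[1] == results[2])  # -> True
```

This is also why the rig being FIXED matters: the moment the cameras move relative to each other, the stored H becomes stale and must be re-calibrated.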

What does "calibrate offline" mean, and how does it help?

Thanks for any advice and help.


Answer 1:


As Chris wrote:

However, your points are not restricted to a specific plane as they are 
imaging a 3D scene. If you wanted to calibrate offline, you could image 
a chessboard with both cameras, and the detected corners could be used
in this function.

Calibrating offline means that you use a calibration pattern that is easy to detect (such as a chessboard), and compute the transformation matrix from it once. After this calibration, you apply this (previously computed) matrix to the acquired images; it should work for you.
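To make the "compute transformation matrix" step concrete, here is a hedged pure-Python sketch of a direct linear transform (DLT) that recovers the 3×3 homography from four point correspondences. The correspondences are synthetic, generated from a made-up known H, standing in for chessboard corners detected in both cameras; in practice you would use `cv2.findChessboardCorners` and `cv2.findHomography` instead.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def find_homography(src, dst):
    """DLT: recover H (with H[2][2] fixed to 1) from 4 correspondences.
    Each pair (x, y) -> (u, v) contributes two linear equations in the
    eight unknown entries of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, x, y):
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Synthetic "chessboard corners": four points and their images under a known H.
true_H = [[1.2, 0.1, 30.0], [-0.05, 1.1, 12.0], [5e-4, 2e-4, 1.0]]
src = [(0, 0), (100, 0), (0, 100), (100, 100)]
dst = [apply_h(true_H, x, y) for x, y in src]

H = find_homography(src, dst)
# Once computed (offline), H maps ANY image-2 point, not just the corners:
u, v = apply_h(H, 50, 50)
eu, ev = apply_h(true_H, 50, 50)
print(abs(u - eu) < 1e-6 and abs(v - ev) < 1e-6)  # -> True
```

Fixing H[2][2] = 1 is fine whenever that entry is nonzero; OpenCV's `findHomography` instead solves the homogeneous system and, with the RANSAC method, also rejects outlier correspondences, which matters when the points come from real detections rather than a clean pattern.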



Source: https://stackoverflow.com/questions/11495132/multiple-camera-image-stitching
