I want to create a disparity image using two images from low-resolution USB cameras. I am using OpenCV 4.0.0. The frames I use are taken from a video. The results I am currently getting are very bad (see below).
Both cameras were calibrated and the calibration data was used to undistort the images. Is the poor quality caused by the low resolution of the left and right images?
Left:
Right:
To give a better idea, there is also an overlay of both images.
Overlay:
The values for the cv2.StereoSGBM_create() function are based on the example code that ships with OpenCV (located in OpenCV/samples/python/stereo_match.py).
I would be really thankful for any help or suggestions.
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# convert both images to grayscale
left = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# set up the disparity matcher
window_size = 3
min_disp = 16
num_disp = 112 - min_disp
stereo = cv2.StereoSGBM_create(minDisparity = min_disp,
                               numDisparities = num_disp,
                               blockSize = 16,
                               P1 = 8*3*window_size**2,
                               P2 = 32*3*window_size**2,
                               disp12MaxDiff = 1,
                               uniquenessRatio = 10,
                               speckleWindowSize = 100,
                               speckleRange = 32
                               )

# compute disparity (SGBM returns fixed-point values scaled by 16)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# display the computed disparity image
plt.imshow(disparity, 'gray')
plt.show()
Most stereo algorithms require the input images to be rectified. Rectification transforms images so that corresponding epipolar lines are corresponding horizontal lines in both images. For rectification, you need to know both intrinsic and extrinsic parameters of your cameras.
OpenCV has all the tools required to perform both calibration and rectification. If you need to perform calibration, you need to have a calibration pattern (chessboard) available as well.
In short:
- Compute intrinsic camera parameters using calibrateCamera().
- Use the intrinsic parameters with stereoCalibrate() to perform extrinsic calibration of the stereo pair.
- Using the parameters from stereoCalibrate(), compute rectification parameters with stereoRectify().
- Using the rectification parameters, calculate the maps used for rectification and undistortion with initUndistortRectifyMap().
Now your cameras are calibrated and you can perform rectification and undistortion using remap() for images taken with the camera pair (as long as the cameras do not move relative to each other). The rectified images calculated by remap() can then be used to compute disparity images.
Additionally, I recommend checking out some relevant text book on the topic. Learning OpenCV: Computer Vision with the OpenCV Library has a very practical description of the process.
I agree with @Catree's comment and @sebasth's answer, mainly because your images are not rectified at all.
However, another issue may occur and I would like to warn you about this. I tried to leave a comment on @sebasth's answer, but I can't comment yet...
As you said you are using low-resolution USB cameras, it makes me believe these cameras have rolling shutter sensors. For scenes in motion and in constant change, global shutter cameras are the ideal. This is especially relevant if you intend to use this for scenes in motion.
With rolling shutter sensors you will also have to be careful about camera synchronization. Stereo matching can still work with rolling shutter cameras, but you will need to take care to synchronize them, preferably in a controlled environment (even with little change in lighting).
Also remember to turn off the automatic camera parameters, like "White Balance" and especially "Exposure".
Best regards!
Source: https://stackoverflow.com/questions/58150354/image-processing-bad-quality-of-disparity-image-with-opencv