This question was asked a long time ago, but I think the answer can be refreshed nowadays. Here are some tips.
By definition, image rectification is the process of transforming two or more images onto a common image plane. This simplifies the problem of finding matching points between images and, in turn, improves the performance of many applications such as depth map extraction.
To perform rectification, you must know the extrinsic camera parameters, the intrinsic camera parameters, and the distortion parameters. The extrinsics describe the relative pose between the cameras, the intrinsics convert pixel-frame coordinates to camera-frame coordinates, and the distortion coefficients are used to remove radial distortion. Below are some tools to estimate these parameters (see the calibration sketch after this list):
- MATLAB stereo camera calibration
- MicMac
- OpenCV
- Kalibr
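If you go the OpenCV route, a minimal sketch of the calibration step could look like the following. It assumes a hypothetical set of matching left/right checkerboard images under `calib/` and a 9x6 checkerboard with 25 mm squares; adjust those to your own setup.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry (inner corners and square size) -- assumptions, adjust to your target
PATTERN_SIZE = (9, 6)
SQUARE_SIZE = 0.025  # metres

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((PATTERN_SIZE[0] * PATTERN_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN_SIZE[0], 0:PATTERN_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, left_points, right_points = [], [], []

# Hypothetical file layout: matching left/right image pairs
left_files = sorted(glob.glob("calib/left_*.png"))
right_files = sorted(glob.glob("calib/right_*.png"))

for lf, rf in zip(left_files, right_files):
    left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(left, PATTERN_SIZE)
    ok_r, corners_r = cv2.findChessboardCorners(right, PATTERN_SIZE)
    if ok_l and ok_r:
        obj_points.append(objp)
        left_points.append(corners_l)
        right_points.append(corners_r)

img_size = left.shape[::-1]  # (width, height)

# Intrinsics + distortion coefficients for each camera separately
_, K1, D1, _, _ = cv2.calibrateCamera(obj_points, left_points, img_size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_points, right_points, img_size, None, None)

# Extrinsics (R, T) between the two cameras, keeping the intrinsics fixed
rms, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_points, left_points, right_points,
    K1, D1, K2, D2, img_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

print("stereo calibration RMS reprojection error:", rms)
```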
Once you know these parameters, you can perform rectification for each camera pair separately (a minimal sketch follows the list):
- the rectification process in OpenCV: check stereoRectify.
- the same algorithm is already implemented in the ROS stereo image pipeline.
- the Computer Vision System Toolbox in MATLAB, and several other tutorials, such as the one by Du Huynh.
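With OpenCV, a short sketch of the rectification itself could look like this. It reuses `K1, D1, K2, D2, R, T, img_size` from the calibration sketch above and hypothetical image names `scene_left.png` / `scene_right.png`:

```python
import cv2

# Rectification rotations (R1, R2), projection matrices (P1, P2)
# and the disparity-to-depth mapping Q
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, img_size, R, T, alpha=0)

# Per-camera lookup maps that undistort and rectify in one pass
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, img_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, img_size, cv2.CV_32FC1)

# Apply the maps to a (hypothetical) stereo image pair
left = cv2.imread("scene_left.png")
right = cv2.imread("scene_right.png")
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```

After remapping, corresponding points lie on the same image row, so a stereo matcher only has to search along a single scanline, which is exactly the simplification that rectification is meant to provide.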