I am trying to get a 3x4 camera matrix for the triangulation process, but calibrateCamera() returns only a 3x3 matrix and a 4x1 matrix.
If you are using calibrateCamera(), you must be getting mtx, rvecs and tvecs. Each rvec is 3x1, which you need to convert to a 3x3 rotation matrix using OpenCV's Rodrigues method. So the final code will look something like:
import cv2
import numpy as np

# Convert the 3x1 rotation vector of the first image to a 3x3 rotation matrix
R = cv2.Rodrigues(rvecs[0])[0]
# P = mtx * [R|t]: stack t as a fourth column, giving a 3x4 projection matrix
P = np.column_stack((np.matmul(mtx, R), tvecs[0]))
Assuming you used multiple images for calibrating the camera, here I am using only the first one to get the P matrix for the first image. For any other image, use rvecs[IMAGE_NUMBER] and tvecs[IMAGE_NUMBER] to build the corresponding P matrix.
calibrateCamera() returns you
a 3x3 matrix as cameraMatrix,
a 4x1 matrix as distCoeffs,
and rvecs and tvecs, which are vectors of 3x1 rotation (R) and 3x1 translation (t) vectors.
What you want is the ProjectionMatrix, which is [cameraMatrix] multiplied by [R|t].
That gives you a 3x4 ProjectionMatrix.
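To make the shapes concrete, here is a minimal NumPy sketch of P = cameraMatrix * [R|t] with made-up values (R would normally come from cv2.Rodrigues on an rvec), plus a projection of one 3D point through it:

```python
import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])       # 3x3 cameraMatrix
R = np.eye(3)                       # 3x3 rotation matrix
t = np.array([[0.], [0.], [1.]])    # 3x1 translation vector

P = K @ np.hstack([R, t])           # [R|t] is 3x4, so P is 3x4

# Project a homogeneous 3D point on the optical axis into the image
X = np.array([0., 0., 4., 1.])
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]     # lands on the principal point (320, 240)
```

The hstack here is the same trick as np.column_stack in the other answer: it appends t as a fourth column to R before multiplying by the intrinsics.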
You can read OpenCV documentation for more info.