matlab-cvst

Compute fundamental matrix without point correspondences?

戏子无情 submitted on 2019-12-06 02:26:25
Question: I would like to verify that my understanding of the fundamental matrix is correct, and whether it is possible to compute F without using any corresponding point pairs. The fundamental matrix is calculated as F = inv(transpose(Mr))*R*S*inv(Ml), where Mr and Ml are the right and left intrinsic camera matrices, R is the rotation matrix that brings the right coordinate system onto the left one, and S is the skew-symmetric matrix

    S = [   0    -T[3]   T[2]
          T[3]     0    -T[1]
         -T[2]   T[1]     0  ]

where T is the translation vector of the right coordinate…
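A minimal MATLAB sketch of that computation, assuming Ml and Mr (3x3 intrinsics), R (3x3 rotation) and T (3x1 translation) already come from a stereo calibration; all variable names here are placeholders, not the asker's code:

    S = [  0    -T(3)   T(2);
          T(3)    0    -T(1);
         -T(2)   T(1)    0  ];        % skew-symmetric matrix built from T
    E = R * S;                        % essential matrix
    F = inv(Mr') * E * inv(Ml);       % fundamental matrix, no correspondences needed
    F = F / F(3,3);                   % optional: fix the arbitrary overall scale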

Correct lens distortion using single calibration image in Matlab

让人想犯罪 __ submitted on 2019-12-06 02:08:32
I would like to correct lens distortion on a series of images. All the images were captured with the camera fixed in place, and a checkerboard image from the same setup is also available. After detecting the corners of the distorted checkerboard image, I would like to compute the radial distortion coefficients so that I can correct the images, similar to the estimateCameraParameters function. Ideally, I would like to use a method similar to the Matlab camera calibration workflow; however, this does not seem to work for cases where only a single calibration image is available (and the images were all…
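For reference, a sketch of the standard checkerboard workflow collapsed to one image, assuming the Computer Vision System Toolbox; note that estimateCameraParameters is designed for multiple views, so a single image constrains the distortion coefficients only weakly (file name and square size are placeholders):

    I = imread('board.png');                     % the single calibration image
    [imagePoints, boardSize] = detectCheckerboardPoints(I);
    squareSize = 25;                             % square size in millimetres
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    params = estimateCameraParameters(imagePoints, worldPoints, ...
                 'EstimateTangentialDistortion', false);
    J = undistortImage(I, params);               % apply the estimated correction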

Identifying a skin disease using image processing

痴心易碎 submitted on 2019-12-05 21:42:51
I currently have 2 separate data sets that belong to 2 different skin diseases. I have drawn an abstract image differentiating the 2 diseases in MS Paint. Disease 1 tends to be rounder in shape than Disease 2, and there is a texture difference as well. Using texture filters and segmentation functions in Matlab, I am able to locate the disease region (and draw a border around it) for both Disease 1 and Disease 2. My question is: how can I differentiate between the 2 diseases? Are there functions I can use, or am I better off using some form of machine learning on the data sets? Any advice at all is…
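One possible route, sketched below under the assumption that the segmentation step already yields a binary lesion mask BW and a grayscale image I; the shape and texture descriptors and the SVM are illustrative choices, not the only option:

    stats = regionprops(BW, 'Eccentricity', 'Solidity');   % roundness-related shape cues
    tex   = entropyfilt(I);                                % local texture measure
    feat  = [stats(1).Eccentricity, stats(1).Solidity, mean(tex(BW))];
    % Collect feat for every labelled image into a matrix X (one row per
    % image) with labels y, then train a classifier, for example an SVM:
    % mdl = fitcsvm(X, y);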

CV: Difference between MATLAB and OpenCV camera calibration techniques

笑着哭i submitted on 2019-12-05 21:31:53
I calibrated a camera with a checkerboard pattern using OpenCV and MATLAB. I got 0.489 and 0.187 for the mean re-projection errors in OpenCV and MATLAB respectively. From the looks of it, MATLAB is more precise. But my adviser feels both MATLAB and OpenCV use the same Bouguet algorithm and should report the same error (or close to it). Is that so? Can someone explain the difference between the MATLAB and OpenCV camera calibration methods? Thanks! Your adviser is correct in that both MATLAB and OpenCV use essentially the same calibration algorithm. However, MATLAB uses the Levenberg-Marquardt non-linear least squares…
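For what it's worth, MATLAB exposes its error directly, so the two numbers can at least be compared on the same definition; a small sketch assuming cameraParams came from estimateCameraParameters:

    cameraParams.MeanReprojectionError     % scalar mean error, in pixels
    showReprojectionErrors(cameraParams);  % per-image breakdown as a bar chart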

silhouette extraction from depth

白昼怎懂夜的黑 submitted on 2019-12-05 19:34:01
Hello, I have a depth image and I want to extract the person (human) silhouette from it. I used pixel thresholding like this:

    for i = 1:240
        for j = 1:320
            if b(i,j) > 2400 || b(i,j) < 1900
                c(i,j) = 5000;
            else
                c(i,j) = b(i,j);
            end
        end
    end

but there is some part left. Is there any way to remove that? Original_image: Extracted_silhouette: Shai: According to this thread, depth map boundaries can be found based on the direction of estimated surface normals. To estimate the direction of the surface normals, you can

    [dzx, dzy] = gradient( depth_map ); %// horizontal and vertical derivatives of depth map
    n = cat( 3, dzx, …
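As an aside, the same thresholding can be written vectorized, and morphological cleanup often removes the leftover fragments; a sketch assuming b is the depth image and reusing the 1900-2400 depth band from the question (bwareafilt needs the Image Processing Toolbox):

    mask = b >= 1900 & b <= 2400;      % pixels inside the person's depth band
    mask = imfill(mask, 'holes');      % close interior holes
    mask = bwareafilt(mask, 1);        % keep only the largest connected blob
    c = b;
    c(~mask) = 5000;                   % flatten everything outside the silhouette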

Calibration of images to obtain a top-view for points that lie on a same plane

帅比萌擦擦* submitted on 2019-12-04 19:21:53
Question: Calibration: I have calibrated the camera using this vision toolbox in Matlab. I used checkerboard images to do so. After calibration I get cameraParams, which contains:

    Camera Extrinsics
        RotationMatrices: [3x3x18 double]
        TranslationVectors: [18x3 double]

    Camera Intrinsics
        IntrinsicMatrix: [3x3 double]
        FocalLength: [1.0446e+03 1.0428e+03]
        PrincipalPoint: [604.1474 359.7477]
        Skew: 3.5436

Aim: I have recorded trajectories of some objects in motion using this camera. Each object…
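Since all the trajectory points lie on one plane, the extrinsics of a calibration view of that plane can map pixels straight to plane coordinates; a sketch assuming the plane coincides with the checkerboard in view k and imagePoints is an N-by-2 matrix of tracked pixels:

    k = 1;                                       % index of the reference view
    R = cameraParams.RotationMatrices(:,:,k);
    t = cameraParams.TranslationVectors(k,:);
    worldPoints = pointsToWorld(cameraParams, R, t, imagePoints);  % N-by-2 plane coords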

camera calibration MATLAB toolbox

若如初见. submitted on 2019-12-04 11:53:31
Question: I have to perform re-projection of my 3D points (I already have the data from Bundler). I am using the Camera Calibration Toolbox in MATLAB to get the intrinsic camera parameters. I got output like this from 27 images (chessboard; the images are taken from different angles).

    Calibration results after optimization (with uncertainties):
        Focal Length:    fc = [ 2104.11696 2101.75357 ] ± [ 23.13283 22.92478 ]
        Principal point: cc = [ 969.15779 771.30555 ] ± [ 21.98972 15.25166 ]
        Skew:            alpha_c = [ 0.00000 ] ± [ …
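A minimal re-projection sketch under the usual pinhole model, building K from the toolbox output above; R (3x3), t (3x1) and the 3-by-N world-point matrix X are assumed to come from Bundler, and the zero-skew assumption matches alpha_c = 0 (the elementwise division uses implicit expansion, R2016b+):

    fc = [2104.11696 2101.75357];
    cc = [969.15779 771.30555];
    K  = [fc(1) 0 cc(1); 0 fc(2) cc(2); 0 0 1];   % intrinsic matrix, zero skew
    x  = K * (R * X + t);                         % project into the image plane
    x  = x(1:2,:) ./ x(3,:);                      % divide by depth -> pixel coords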

How can I undistort an image in Matlab using the known camera parameters?

这一生的挚爱 submitted on 2019-12-04 09:25:41
Question: This is easy to do in OpenCV; however, I would like a native Matlab implementation that is fairly efficient and can be easily changed. The method should be able to take the camera parameters as specified in the above link.

Answer 1: You can now do that as of release R2013b, using the Computer Vision System Toolbox. There is a GUI app called Camera Calibrator and a function undistortImage.

Answer 2: The simplest and most common way of doing undistortion (also called unwarping or compensating for lens distortion…
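A sketch of Answer 1's route for the case where the parameters are already known from elsewhere (say, an OpenCV calibration); the numbers below are placeholders, and note that MATLAB stores IntrinsicMatrix transposed relative to the usual K, hence the K':

    fx = 1044.6; fy = 1042.8; cx = 604.1; cy = 359.7;   % example intrinsics
    K = [fx 0 cx; 0 fy cy; 0 0 1];
    params = cameraParameters('IntrinsicMatrix', K', ...
                              'RadialDistortion', [-0.35 0.12]);  % placeholder k1, k2
    J = undistortImage(imread('photo.jpg'), params);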

Bag of features and Neural Networks in Matlab

会有一股神秘感。 submitted on 2019-12-04 09:06:36
I've been trying to implement a neural network in Matlab that is capable of recognizing images based on their features. I am attempting to use the bag of features/words approach to obtain a discrete vector of features that I can then feed into my neural network. I have been using this example as a guide - http://in.mathworks.com/help/vision/examples/image-category-classification-using-bag-of-features.html One line in the code (featureVector = encode(bag, img);) counts the word occurrences in an image. Could I use this featureVector matrix to train my neural network? And would I have to…
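The encoded histograms can indeed serve as network inputs; a hedged sketch assuming an imageDatastore imds with one labelled folder per category (patternnet wants features in columns and one-hot targets, and grp2idx/dummyvar need the Statistics Toolbox):

    bag = bagOfFeatures(imds);             % build the visual vocabulary
    X   = encode(bag, imds);               % numImages-by-vocabularySize histograms
    T   = dummyvar(grp2idx(imds.Labels))'; % one-hot targets, classes-by-numImages
    net = patternnet(20);                  % one hidden layer of 20 units
    net = train(net, X', T);               % samples must be columns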

Creating stereoParameters class in Matlab: what coordinate system should be used for relative camera rotation parameter?

北战南征 submitted on 2019-12-04 06:04:37
stereoParameters takes two extrinsic parameters: RotationOfCamera2 and TranslationOfCamera2. The problem is that the documentation is not very detailed about what RotationOfCamera2 really means; it only says: "Rotation of camera 2 relative to camera 1, specified as a 3-by-3 matrix." What is the coordinate system in this case? A rotation matrix can be specified in any coordinate system. What exactly does "the coordinate system of Camera 1" mean? What are its x, y, z axes? In other words, if I calculate the essential matrix, how can I get the corresponding RotationOfCamera2 and…
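As far as I can tell from the toolbox documentation, MATLAB's computer vision functions use a post-multiply (row-vector) convention, so a point P1 in camera 1's frame maps to camera 2's frame as P2 = P1 * RotationOfCamera2 + TranslationOfCamera2; that makes RotationOfCamera2 the transpose of the usual pre-multiplying R from x2 = R*x1 + t. A sketch under that assumption, starting from an essential-matrix decomposition in the column-vector convention:

    RotationOfCamera2    = R';    % transpose into MATLAB's post-multiply form
    TranslationOfCamera2 = t';    % 1-by-3 row vector
    stereoParams = stereoParameters(cameraParams1, cameraParams2, ...
                                    RotationOfCamera2, TranslationOfCamera2);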