Question
I am trying to find the fundamental matrix between 2 images and then transform them using RANSAC. I first use SIFT to detect keypoints and then apply RANSAC:
img1 = cv2.imread("im0.png", 0) # queryImage
img2 = cv2.imread("im1.png", 0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
src = np.float32([points.pt for points in kp1]).reshape(-1, 1, 2)
dst = np.float32([points.pt for points in kp2]).reshape(-1, 1, 2)
H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
However, when I try to execute this, I just get the error shown below:
Traceback (most recent call last):
File "sift2.py", line 29, in <module>
M, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
cv2.error: /tmp/opencv3-20161119-29160-138ov36/modules/calib3d/src/fundam.cpp:349: error: (-215) src.checkVector(2) == dst.checkVector(2) in function findHomography
Whereas when I follow the tutorial at this link: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html, the code runs without errors.
Any help is appreciated!
Answer 1:
You're not paying attention to, or not understanding, what they're explaining. I recommend you read the full tutorials. You never matched the keypoints found by SIFT in one image to the keypoints found by SIFT in the second image.
import cv2
import numpy as np
#import matplotlib.pyplot as plt
# explicit is better than implicit: cv2.IMREAD_GRAYSCALE is better than 0
img1 = cv2.imread("img0.png", cv2.IMREAD_GRAYSCALE) # queryImage
img2 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE) # trainImage
#CV doesn't hold hands, do the checks.
if (img1 is None) or (img2 is None):
    raise IOError("No files {0} and {1} found".format("img0.png", "img1.png"))
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
Make sure you do the checks for SIFT just like you do for the images, especially if you plan on building an app out of this, because in older cv versions it's cv2.SIFT().
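For example, here is a minimal, version-tolerant constructor sketch (my addition, not from the original answer; the exact module layout depends on how your OpenCV build was compiled, so verify it against your installation):
def create_sift():
    if hasattr(cv2, "SIFT_create"):          # OpenCV >= 4.4, SIFT back in the main module
        return cv2.SIFT_create()
    if hasattr(cv2, "xfeatures2d"):          # OpenCV 3.x / early 4.x with the contrib package
        return cv2.xfeatures2d.SIFT_create()
    return cv2.SIFT()                        # OpenCV 2.4.x
sift = create_sift()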
Now the crux of your problem is that SIFT just finds neat keypoints and their descriptors. They're not automagically compared to the second image; we need to do that ourselves. What SIFT really does is neatly explained in this tutorial. When in doubt, draw! Or print. It should be fairly obvious what SIFT actually gives out like this. Uncomment the matplotlib import at the start of the script.
tmp1 = cv2.drawKeypoints(img1, kp1, None)  # the explicit output-image argument is required in cv 3.x
tmp2 = cv2.drawKeypoints(img2, kp2, None)
plt.imshow(tmp1)
plt.show()
plt.imshow(tmp2)
plt.show()
This is the part that actually compares the images. It goes through the descriptors of the first image and, for each one, finds the k (here 2) nearest descriptors in the second image based on some distance calculation. It's a bit vague on the technical details, but it's neatly explained in this tutorial. This is the part you don't have in your example.
# FLANN index and search parameters, as in the feature-matching tutorial
index_params = dict(algorithm = 0, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
# for every descriptor in des1, find its 2 nearest neighbours in des2
matches = flann.knnMatch(des1, des2, k=2)
# cast to a numpy array so we can slice out the best match of each pair
matches = np.asarray(matches)
Now we can create the lists of points in both images and calculate the perspective transformation. Again, CV doesn't babysit: a perspective transform can't be found from fewer than 4 points, so make the checks and raise yourself nicer errors than CV does. Future self will be thankful. Matches are returned as a list of tuples [(a,aa), (b,bb)...] and we only want the single letters (the best match of each pair); it's faster to cast the list to a numpy array and use slicing than to use for loops like in their examples (I'm guessing; when in doubt, test).
if len(matches[:,0]) >= 4:
    src = np.float32([ kp1[m.queryIdx].pt for m in matches[:,0] ]).reshape(-1,1,2)
    dst = np.float32([ kp2[m.trainIdx].pt for m in matches[:,0] ]).reshape(-1,1,2)
    H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
else:
    raise AssertionError("Can't find enough keypoints.")
This works for me with their example images in OpenCV 2.4.9. I hope it's directly transferable to CV3, but I have no way of checking. For their example cereal image I get:
>>> H
array([[ 4.71257834e-01, -1.93882419e-01, 1.18225742e+02],
[ 2.47062711e-02, 3.79364095e-01, 1.60925457e+02],
[ -1.21517456e-04, -4.95488261e-04, 1.00000000e+00]])
which seems within reason.
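If you then want to actually warp one image onto the other (the "transform them" part of the question), a minimal sketch using the estimated H could look like this (my addition, just the standard cv2 call, not part of the original answer):
h, w = img2.shape[:2]
# warp the query image into the train image's frame with the estimated homography
warped = cv2.warpPerspective(img1, H, (w, h))
cv2.imwrite("warped.png", warped)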
The answer to the posted comment; it doesn't fit in another comment.
They do the exact same thing as we do, they're just a bit more explicit about it. To see what happens, you need to pay close attention to the following lines in your posted link:
- lines 244-255 define the names of available algorithms
- lines 281-284 select the detector, descriptor and matcher algorithm
- lines 289-291 instantiate the detector, descriptor and matcher algorithm
- on line 315 first image keypoints are found
- on line 316 first image keypoints descriptors are calculated
- on line 328 second image keypoints are found
- on line 329 second image keypoints descriptors are calculated
- on line 330 first and second image keypoints and descriptors are matched
- then some perspective transform is done
We do exactly the same thing -> first we find keypoints and calculate their descriptors, and then we match them. They find keypoints and then calculate descriptors (2 lines), but in Python the ALG.detectAndCompute method already returns the keypoints and descriptors, so there's no need for separate invocations like they do. Check it out: in the first loop iteration, when i=0 (which means i*4+n = n), you have:
static const char* ddms[] =
{
    "ORBX_BF", "ORB", "ORB", "BruteForce-Hamming",
    //   0       1      2            3
    // ... shortened for brevity ...
    0
    //   4
};
Which means that
const char* name = ddms[i*4]; // --> ORBX_BF
const char* detector_name = ddms[i*4+1]; // --> ORB
const char* descriptor_name = ddms[i*4+2]; // --> ORB
const char* matcher_name = ddms[i*4+3]; // --> BruteForce-Hamming
..... // shortened
Ptr<FeatureDetector> detector = FeatureDetector::create(detector_name);
Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create(descriptor_name);
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(matcher_name);
which corresponds in Python to:
orb = cv2.ORB_create()
detector = orb.detect # the pythonic way would be to just call
descriptor = orb.compute # orb.detectAndCompute
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
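In other words, the two-step C++ flow and the single Python call are roughly equivalent; a quick sketch of the equivalence (my illustration, reusing the orb and img1 names from above):
# two-step, mirroring the C++ sample
kp = orb.detect(img1, None)
kp, des = orb.compute(img1, kp)
# single step, the pythonic shortcut
kp, des = orb.detectAndCompute(img1, None)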
So they did all the same steps as I did (that is, as the example did), except they used the ORB detector and descriptor calculator and a BruteForceMatcher with the Hamming norm as the distance measure. See the ORB tutorial and the BFMatcher tutorial.
I just used the SIFT detector and descriptor calculator with a FlannBasedMatcher; that is the only difference. All the other steps are the same.
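For completeness, here is a rough sketch (my illustration, not code from the original post) of that ORB + BruteForce-Hamming pipeline in Python. It assumes OpenCV 3.x naming (cv2.ORB_create); on 2.4.x use cv2.ORB() instead:
import cv2
import numpy as np

img1 = cv2.imread("img0.png", cv2.IMREAD_GRAYSCALE)  # queryImage
img2 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)  # trainImage

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# ORB descriptors are binary strings, so Hamming distance is the natural metric
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

if len(matches) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
else:
    raise AssertionError("Can't find enough keypoints.")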
Source: https://stackoverflow.com/questions/42538914/why-is-ransac-not-working-for-my-code