Question
I have followed the OpenCV Feature Detection and Description tutorial and used SIFT and other algorithms in OpenCV to find matching feature points between two images. From what I understood, these algorithms can find the similar regions between two images. But I am interested in identifying the different or dissimilar regions.
How can I draw all the NON-MATCHING feature points on both images? Furthermore, can I draw boundaries around these non-matching points to show which regions in the two images are different?
I am using Python on Windows 7, with OpenCV built from the latest source.
Answer 1:
- Draw all the NON-MATCHING feature points on both images:
This task is pretty straightforward once you know the structure of the DMatch objects produced by matching two sets of descriptors (matches = bf.match(des1, des2)). The two DMatch properties relevant to this problem are:
- DMatch.trainIdx: index of the descriptor (i.e. of the keypoint from the train image) in the train descriptors
- DMatch.queryIdx: index of the descriptor (i.e. of the keypoint from the query image) in the query descriptors
Knowing this, and as @uzair_syed said, it is just a simple list-operations task.
- Draw boundaries around the non-matching points:
To achieve this, I would do something like this:
- Create a black mask with a white pixel for each non-matching point.
- Depending on the density of the non-matching points' clusters, dilate the mask with a big kernel (e.g. 15 x 15 px).
- Erode the mask with a kernel of the same size.
- Finally, apply the findContours function on the mask to get the boundaries of the non-matching regions.
For more information, you can have a look at this question and its answer.
Hope it gets you on the right track!
Answer 2:
It turned out to be a simple list-operations task. Here is my Python code:
# code copied from
# http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html
import numpy as np
import cv2
from matplotlib import pyplot as plt

MIN_MATCH_COUNT = 10

img1 = cv2.imread('Src.png', 0)   # queryImage
img2 = cv2.imread('Dest.png', 0)  # trainImage

# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# store all the good matches as per Lowe's ratio test
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    kp1_matched = [kp1[m.queryIdx] for m in good]
    kp2_matched = [kp2[m.trainIdx] for m in good]
    kp1_miss_matched = [kp for kp in kp1 if kp not in kp1_matched]
    kp2_miss_matched = [kp for kp in kp2 if kp not in kp2_matched]

    # draw only the locations of mismatched or non-matched keypoints
    img1_miss_matched_kp = cv2.drawKeypoints(img1, kp1_miss_matched, None, color=(255, 0, 0), flags=0)
    plt.imshow(img1_miss_matched_kp), plt.show()

    img2_miss_matched_kp = cv2.drawKeypoints(img2, kp2_miss_matched, None, color=(255, 0, 0), flags=0)
    plt.imshow(img2_miss_matched_kp), plt.show()

    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    h, w = img1.shape
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
else:
    print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
    matchesMask = None
Source: https://stackoverflow.com/questions/43775842/opencv-draw-non-matching-points