As mentioned in other answers, there are several methods to remove outliers and bad matches. I guess you found samples and tutorials that use match instead of knnMatch while applying some of those methods.
As you may know, the difference is that knnMatch returns the n best matches in descriptor2 for each descriptor in descriptor1. This means that instead of a list of matches you get a list of lists of matches, which I guess is the reason why you had problems.
The main advantage of using knnMatch is that you can perform a ratio test: if the distances from one descriptor in descriptor1 to its two best matches in descriptor2 are similar, this suggests that there are repetitive patterns in your images (e.g. the tips of a picket fence in front of grass). Such matches are not reliable and should be removed. (I am not sure why you search for the five best matches per descriptor by passing 5 to knnMatch; two are enough for the ratio test.)
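For reference, here is a minimal sketch of the matching step with k = 2, assuming ORB descriptors (so a Hamming-distance brute-force matcher is appropriate) and that descriptors1 and descriptors2 hold your computed descriptor Mats:

// ORB produces binary descriptors, so use a brute-force matcher with Hamming distance
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
List<MatOfDMatch> matches = new LinkedList<MatOfDMatch>();
// k = 2: keep only the two best matches per descriptor, which is all the ratio test needs
matcher.knnMatch(descriptors1, descriptors2, matches, 2);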
If you now want to access only the best match for each descriptor, you just have to access the first element of each "sublist". Below you will find, as an example, a ratio test and a homography estimation using RANSAC (I replaced everything after your knnMatch call):
// ratio test: keep a match only if the best distance is clearly smaller than the second-best
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
for (MatOfDMatch matOfDMatch : matches) {
    DMatch[] dmatches = matOfDMatch.toArray();
    if (dmatches.length >= 2 && dmatches[0].distance / dmatches[1].distance < 0.9) {
        good_matches.add(dmatches[0]);
    }
}

// get keypoint coordinates of the good matches to find the homography and remove outliers using RANSAC
List<Point> pts1 = new ArrayList<Point>();
List<Point> pts2 = new ArrayList<Point>();
for (int i = 0; i < good_matches.size(); i++) {
    pts1.add(keypoints1.toList().get(good_matches.get(i).queryIdx).pt);
    pts2.add(keypoints2.toList().get(good_matches.get(i).trainIdx).pt);
}

// conversion of data types - there is maybe a more beautiful way
Mat outputMask = new Mat();
MatOfPoint2f pts1Mat = new MatOfPoint2f();
pts1Mat.fromList(pts1);
MatOfPoint2f pts2Mat = new MatOfPoint2f();
pts2Mat.fromList(pts2);

// Find homography - here just used to perform match filtering with RANSAC, but it could also be used to e.g. stitch images
// the smaller the allowed reprojection error (here 15), the more matches are filtered out
Mat homography = Calib3d.findHomography(pts1Mat, pts2Mat, Calib3d.RANSAC, 15, outputMask, 2000, 0.995);

// outputMask contains ones for inlier matches and zeros for the matches that were filtered out
LinkedList<DMatch> better_matches = new LinkedList<DMatch>();
for (int i = 0; i < good_matches.size(); i++) {
    if (outputMask.get(i, 0)[0] != 0.0) {
        better_matches.add(good_matches.get(i));
    }
}

// DRAWING OUTPUT
Mat outputImg = new Mat();
// this will draw all remaining (inlier) matches
MatOfDMatch better_matches_mat = new MatOfDMatch();
better_matches_mat.fromList(better_matches);
Features2d.drawMatches(img1, keypoints1, img2, keypoints2, better_matches_mat, outputImg);

// save the result image
Imgcodecs.imwrite("result.jpg", outputImg);
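If you want to go beyond match filtering, the same estimated homography can also be used to warp one image into the coordinate frame of the other, e.g. as a first step towards stitching. A minimal sketch, assuming the usual OpenCV imports (Imgproc, Size) and reusing homography, img1 and img2 from above; the output size (twice the width of img2) is just an illustrative choice so the warped content is not cut off:

// warp img1 into the coordinate frame of img2 using the estimated homography
Mat warped = new Mat();
Imgproc.warpPerspective(img1, warped, homography, new Size(img2.cols() * 2, img2.rows()));
Imgcodecs.imwrite("warped.jpg", warped);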
I hope this is sufficient as an example. Other filtering methods can be applied analogously. Do not hesitate to ask if you have further questions.
EDIT: Note that the homography-based filtering is only valid if most of your keypoints lie on the same plane in the scene (e.g. a wall).
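If your scene is not mostly planar, one common alternative (not part of the code above, but a standard OpenCV facility) is to filter the matches with a fundamental matrix estimated via RANSAC instead; the resulting inlier mask can be used exactly like outputMask above. A sketch with illustrative, untuned parameter values:

// RANSAC-based epipolar filtering for non-planar scenes
// (the reprojection threshold 3 and confidence 0.99 are illustrative values, not tuned)
Mat fundMask = new Mat();
Mat F = Calib3d.findFundamentalMat(pts1Mat, pts2Mat, Calib3d.FM_RANSAC, 3, 0.99, fundMask);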