I am reading a lot of posts about object detection using feature extraction (SIFT, etc.). After computing the descriptors on both images, to get good matches they call the matcher's knnMatch twice, swapping the query and train descriptor sets between the two calls. Since the distance between two descriptors is the same in both directions, why is the second call needed?
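The pattern in question looks roughly like the sketch below; the matcher type, the norm, and the value of k are my assumptions for illustration, not necessarily what those posts use.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <vector>

void matchBothWays(const cv::Mat& descriptors1, const cv::Mat& descriptors2)
{
    cv::BFMatcher matcher(cv::NORM_L2);   // L2 norm suits SIFT's float descriptors
    std::vector<std::vector<cv::DMatch> > matches12, matches21;

    // Direction 1 -> 2: descriptors1 is the query set, descriptors2 the train set
    matcher.knnMatch(descriptors1, descriptors2, matches12, 2);
    // Direction 2 -> 1: the same call with query and train swapped
    matcher.knnMatch(descriptors2, descriptors1, matches21, 2);
}
```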
I haven't dealt with the exact module you're using, but from the documentation:
http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html
It seems that the second call swaps the training set with the query set, and hence can give you a different set of results. This is also hinted at by the naming changing from matches12 to matches21 -- it's not a two-way association, but a directional one.
You see something similar when training on large datasets in machine learning, where training on one subset of the data gives better performance than training on a different subset. I don't know how the two match sets are used after the step you've shown, but I imagine it allows either choosing the better direction, or capturing better covariance information from the data.
The Euclidean distance is the same in both directions, but the ordering of the nested match vector changes, and hence the structure of the association changes between the two calls (unless descriptors1 == descriptors2, which would give a symmetric distance matrix and make the repeated call pointless).
Posting the uses of matches12 and matches21 later in the codebase would probably also clarify the exact reason for keeping both directional associations.
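To make the "direction" concrete, here is a small, hypothetical helper (not from the original code) that prints what each match set actually associates; in matches12 the queryIdx refers to descriptors1 and trainIdx to descriptors2, while in matches21 those roles are reversed.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: dump the direction of a knnMatch result.
// Each DMatch points from a query descriptor (queryIdx) to a train
// descriptor (trainIdx), so swapping the inputs swaps what the indices mean.
void printDirection(const std::vector<std::vector<cv::DMatch> >& matches,
                    const std::string& queryName,
                    const std::string& trainName)
{
    for (size_t i = 0; i < matches.size(); ++i)
    {
        if (matches[i].empty()) continue;
        const cv::DMatch& best = matches[i][0];
        std::cout << queryName << "[" << best.queryIdx << "] -> "
                  << trainName << "[" << best.trainIdx << "]"
                  << "  (distance " << best.distance << ")\n";
    }
}

// Usage: printDirection(matches12, "descriptors1", "descriptors2");
//        printDirection(matches21, "descriptors2", "descriptors1");
```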
Hopefully this gives a starting place for understanding the repeated call.
* EDIT *
I looked over the source code you linked to, and in answer to your question: the distance IS the same between two points, but "B is the closest point to A" is not an invertible statement. B may be the closest point to A while A is not the closest point to B.
As Frédéric's post shows graphically, it's not that the Euclidean distance changes between two points, but rather that the point with the closest Euclidean distance can change when you change the perspective of the comparison. Cross-checking therefore validates only the points that share a closest-point relationship (A is closest to B and B is closest to A).
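A tiny numeric example of that asymmetry, using toy 1-D "descriptors" (purely illustrative, not from the original code):

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Toy 1-D "descriptors": set 1 holds {0, 2}, set 2 holds only {3}.
    cv::Mat d1 = (cv::Mat_<float>(2, 1) << 0.f, 2.f);
    cv::Mat d2 = (cv::Mat_<float>(1, 1) << 3.f);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> m12, m21;

    matcher.match(d1, d2, m12);  // 0 -> 3 and 2 -> 3 (both query points land on 3)
    matcher.match(d2, d1, m21);  // 3 -> 2 only (0 is never anyone's nearest neighbour)

    std::cout << "1 -> 2: " << m12.size() << " matches\n";   // prints 2
    std::cout << "2 -> 1: " << m21.size() << " matches\n";   // prints 1
    // Only the pair (2, 3) is mutual, so a cross-check would keep just that match.
    return 0;
}
```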
Your intuition that this is not the best approach for large numbers of points is correct, though: a more sophisticated approach (for instance a spatial index or a heuristic-driven search) could find all such relationships in roughly O(N log N) time instead of O(N^2). But for a (relatively) small number of points, the expected performance gain is negligible and the implementation would be considerably more complicated.
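For what it's worth, OpenCV's FlannBasedMatcher already takes the index-based route (approximate nearest-neighbour search over KD-trees), so if the descriptor sets did get large, something along these lines could replace the brute-force matcher. This is a sketch of an alternative, not part of the code you posted, and the cross-check would still be applied on top of it:

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Sketch: an index is built over the train descriptors, so each query costs
// roughly O(log N) instead of the O(N) of a brute-force scan. Descriptors are
// assumed to be CV_32F, as SIFT produces.
void flannMatchBothWays(const cv::Mat& descriptors1, const cv::Mat& descriptors2,
                        std::vector<std::vector<cv::DMatch> >& matches12,
                        std::vector<std::vector<cv::DMatch> >& matches21)
{
    cv::FlannBasedMatcher matcher;
    matcher.knnMatch(descriptors1, descriptors2, matches12, 2);
    matcher.knnMatch(descriptors2, descriptors1, matches21, 2);
}
```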
You can't generally assume that the Euclidean distance will be used by your matcher. For instance, BFMatcher supports different norms: L1, L2, Hamming...
You can check the documentation for more details: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html
Anyway, all these distance measures are symmetric, so it doesn't matter which one you use to answer your question.
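For example, here is a minimal sketch of matching the norm to the descriptor type (the detector/descriptor pairings are illustrative assumptions):

```cpp
#include <opencv2/features2d/features2d.hpp>

// Float descriptors (e.g. SIFT, SURF) are usually compared with the L2 norm...
cv::BFMatcher floatMatcher(cv::NORM_L2);
// ...while binary descriptors (e.g. ORB, BRIEF) are compared with the Hamming distance.
cv::BFMatcher binaryMatcher(cv::NORM_HAMMING);
```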
And the answer is: calling knnMatch(A, B) is not the same as calling knnMatch(B, A).
If you don't trust me, I'll try to give you a graphical and intuitive explanation. For the sake of simplicity, I assume that knn == 1, so that for each queried descriptor the algorithm only finds 1 correspondence (much easier to plot :-).
I randomly picked a few 2D samples and created two data-sets (red & green). In the first plot, the green points are the query data-set, meaning that for each green point we try to find the closest red point (each arrow represents a correspondence).
In the second plot, the query & train data-sets have been swapped.
Finally, I also plotted the result of the crossCheckMatching() function, which only keeps the bi-directional matches. And as you can see, the output of crossCheckMatching() is much better than either single knnMatch(X,Y) / knnMatch(Y,X), since only the strongest correspondences have been kept.
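I don't know the exact crossCheckMatching() implementation you're looking at, but the core idea is simply to keep a correspondence only when it is mutual. A minimal sketch of that filter, assuming knn == 1:

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Keep (i -> j) from matches12 only if matches21 maps j back to i.
// Sketch only; the actual crossCheckMatching() being discussed may differ in detail.
void crossCheckFilter(const std::vector<std::vector<cv::DMatch> >& matches12,
                      const std::vector<std::vector<cv::DMatch> >& matches21,
                      std::vector<cv::DMatch>& filtered)
{
    filtered.clear();
    for (size_t i = 0; i < matches12.size(); ++i)
    {
        if (matches12[i].empty()) continue;
        const cv::DMatch& forward = matches12[i][0];          // best match for query i
        if (forward.trainIdx < 0 ||
            forward.trainIdx >= (int)matches21.size() ||
            matches21[forward.trainIdx].empty())
            continue;
        const cv::DMatch& backward = matches21[forward.trainIdx][0];
        if (backward.trainIdx == forward.queryIdx)            // mutual nearest neighbours
            filtered.push_back(forward);
    }
}
```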