Efficient way for SIFT descriptor matching

甜味超标 2021-02-09 14:18

There are two images, A and B. I extract the keypoints (a[i] and b[i]) from them.
How can I determine the matching between a[i] and b[j] efficiently?


3 Answers
  •  忘掉有多难
    2021-02-09 14:45

    The question is whether you actually want to determine a keypoint matching between the two images, or compute a similarity measure.

    If you want to determine a matching, then I'm afraid you will have to brute-force search through all possible descriptor pairs between the two images (there are more advanced methods such as FLANN, the Fast Library for Approximate Nearest Neighbors, but the speedup is not significant if you have fewer than around 2000 keypoints per image -- at least in my experience). To get more accurate matches (not faster, just better), take a look at the following; a code sketch of the basic scheme follows the list:

    • D.G. Lowe. Distinctive image features from scale-invariant keypoints -- see the comparison with the second-closest match (the ratio test)
    • J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos -- see the section on spatial consistency
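
    Below is a minimal sketch of this matching scheme using OpenCV's Python bindings (cv2, version 4.4+ where SIFT_create is available). The image paths and the 0.75 ratio threshold are assumptions; Lowe's paper suggests thresholds around 0.7-0.8.

        import cv2

        # Hypothetical input images; replace with your own files.
        img_a = cv2.imread("A.png", cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread("B.png", cv2.IMREAD_GRAYSCALE)

        # Detect keypoints and compute 128-dimensional SIFT descriptors.
        sift = cv2.SIFT_create()
        kp_a, desc_a = sift.detectAndCompute(img_a, None)
        kp_b, desc_b = sift.detectAndCompute(img_b, None)

        # Brute-force matcher with the L2 norm (appropriate for SIFT).
        # knnMatch returns the 2 nearest neighbours per query descriptor,
        # so the second-closest match is available for Lowe's ratio test.
        bf = cv2.BFMatcher(cv2.NORM_L2)
        knn_matches = bf.knnMatch(desc_a, desc_b, k=2)

        # Keep a match only if it is clearly better than the runner-up.
        good = []
        for pair in knn_matches:
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])

        print(f"{len(good)} matches kept out of {len(knn_matches)}")

        # To try FLANN instead, swap the matcher for, e.g.:
        # flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),
        #                               dict(checks=50))

    The structure stays the same either way; only the matcher object changes, which makes it easy to benchmark brute force against FLANN on your own data.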

    If, on the other hand, you want only a similarity measure over a large database, then the appropriate place to start would be the following (a toy sketch of the idea follows the references):

    • D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree -- they use a hierarchical structure called a vocabulary tree to compute a similarity measure between a query image and the images in a large database
    • J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos -- the same paper as above; it is very helpful for understanding the approach of Nistér and Stewénius
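
    To make the similarity idea concrete, here is a toy, flat bag-of-visual-words sketch (the vocabulary tree in Nistér and Stewénius is a hierarchical refinement of this, and both papers additionally weight words by TF-IDF). The vocabulary size, file names, and the use of scikit-learn's KMeans are all illustrative assumptions.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def sift_descriptors(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = cv2.SIFT_create().detectAndCompute(img, None)
            return desc

        # Hypothetical database and query images.
        paths = ["db1.png", "db2.png", "query.png"]
        descs = [sift_descriptors(p) for p in paths]

        # 1. Learn a visual vocabulary by clustering all descriptors.
        #    (A vocabulary tree would use hierarchical k-means here.)
        kmeans = KMeans(n_clusters=100, n_init=10).fit(np.vstack(descs))

        # 2. Represent each image as an L2-normalized histogram of
        #    visual-word occurrences.
        def bow_histogram(desc):
            words = kmeans.predict(desc)
            hist = np.bincount(words, minlength=kmeans.n_clusters)
            return hist / np.linalg.norm(hist)

        hists = [bow_histogram(d) for d in descs]

        # 3. Image similarity is the cosine between the histograms,
        #    which here reduces to a dot product.
        print("similarity(db1, query):", hists[0] @ hists[2])
        print("similarity(db2, query):", hists[1] @ hists[2])

    The hierarchical tree matters once the vocabulary grows to hundreds of thousands of words: quantizing a descriptor then costs a number of comparisons logarithmic in the vocabulary size instead of linear.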
