Matching similar images in OpenCV

自闭症患者 2021-01-15 23:43

I have two sets of images, {H} and {L}. {H} consists of 512x512 images. {L} consists of all of the images in {H}, but scaled down to somewhere between 32x32 and 128x128, with compression artifacts added. How can I match each image in {L} to its original in {H} using OpenCV?

3 Answers
  • 2021-01-15 23:47

    I have 15 years of experience doing image processing, and usually when I have to precisely align two nearly identical (but possibly slightly different) image layers in Photoshop, I set the top layer's blending mode to Exclusion (i.e. XOR). This makes the result all black wherever the pixels have exactly the same values.

    You could do something similar with OpenCV.

    Make sure you downsample the larger image to the dimensions of the smaller one using the same interpolation that was used to scale the thumbnail down. This can be Nearest Neighbour (i.e. pick every nth pixel on a grid, rounded to whole pixels or pixel boundaries) or Bicubic (each output pixel is interpolated from a 4x4 neighbourhood of source pixels). Nearest Neighbour is obviously faster...

    Then take a histogram of the difference image and compute some statistics on it, or even an FFT of the difference.
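
    A minimal OpenCV/Python sketch of this approach, assuming grayscale images and placeholder file names (the INTER_AREA flag is just a guess; use whatever interpolation the thumbnails were actually scaled down with):

        import cv2
        import numpy as np

        # Hypothetical file names; in practice you'd loop over both sets.
        full = cv2.imread("H/image.png", cv2.IMREAD_GRAYSCALE)   # 512x512 original
        thumb = cv2.imread("L/image.jpg", cv2.IMREAD_GRAYSCALE)  # 32x32 to 128x128 thumbnail

        # Downsample the original to the thumbnail's size. INTER_AREA is a
        # reasonable default for shrinking; use INTER_NEAREST or INTER_CUBIC
        # instead if you know how the thumbnails were generated.
        small = cv2.resize(full, (thumb.shape[1], thumb.shape[0]),
                           interpolation=cv2.INTER_AREA)

        # Exclusion-style comparison: per-pixel absolute difference.
        diff = cv2.absdiff(small, thumb)

        # Histogram and simple statistics of the difference; near zero = likely match.
        hist = cv2.calcHist([diff], [0], None, [256], [0, 256]).ravel()
        print("mean abs difference:", diff.mean())
        print("fraction of pixels within +/-5 of zero:", hist[:6].sum() / diff.size)

        # Optional: inspect the difference in the frequency domain.
        spectrum = np.abs(np.fft.fft2(diff.astype(np.float32)))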

  • 2021-01-15 23:50

    Well, it's an open problem to some extent. If you can assume there are no affine, perspective, or rotational transformations between the two sets, then the simple approach is to rescale both sets to a consistent size and perform a one-to-one match, say a correlation match. If you know a thing or two about image processing or computer vision, you could try more advanced methods like SURF, SIFT, or GIST to match the images. It really depends on what your needs are, and that would make it a more difficult task.
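
    As a rough sketch of the correlation-match idea (not necessarily what this answer had in mind), both images can be brought to a common size and compared with normalized cross-correlation via cv2.matchTemplate; the file paths and the 64x64 working size are assumptions:

        import cv2

        def correlation_score(full_path, thumb_path, size=(64, 64)):
            """Rescale both images to a common size and return a normalized
            cross-correlation score in [-1, 1]; higher means more similar."""
            a = cv2.imread(full_path, cv2.IMREAD_GRAYSCALE)
            b = cv2.imread(thumb_path, cv2.IMREAD_GRAYSCALE)
            a = cv2.resize(a, size, interpolation=cv2.INTER_AREA)
            b = cv2.resize(b, size, interpolation=cv2.INTER_AREA)
            # With an equally sized image and template, matchTemplate
            # returns a single correlation value.
            return float(cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0])

        # For each thumbnail, pick the original with the highest score, e.g.:
        # best = max(h_paths, key=lambda p: correlation_score(p, "L/thumb.jpg"))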

  • 2021-01-16 00:04

    Another, although maybe much slower, approach is to do Clustering by Compression (arXiv.org, PDF), perhaps using the JPEG coefficients as the model data to be compared rather than compressing the raw pixel data with some other compression method. Also see the articles citing that first paper on Google Scholar.

    Clustering by compression basically means compressing a file X using the (statistical) model from file Y and comparing the resulting size to the size of X compressed with its own model; the closer the two sizes, the more similar the files.
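
    In practice this is often approximated with the normalized compression distance (NCD) using an off-the-shelf compressor. A small sketch with zlib over resized pixel buffers follows; zlib and the 64x64 working size are assumptions here, not what the paper prescribes:

        import zlib

        import cv2

        def image_bytes(path, size=(64, 64)):
            # Bring both images to a common size so their byte streams are comparable.
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            return cv2.resize(img, size, interpolation=cv2.INTER_AREA).tobytes()

        def ncd(x, y, level=9):
            """Normalized compression distance: near 0 = very similar, near 1 = unrelated."""
            cx = len(zlib.compress(x, level))
            cy = len(zlib.compress(y, level))
            cxy = len(zlib.compress(x + y, level))
            return (cxy - min(cx, cy)) / max(cx, cy)

        # d = ncd(image_bytes("H/a.png"), image_bytes("L/a.jpg"))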

    As background on the idea of using different statistical models for compression: JPEG uses Huffman coding or arithmetic coding to entropy-code the quantized DCT coefficients.

    Yet another option, which may be much faster if the smaller images are not just downsampled and/or cropped versions, is to use the SIFT or SURF algorithms as suggested by Wajih.
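
    A hedged sketch of that approach with SIFT in OpenCV/Python (cv2.SIFT_create is available in recent builds; SURF needs the contrib modules). Note that very small thumbnails may not yield many keypoints; the paths and ratio threshold here are assumptions:

        import cv2

        def sift_match_count(full_path, thumb_path, ratio=0.75):
            """Count SIFT matches that pass Lowe's ratio test; more = better match."""
            a = cv2.imread(full_path, cv2.IMREAD_GRAYSCALE)
            b = cv2.imread(thumb_path, cv2.IMREAD_GRAYSCALE)
            sift = cv2.SIFT_create()
            _, da = sift.detectAndCompute(a, None)
            _, db = sift.detectAndCompute(b, None)
            if da is None or db is None:
                return 0
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = 0
            for pair in matcher.knnMatch(da, db, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    good += 1
            return good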
