Compare images to find differences

一生所求 · 2021-01-05 01:24

Task: I have a camera mounted on the end of our assembly line, which captures images of produced items. Let's say, for example, that we produce tickets (with some text and p

6 Answers
  • 2021-01-05 02:07

    I'd recommend looking at the AForge Imaging library, as it has a lot of really useful functions for this type of work.

    There are several methods you could use:

    1. Simple subtraction (template image minus current image), then see how many pixels differ. You'd probably want to threshold the result, i.e. only count pixels that differ by 10 or more (for instance).
    2. If the tickets can move about in the field of view, then method 1 isn't going to work unless you can locate the ticket first. If, for instance, the ticket is white on a black background, you could threshold the image, and that would give you a good idea of where the ticket is.
    3. Another technique I've used before is "Model Finding" or "Pattern Matching", but the only library I know of that contains these functions is the commercial Matrox Imaging Library (MIL), as they aren't trivial to implement.

    Also you need to make sure you know which parts of the ticket are more important. For instance I guess that a missing logo or watermark is a big problem. But some areas could have variable text, such as a serial number and so you'd expect them to be different. Basically you might need to treat some areas of the image differently from others.
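    The thresholded-subtraction idea from method 1 can be sketched in a few lines. This is a NumPy sketch rather than the C#/AForge code the answer has in mind; the function name `diff_pixel_count` is made up here, and the threshold of 10 follows the answer's example:

```python
import numpy as np

def diff_pixel_count(template, current, threshold=10):
    """Count pixels whose absolute difference exceeds `threshold`.

    Both inputs are expected as uint8 grayscale arrays of equal shape.
    Cast to a signed type first so the subtraction cannot wrap around.
    """
    diff = np.abs(template.astype(np.int16) - current.astype(np.int16))
    return int(np.count_nonzero(diff > threshold))

# Toy example: two 4x4 "images" differing in one pixel by 50 gray levels.
a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[2, 3] = 150
print(diff_pixel_count(a, b))  # 1 pixel above the threshold
```

    In practice you would compare the returned count against a pass/fail limit chosen for your product.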

  • 2021-01-05 02:07

    I'm not the expert in the field, but it sounds like you need something like this

    http://en.wikipedia.org/wiki/Template_matching

    And it appears OpenCV has support for template matching:
    http://nashruddin.com/template-matching-in-opencv-with-example.html
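    To illustrate what template matching does under the hood, here is a deliberately naive NumPy sketch: it slides the template over the image and returns the offset with the smallest sum of squared differences. OpenCV's `cv2.matchTemplate` does this far more efficiently; this brute-force scan is only meant to show the idea.

```python
import numpy as np

def match_template_ssd(image, template):
    """Slide `template` over `image` and return the (row, col) offset
    with the smallest sum of squared differences (SSD)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(np.int32)
            ssd = int(((patch - template.astype(np.int32)) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy example: hide a bright 2x2 patch at (3, 5) and find it again.
img = np.zeros((8, 10), dtype=np.uint8)
img[3:5, 5:7] = 200
tmpl = np.full((2, 2), 200, dtype=np.uint8)
print(match_template_ssd(img, tmpl))  # (3, 5)
```

    This is how the ticket could be located in the frame before any pixel-by-pixel comparison, which addresses the "ticket can move about" problem from the first answer.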

  • 2021-01-05 02:07

    There are surely applications and libraries out there that already do what you are attempting, but I don't know of any offhand. Obviously, you could hash the two images and compare the hashes, but that expects the images to be identical and leaves no leeway for lighting differences and the like.

    Assuming that you had controlled for the objects in the images being oriented identically and positioned identically, one thing you could do is march through the pixels of each image, and get the HSV values of each like so:

    // Compare pixel (i, j) of both images by hue, saturation and brightness.
    Color color1 = Image1.GetPixel(i, j);
    Color color2 = Image2.GetPixel(i, j);
    float hue1    = color1.GetHue();
    float sat1    = color1.GetSaturation();
    float bright1 = color1.GetBrightness();
    float hue2    = color2.GetHue();
    float sat2    = color2.GetSaturation();
    float bright2 = color2.GetBrightness();
    

    and do some comparisons with those values. That would allow you to compare them, I think, with more reliability than using the RGB values, particularly since you want to include some tolerances in your comparison.


    Edit:

    Just for fun, I wrote a little sample app that used my idea above. Essentially it totaled up the number of pixels whose H, S, and V values differed by some amount (I picked 0.1 as my value) and then dropped out of the comparison loops if the H, S, or V counter exceeded 38400, i.e. 2% of the pixels (0.02 * 1600 * 1200). In the worst case, it took about 2 seconds to compare two identical images. When I compared images where one had been altered enough to exceed that 2% value, it generally took a fraction of a second.

    Obviously, this would likely be too slow if there were lots of images being produced per second, but I thought it was interesting anyway.
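    A rough Python port of the approach described above, for anyone who wants to experiment outside C#. Note one assumption: Python's `colorsys.rgb_to_hsv` returns HSV value, which is close to but not identical to .NET's `GetBrightness` (HSL lightness). The 0.1 tolerance and 2% early-exit cutoff follow the answer.

```python
import colorsys

def images_differ(pixels1, pixels2, tol=0.1, max_bad_frac=0.02):
    """Compare per-pixel HSV values and bail out once more than
    `max_bad_frac` of the pixels differ by > `tol` in hue, saturation,
    or value. Inputs are flat lists of (r, g, b) tuples, channels 0..255.
    """
    limit = max_bad_frac * len(pixels1)
    bad_h = bad_s = bad_v = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(pixels1, pixels2):
        h1, s1, v1 = colorsys.rgb_to_hsv(r1 / 255, g1 / 255, b1 / 255)
        h2, s2, v2 = colorsys.rgb_to_hsv(r2 / 255, g2 / 255, b2 / 255)
        bad_h += abs(h1 - h2) > tol
        bad_s += abs(s1 - s2) > tol
        bad_v += abs(v1 - v2) > tol
        if bad_h > limit or bad_s > limit or bad_v > limit:
            return True  # early exit: too many differing pixels
    return False

# 100 identical pixels vs. 100 pixels where 5 changed from red to blue.
base = [(200, 30, 30)] * 100
altered = [(30, 30, 200)] * 5 + [(200, 30, 30)] * 95
print(images_differ(base, base))     # False
print(images_differ(base, altered))  # True
```

    As in the C# version, the early exit is what keeps the common "clearly different" case fast.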

  • 2021-01-05 02:08

    I don't know the details but I do know that in industrial situations where a high throughput is essential this is sometimes done using neural nets. They turn millions of bits (camera pixels) into 1 (good or bad). Maybe this will help you on your search.

  • 2021-01-05 02:11

    This guy here wrote simple Java code for exactly the same problem. It shouldn't be hard to convert it to C#, I guess. It works just fine, and a newer, stronger version can be found there as well.

  • 2021-01-05 02:19

    I don't know much about OpenCV, but a bit about image processing.

    The way to go depends on how frequently new pictures are taken. A simplistic approach would be to calculate a difference picture between your 'good' template and the image of the actual product.

    If the images are 100% identical, the resulting image should be empty. If there are residual pixels, you can count them and take the count as a measure of deviation from the norm.

    However, you will have to match the orientation (and probably the scale) of one of the images to align their borders, otherwise this approach will not work.

    If you have timing constraints, you might want to reduce the information in your images before processing them (for example by applying edge detection and/or converting them to grayscale, or even to a monochromatic bitmap if your product's features are distinct enough).
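    The reduce-then-diff idea might look like this. This is a NumPy sketch under two assumptions not stated in the answer: grayscale conversion via the standard Rec. 601 luma weights, and a fixed monochrome threshold of 128; `to_monochrome` and `residual_count` are names invented for illustration.

```python
import numpy as np

def to_monochrome(rgb, threshold=128):
    """Reduce an RGB image (H x W x 3, uint8) to a boolean bitmap:
    weighted grayscale conversion followed by a fixed threshold."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = rgb.astype(np.float32) @ weights
    return gray > threshold

def residual_count(template_rgb, current_rgb, threshold=128):
    """Difference picture on the reduced bitmaps:
    XOR the two bitmaps and count the residual pixels."""
    return int(np.count_nonzero(to_monochrome(template_rgb, threshold)
                                ^ to_monochrome(current_rgb, threshold)))

# Toy example: identical frames leave an empty difference picture.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = 255
print(residual_count(frame, frame))  # 0
changed = frame.copy()
changed[0, 0] = 255
print(residual_count(frame, changed))  # 1
```

    Thresholding to 1 bit per pixel makes the diff cheap and also absorbs small lighting variations, at the cost of losing any defect that doesn't cross the threshold.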
