Question
I have a 2-dimensional array representing an image (a coordinate plane). On that image I am looking for "red" pixels and, from all of the red pixels my camera finds, trying to locate a red LED target. Currently I simply place my crosshairs on the centroid of all of the red pixels:
// pseudo-code
vals = 0; cx = 0; cy = 0;
for(y = 0; y < height; y++)
{
    for(x = 0; x < width; x++)
    {
        if( is_red(pixel[x][y]) )
        {
            vals++;  // total number of red pixels
            cx += x; // sum the x's
            cy += y; // sum the y's
        }
    }
}
if(vals > 0)
{
    cx /= vals; // divide by total to get average x
    cy /= vals; // divide by total to get average y
    draw_crosshairs_at(pixel[cx][cy]); // found the centroid
}
The problem with this method is that, while the centroid is naturally pulled toward the largest blob (the area with the most red pixels), my crosshairs still jump off the target whenever a bit of red flickers off to the side due to glare or other minor interference.
My question is this:
How do I change this pattern to look for a more weighted centroid? Put simply, I want to make the larger blobs of red much more important than the smaller ones, possibly even ignoring far-out small blobs altogether.
Answer 1:
You could find the connected components in the image and include only those components whose total size is above a certain threshold in your centroid calculation.
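A minimal C++ sketch of that idea, assuming a hypothetical is_red(x, y) test and known image width/height (neither appears in the original pseudo-code); it labels 8-connected red regions with a BFS flood fill and then averages only the regions that clear a size threshold:

#include <queue>
#include <utility>
#include <vector>

bool is_red(int x, int y); // hypothetical red test for the pixel at (x, y)

struct Blob {
    long sum_x = 0, sum_y = 0; // running coordinate sums for this component
    long size  = 0;            // number of red pixels in the component
};

// Label 8-connected red regions with a BFS flood fill; one Blob per region.
std::vector<Blob> find_red_blobs(int width, int height)
{
    std::vector<char> visited(width * height, 0);
    std::vector<Blob> blobs;

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (visited[y * width + x] || !is_red(x, y)) continue;

            Blob blob;
            std::queue<std::pair<int, int>> frontier;
            frontier.push({x, y});
            visited[y * width + x] = 1;

            while (!frontier.empty()) {
                auto [px, py] = frontier.front();
                frontier.pop();
                blob.sum_x += px;
                blob.sum_y += py;
                ++blob.size;

                // Visit the 8 neighbors that are red and not yet labeled.
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = px + dx, ny = py + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        if (visited[ny * width + nx] || !is_red(nx, ny)) continue;
                        visited[ny * width + nx] = 1;
                        frontier.push({nx, ny});
                    }
                }
            }
            blobs.push_back(blob);
        }
    }
    return blobs;
}

// Centroid over only the components whose size clears the threshold.
// Returns false if no component is large enough.
bool thresholded_centroid(const std::vector<Blob>& blobs, long min_size,
                          int& cx, int& cy)
{
    long sum_x = 0, sum_y = 0, total = 0;
    for (const Blob& b : blobs) {
        if (b.size < min_size) continue; // ignore small, far-out specks
        sum_x += b.sum_x;
        sum_y += b.sum_y;
        total += b.size;
    }
    if (total == 0) return false;
    cx = static_cast<int>(sum_x / total);
    cy = static_cast<int>(sum_y / total);
    return true;
}

The min_size threshold would need to be tuned to the apparent size of the LED at your working distance; glare specks are typically only a handful of pixels.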
Answer 2:
I think the easiest (and maybe naïve) approach would be: instead of counting just the pixel itself, also count the surrounding 8 pixels (9 in total). Each pixel's count can then range from 0 to 9, giving larger values to pixels inside blobs of the same color. So instead of vals++, you increment by the number of red pixels in the surrounding neighborhood as well. A sketch of this weighting is given below.
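A minimal sketch of that neighborhood weighting, again assuming the hypothetical is_red(x, y) helper and known image dimensions; each red pixel contributes in proportion to how many red neighbors it has, so isolated specks barely move the centroid:

bool is_red(int x, int y); // hypothetical red test for the pixel at (x, y)

// Count how many pixels in the 3x3 neighborhood around (x, y) are red,
// including (x, y) itself; neighbors outside the image are skipped.
int red_neighborhood(int x, int y, int width, int height)
{
    int count = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && ny >= 0 && nx < width && ny < height && is_red(nx, ny))
                ++count;
        }
    return count;
}

// Weighted centroid: weight each red pixel by its neighborhood count (1..9).
// Returns false if no red pixel was found.
bool weighted_centroid(int width, int height, int& cx, int& cy)
{
    long sum_x = 0, sum_y = 0, total_weight = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            if (!is_red(x, y)) continue;
            long w = red_neighborhood(x, y, width, height);
            sum_x += w * x;
            sum_y += w * y;
            total_weight += w;
        }
    if (total_weight == 0) return false;
    cx = static_cast<int>(sum_x / total_weight);
    cy = static_cast<int>(sum_y / total_weight);
    return true;
}

This damps the effect of single-pixel glare but, unlike the threshold on connected-component size, it does not ignore small blobs entirely.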
Source: https://stackoverflow.com/questions/7326892/weighted-centroid-of-an-array