Background subtraction in MATLAB


What kind of image do you work with?

Background subtraction is easy. If you want to subtract a constant value, or a background image of the same size as your image, you simply write img = img - background. imsubtract just makes sure that the output is zero wherever the background is larger than the image.
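For example, with a grayscale uint8 image and a background of the same size (the file names here are just placeholders), both forms behave the same way:

img        = imread('scene.png');        % hypothetical input image
background = imread('background.png');   % hypothetical background image, same size as img

sub1 = img - background;                % uint8 arithmetic saturates, so negative results become 0
sub2 = imsubtract(img, background);     % equivalent for integer images; also clips at zero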

Background estimation is hard. There you need to know what kind of image you're looking at, otherwise, the background estimation will fail.

If you have, for example, spot or line features that are either all dark on a bright background or all bright on a dark background, you can get away with a local maximum filter (imdilate) or a local minimum filter (imerode), respectively, that is larger than your features, so that wherever you place the filter mask, some pixels cover background. You also want the filter to have a shape somewhat similar to the features. In your case, if you lose part of your image, you may want to try making the filter larger (but not too large).
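As a sketch (assuming bright grains on a darker background, using rice.png which ships with the Image Processing Toolbox; the structuring-element size is just a guess and must be larger than the features):

I  = imread('rice.png');                % example grayscale image from the toolbox
se = strel('disk', 15);                 % filter mask larger than the features
backgroundEst = imerode(I, se);         % local minimum estimates the background under bright features
% backgroundEst = imdilate(I, se);      % local maximum instead, for dark features on a bright background
flattened = I - backgroundEst;          % subtract the estimated background
imshowpair(I, flattened, 'montage');    % compare original and background-subtracted image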

Instead of subtracting the local maximum or minimum, subtracting the local median can also work well, though you have to choose the filter size so that background pixels are usually in the majority inside the filter mask. Unfortunately, median filtering is rather slow.
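A minimal median version of the same idea (the window size is a guess, chosen so that background pixels dominate inside it):

I = imread('rice.png');                  % example grayscale image
backgroundEst = medfilt2(I, [31 31]);    % local median as the background estimate (slow for large windows)
flattened = I - backgroundEst;           % subtract the estimated background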

To subtract the background image, you need a model of the background. The simplest model is an image captured of the background alone, together with some allowed deviation (a threshold anywhere from 0 to 255 for uint8 images). Then background subtraction in MATLAB is pretty simple:

img(abs(img - background) <= threshold) = 0;   % zero out pixels that match the background model

It becomes more difficult when you use a statistical model, but essentially subtracting the background is pretty easy. imsubtract is NOT background subtraction; it is a subtraction filter like you would find in Photoshop. It does not distinguish background from foreground, which defeats the point.

Since background subtraction itself is pretty easy, the question becomes more about background estimation. This is a bit more complicated, and generally requires more frames and training to build statistical models of the background (for example, looking at pixels as Gaussian distributions or mixtures of Gaussians, or looking instead at optical flow to determine what's not moving).
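As a rough sketch of the per-pixel single-Gaussian idea (the video name, learning rate alpha, and threshold k are all illustrative, and the frames are assumed to be RGB):

v     = VideoReader('myvideo.avi');      % hypothetical video file
alpha = 0.01;                            % learning rate for updating the model
k     = 2.5;                             % foreground threshold in standard deviations

frame  = im2double(rgb2gray(readFrame(v)));
mu     = frame;                          % initialise the mean with the first frame
sigma2 = 0.01 * ones(size(frame));       % small initial variance

while hasFrame(v)
    frame = im2double(rgb2gray(readFrame(v)));
    fg = abs(frame - mu) > k * sqrt(sigma2);                   % pixels far from the model are foreground
    mu     = (1 - alpha) * mu     + alpha * frame;             % adapt the mean
    sigma2 = (1 - alpha) * sigma2 + alpha * (frame - mu).^2;   % adapt the variance
    imshow(fg); drawnow;                                       % show the foreground mask
end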

If you have access to technical articles (via work or school), "Pfinder: Real-Time Tracking of the Human Body" by Wren et al. gives a pretty simple approach. Or you can just search Google for single-Gaussian background subtraction. There are a number of methods implemented with OpenCV at http://dparks.wikidot.com/source-code that you might find useful.

The Computer Vision System Toolbox has the vision.ForegroundDetector object, which implements a variant of Stauffer and Grimson's GMM background subtraction. The implementation is very fast, leveraging multiple cores. Check out this example of how to use background subtraction as a building block of a system for tracking multiple objects.
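A short usage sketch (the parameter values and the video name are placeholders, not recommendations):

detector = vision.ForegroundDetector('NumGaussians', 3, 'NumTrainingFrames', 50);
v = VideoReader('myvideo.avi');          % hypothetical video file
while hasFrame(v)
    frame  = readFrame(v);
    fgMask = step(detector, frame);      % logical mask of foreground pixels (GMM-based)
    imshow(fgMask); drawnow;             % show the foreground mask
end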
