Background subtraction

OpenCV C++/Obj-C: Proper object detection

Submitted by 对着背影说爱祢 on 2019-11-28 23:05:38
Question: As a kind of "holiday project" I'm playing around with OpenCV and want to detect and measure stuff. Current workflow (early stage - detection): Convert to grayscale (cv::cvtColor) Apply adaptive threshold (cv::adaptiveThreshold) Apply Canny edge detection (cv::Canny) Find contours (cv::findContours) My outcome is kinda crappy and I'm not sure what's the right direction to go. I already got cvBlob working under my current setup (OSX 10.7.2, Xcode 4.2.1) - is that a better way to go? If so,

OpenCV: how to use createBackgroundSubtractorMOG

Submitted by 人盡茶涼 on 2019-11-28 18:21:57
I was trying to go through this tutorial at OpenCV.org: http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html#background-subtraction The MOG pointer is initialized as Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor and in main, it is used in the following manner: pMOG = createBackgroundSubtractorMOG(); However, this yields the following error: Error: Identifier "createBackgroundSubtractorMOG" is undefined Also, when the background model is to be updated, the following command is used: pMOG->apply(frame, fgMaskMOG); Which in turn yields

Efficient Background subtraction with OpenCV

Submitted by 本小妞迷上赌 on 2019-11-28 04:01:35
I want to do background subtraction in a video file using an OpenCV method. Right now I'm able to do background subtraction, but the problem is that I can't get the output in color. All the output after subtracting the background comes out in grayscale :( . I want to keep the color information in the foreground, which is the resulting output after the background is subtracted. Can I do it with a masking technique? Like the following procedure I'm thinking about: Capture Input -- InputFrame (RGB) Process InputFrame Subtract background, store foreground in TempFrame (which is

Otsu thresholding for depth image

Submitted by ℡╲_俬逩灬. on 2019-11-27 16:08:44
Question: I am trying to subtract the background from depth images acquired with a Kinect. When I learned what Otsu thresholding is, I thought it could help. By converting the depth image to grayscale I can hopefully apply an Otsu threshold to binarize the image. However, when I implemented (tried to implement) this with OpenCV 2.3, it was in vain. The output image is binarized, but very unexpectedly. I did the thresholding continuously (i.e. printed the result to screen to analyze each frame) and saw

OpenCV background subtraction

Submitted by 半腔热情 on 2019-11-27 02:51:09
Question: I have an image of the background scene and an image of the same scene with objects in front. Now I want to create a mask of the object in the foreground with background subtraction. Both images are RGB. I have already created the following code: cv::Mat diff; diff.create(orgImage.dims, orgImage.size, CV_8UC3); diff = abs(orgImage-refImage); cv::Mat mask(diff.rows, diff.cols, CV_8U, cv::Scalar(0,0,0)); //mask = (diff > 10); for (int j=0; j<diff.rows; j++) { // get the address of row j /

How can I quantify difference between two images?

Submitted by 青春壹個敷衍的年華 on 2019-11-25 23:08:45
Here's what I would like to do: I'm taking pictures with a webcam at regular intervals. Sort of like a time-lapse thing. However, if nothing has really changed, that is, the picture pretty much looks the same, I don't want to store the latest snapshot. I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold. I'm looking for simplicity rather than perfection. I'm using Python. Answer 1: General idea Option 1: Load both images as arrays (scipy.misc.imread) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the
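Option 1 above can be sketched in a few lines: take the element-wise difference of the two frames and reduce it to a single number with a norm, then compare that number against an empirically chosen threshold. (scipy.misc.imread has since been removed from SciPy; any loader that yields a NumPy array works, so plain arrays stand in for the snapshots here.)

```python
import numpy as np

def frame_distance(a, b):
    """Mean absolute per-pixel difference between two equal-sized frames."""
    # Cast up before subtracting to avoid uint8 wraparound.
    diff = a.astype(np.int16) - b.astype(np.int16)
    return np.linalg.norm(diff.ravel(), ord=1) / diff.size

prev = np.zeros((10, 10), np.uint8)
same = prev.copy()
changed = prev.copy()
changed[2:6, 2:6] = 100   # a 4x4 patch changed by 100
```

frame_distance(prev, same) is 0, while frame_distance(prev, changed) is 16 (1600 total difference over 100 pixels); a snapshot would only be stored when the distance exceeds the chosen threshold. Normalizing by size keeps the threshold independent of image resolution.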