Optical Flow

OpenCV GPU Farneback Optical Flow works badly in multi-threading

我的梦境 submitted on 2019-12-06 00:58:32
Question: My application uses the OpenCV GPU class gpu::FarnebackOpticalFlow to compute the optical flow between pairs of consecutive frames of an input video. To speed up the process, I exploited OpenCV's TBB support to run the method in multiple threads. However, the multi-threaded performance does not behave like the single-threaded one. Just to give you an idea of the different behaviour, here are two snapshots, respectively of the single-threaded and the multi-threaded

Background subtraction and Optical flow for tracking object in OpenCV C++

爷,独闯天下 submitted on 2019-12-05 16:35:05
I am working on a project to detect objects of interest using background subtraction and track them using optical flow in OpenCV C++. I was able to detect the object of interest using background subtraction, and I was able to implement OpenCV's Lucas-Kanade optical flow in a separate program. But I am stuck on how to combine these two programs into a single one. frame1 holds the actual frame from the video, and contours2 holds the selected contours of the foreground object. To summarize, how do I feed the foreground object obtained from the background subtraction method to calcOpticalFlowPyrLK? Or, help me if my

How to compute optical flow using tvl1 opencv function

戏子无情 submitted on 2019-12-04 23:58:19
Question: I'm trying to find a Python example for computing optical flow with the TV-L1 OpenCV function createOptFlow_DualTVL1, but it seems that there isn't enough documentation for it. Could anyone please help me do that? I've used calcOpticalFlowFarneback mentioned here http://docs.opencv.org/master/d7/d8b/tutorial_py_lucas_kanade.html but it is not giving me accurate results; will TV-L1 be good enough, and if not, is there another method I should look for? [[EDIT]] I have some regions coming from selective search; I want to keep only regions with motion in them, so computing the OF for a given frame and then getting the avg in

Collision Avoidance using OpenCV on iPad

为君一笑 submitted on 2019-12-04 14:42:54
I'm working on a project where I need to implement collision avoidance using OpenCV. This is to be done on iOS (iOS 5 and above will do). Project objective: the idea is to mount an iPad on the car's dashboard and launch the application. The application should grab frames from the camera and process them to detect whether the car is going to collide with any obstacle. I'm a novice at image processing, hence I'm getting stuck at the conceptual level in this project. What I've done so far: had a look at OpenCV and read about it on the net. Collision avoidance is implemented using Lucas
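At the conceptual level, a classic monocular collision cue from optical flow is expansion: an obstacle being approached produces a flow field that radiates outward, so the divergence of the flow is positive. A minimal NumPy sketch of that cue, using a synthetic expanding flow field (the 0.05 expansion rate and the idea of thresholding the score are assumptions, not from the question):

```python
import numpy as np

def expansion_score(flow):
    # Mean divergence of the flow field: positive values mean the image is
    # expanding, i.e. the camera is closing in on what it sees.
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    return float(np.mean(du_dx + dv_dy))

# Synthetic radially expanding flow centred in the frame -- the pattern a
# looming obstacle produces.
h, w = 60, 80
ys, xs = np.mgrid[0:h, 0:w]
flow = np.dstack([(xs - w / 2) * 0.05, (ys - h / 2) * 0.05]).astype(np.float32)
score = expansion_score(flow)  # positive -> potential collision ahead
```

In a real pipeline the flow would come from a dense estimator such as calcOpticalFlowFarneback on consecutive dashboard frames, with the score thresholded over time.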

OpenCV warping image based on calcOpticalFlowFarneback

风流意气都作罢 submitted on 2019-12-04 03:05:05
I'm trying to perform a complex warp of an image using dense optical flow (I am trying to warp the second image into roughly the same shape as the first image). I'm probably getting this all wrong, but I'll post up what I've tried: cv::Mat flow; cv::calcOpticalFlowFarneback( mGrayFrame1, mGrayFrame2, flow, 0.5, 3, 15, 3, 5, 1.2, 0 ); cv::Mat newFrame = cv::Mat::zeros( frame.rows, frame.cols, frame.type() ); cv::remap( frame, newFrame, flow, cv::Mat(), CV_INTER_LINEAR ); The idea is that I am calculating the flow mat from two grayscale frames. I get back a flow mat which seems to make sense, but now

Lucas Kanade Optical Flow, Direction Vector

元气小坏坏 submitted on 2019-12-03 14:00:43
Question: I am working on optical flow, and based on the lecture notes here and some samples on the Internet, I wrote this Python code. All code and sample images are there as well. For small displacements of around 4-5 pixels, the direction of the calculated vector seems to be fine, but the magnitude of the vector is too small (that's why I had to multiply u, v by 3 before plotting them). Is this because of a limitation of the algorithm, or an error in the code? The lecture notes shared above also say that motion needs to be small ("u, v are less than 1 pixel"), so maybe that's why. What is the reason for this
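The underestimated magnitude is expected behaviour rather than a bug: basic Lucas-Kanade linearizes the brightness-constancy constraint with a first-order Taylor expansion, which only holds for roughly sub-pixel motion; for 4-5 pixel displacements the linearization breaks down and pyramidal (coarse-to-fine) implementations exist precisely for that reason. A single-window NumPy sketch showing the estimate is accurate when the shift really is ~1 pixel (the Gaussian blob test image is an assumption for demonstration):

```python
import numpy as np

def lucas_kanade_patch(im1, im2):
    # Single-window Lucas-Kanade: solve A v = b in least squares, where the
    # rows of A are spatial gradients and b is minus the temporal difference.
    Iy, Ix = np.gradient(im1)          # np.gradient returns d/drow, d/dcol
    It = im2 - im1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Broad Gaussian blob translated right by exactly 1 pixel: the recovered u
# comes out close to 1 because the motion is within the linear regime.
ys, xs = np.mgrid[0:64, 0:64]
blob = np.exp(-((xs - 32) ** 2 + (ys - 32) ** 2) / (2 * 8.0 ** 2))
shifted = np.roll(blob, 1, axis=1)
u, v = lucas_kanade_patch(blob, shifted)
```

Repeating the same experiment with a 4-5 pixel shift shows the magnitude shrinking, which matches the behaviour described in the question.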

What is the difference between sparse and dense optical flow?

家住魔仙堡 submitted on 2019-12-03 07:50:14
Question: Lots of resources say that there are two types of optical flow algorithms, and that Lucas-Kanade is a sparse technique, but I can't find the meanings of sparse and dense. Can someone tell me what the difference between dense and sparse optical flow is? Answer 1: The short explanation is that sparse techniques only need to process some pixels from the whole image, while dense techniques process all the pixels. Dense techniques are slower but can be more accurate, though in my experience Lucas-Kanade accuracy might be

Fast, very lightweight algorithm for camera motion detection?

丶灬走出姿态 submitted on 2019-12-03 03:19:54
Question: I'm working on an augmented reality app for iPhone that involves a very processor-intensive object-recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort both to save battery power and to make the whole thing less "jittery", I'm trying to come up with a way to run that object recognizer only when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometer / gyroscope, but in testing I found