Question
I'm working on a project where I need to implement collision avoidance using OpenCV. This is to be done on iOS (iOS 5 and above will do).
Project Objective: The idea is to mount an iPad on the car's dashboard and launch the application. The application should grab frames from the camera and process these to detect if the car is going to collide with any obstacle.
I'm a novice at image processing of any sort, so I'm getting stuck at the conceptual level in this project.
What I've done so far:
- Had a look at OpenCV and read about it on the net. Collision avoidance is implemented using the pyramidal Lucas-Kanade method. Is this right?
Using this project as a starting point: http://aptogo.co.uk/2011/09/opencv-framework-for-ios/ It runs successfully on my iPad, and the capture functionality works as well, which means camera capture is well integrated. I changed the processFrame implementation to try optical flow instead of Canny edge detection. Here is the function (still incomplete):
-(void)processFrame
{
    int currSliderVal = self.lowSlider.value;
    if (_prevSliderVal == currSliderVal) return;

    cv::Mat grayFramePrev, grayFrameLast, prevCorners, lastCorners, status, err;

    // Convert both captured frames to grayscale
    cv::cvtColor(_prevFrame, grayFramePrev, cv::COLOR_RGB2GRAY);
    cv::cvtColor(_lastFrame, grayFrameLast, cv::COLOR_RGB2GRAY);

    // Find corners in the previous frame only; calcOpticalFlowPyrLK fills
    // lastCorners with the tracked positions of those corners
    cv::goodFeaturesToTrack(grayFramePrev, prevCorners, 500, 0.01, 10);

    // Track the corners between the two grayscale frames
    cv::calcOpticalFlowPyrLK(grayFramePrev, grayFrameLast,
                             prevCorners, lastCorners, status, err);

    self.imageView.image = [UIImage imageWithCVMat:lastCorners];
    _prevSliderVal = self.lowSlider.value;
}
- Read about optical flow and how it is used (conceptually) to detect an impending collision. Summary: if an object is growing in size but moving towards an edge of the frame, it is not on a collision path; if an object is growing in size but not moving towards any edge, it is on a collision path (I've tried to capture this in the sketch right after this list). Is this right?
- This project (http://se.cs.ait.ac.th/cvwiki/opencv:tutorial:optical_flow) appears to do exactly what I want to achieve, but I did not understand how it does so from reading the code, and I cannot run it as I don't have a Linux box. The explanation on that web page seems to arrive at a homography matrix. How is that result used in collision avoidance?
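Here is the toy check I had in mind for that heuristic (my own rough sketch, assuming an obstacle has already been reduced to a tracked bounding box per frame; the border-band width is an arbitrary choice):

// Toy version of the heuristic: an object whose bounding box grows while
// its center stays away from the frame borders is a collision candidate.
bool onCollisionPath(cv::Rect prevBox, cv::Rect currBox, cv::Size frameSize)
{
    bool growing = currBox.area() > prevBox.area();

    cv::Point center(currBox.x + currBox.width  / 2,
                     currBox.y + currBox.height / 2);
    int band = frameSize.width / 8;   // arbitrary "near the edge" margin
    bool nearEdge = center.x < band || center.x > frameSize.width  - band ||
                    center.y < band || center.y > frameSize.height - band;

    return growing && !nearEdge;
}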
In addition to the four points above, I have read a lot more about this topic, but I still can't put all the pieces together.
Here are my questions (please remember I'm a novice at this):
HOW is optical flow used to detect impending collision? By this I mean: supposing I am able to get correct results from cv::calcOpticalFlowPyrLK(), how do I take it forward from there to detect an impending collision with any object in the frame? Is it possible to gauge the distance to the object we are most likely to collide with?
Is there a sample working project that implements this or any similar functionality that I can have a look at? I looked at the project on eosgarden.com, but no functionality seemed to be implemented in it.
In the above sample code, I'm converting lastCorners to a UIImage and displaying that image on screen. This shows me an image with only colored horizontal lines on the screen, nothing like my original test image. Is this the correct output for that function?
I'm having a little difficulty understanding the datatypes used in this project. InputArray, OutputArray, etc. are the types accepted by the OpenCV APIs, yet in the processFrame function a cv::Mat was being passed to the Canny edge detection method. Do I pass cv::Mat to calcOpticalFlowPyrLK() for prevImage and nextImage?
Thanks in advance :)
Update: Found this sample project (http://www.hatzlaha.co.il/150842/Lucas-Kanade-Detection-for-the-iPhone). It does not compile on my Mac, but I think it will give me working code for optical flow. I still cannot figure out how to detect an impending collision from tracking those points. If any of you can answer even question no. 1, it will be of great help.
Update: It looks like optical flow is used to calculate the FoE (Focus of Expansion), and there can be multiple FoE candidates. Using the FoE, the TTC (Time To Collision) is then derived. I'm not very clear on the latter part, but am I correct so far? Does OpenCV implement FoE and/or TTC?
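To make the FoE part concrete, here is my rough, untested sketch of how I think the FoE could be estimated from the point pairs returned by calcOpticalFlowPyrLK; the function name and the least-squares formulation are my own guesses, not something built into OpenCV:

#include <opencv2/core/core.hpp>
#include <vector>

// Sketch: each flow vector defines a line through its starting point.
// Under pure forward motion all of these lines pass (roughly) through the
// FoE, so a least-squares intersection of the lines estimates it.
cv::Point2f estimateFoE(const std::vector<cv::Point2f>& prevPts,
                        const std::vector<cv::Point2f>& currPts)
{
    cv::Mat A((int)prevPts.size(), 2, CV_32F);
    cv::Mat b((int)prevPts.size(), 1, CV_32F);
    for (int i = 0; i < (int)prevPts.size(); i++) {
        cv::Point2f d = currPts[i] - prevPts[i];   // flow vector
        // The FoE f lies on the line through prevPts[i] along d, i.e.
        // (-d.y, d.x) . (f - prevPts[i]) = 0
        A.at<float>(i, 0) = -d.y;
        A.at<float>(i, 1) =  d.x;
        b.at<float>(i, 0) = -d.y * prevPts[i].x + d.x * prevPts[i].y;
    }
    cv::Mat f;
    cv::solve(A, b, f, cv::DECOMP_SVD);            // least-squares solution
    return cv::Point2f(f.at<float>(0, 0), f.at<float>(1, 0));
}

I suppose points with near-zero flow would have to be filtered out first, and multiple FoE candidates handled with something like RANSAC, but is this the right general idea?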
Answer 1:
1
HOW is optical flow used to detect impending collision?
I've never used optical flow, but the first Google search gave me this paper:
Obstacle Detection using Optical Flow
I don't know if you've already read it. It shows how to estimate time to contact at every angle.
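If I understand the idea correctly, the key relation is TTC ≈ r / (dr/dt): a feature's distance r from the focus of expansion divided by how fast that distance grows per frame. A minimal sketch of that relation (my own helper, assuming you already have an FoE estimate from somewhere):

#include <cfloat>
#include <cmath>
#include <opencv2/core/core.hpp>

// Sketch: time-to-contact (in frames) for one tracked feature, given an
// estimate of the focus of expansion. Multiply by the frame interval to
// convert to seconds.
static float pointDist(cv::Point2f a, cv::Point2f b)
{
    cv::Point2f d = a - b;
    return std::sqrt(d.x * d.x + d.y * d.y);
}

float timeToContact(cv::Point2f foe, cv::Point2f prevPt, cv::Point2f currPt)
{
    float r  = pointDist(currPt, foe);     // distance from the FoE
    float dr = r - pointDist(prevPt, foe); // radial expansion per frame
    return dr > 1e-3f ? r / dr : FLT_MAX;  // no expansion -> "infinite" TTC
}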
3
This shows me an image with only colored horizontal lines on the screen, nothing like my original test image.
I suppose the output of goodFeaturesToTrack is not an image but a table of points. See, for example, how they are used in a Python example (for an old version of OpenCV). The same probably applies to the output of calcOpticalFlowPyrLK. Look at what's there in the debugger first; I usually use Python + OpenCV to understand the output of unfamiliar OpenCV functions.
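For example, something like this sketch (with frame and gray standing in for your color frame and its grayscale version) draws the points instead of displaying the raw point matrix:

// Sketch: visualize the corner list by drawing it onto a copy of the frame.
std::vector<cv::Point2f> corners;
cv::goodFeaturesToTrack(gray, corners, 500, 0.01, 10);

cv::Mat vis = frame.clone();
for (size_t i = 0; i < corners.size(); i++)
    cv::circle(vis, corners[i], 3, cv::Scalar(0, 255, 0), -1); // filled dot
// display vis, not the Nx1 matrix of points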
4
I'm having a little difficulty understanding the datatypes used in this project. InputArray, OutputArray, etc. are the types accepted by the OpenCV APIs, yet in the processFrame function a cv::Mat was being passed to the Canny edge detection method. Do I pass cv::Mat to calcOpticalFlowPyrLK() for prevImage and nextImage?
From the documentation:
"This is the proxy class for passing read-only input arrays into OpenCV functions. ... _InputArray is a class that can be constructed from Mat, Mat_<T>, Matx<T, m, n>, std::vector<T>, std::vector<std::vector<T> > or std::vector<Mat>. It can also be constructed from a matrix expression."

So you can just pass a Mat. Some older functions still expect only Mat.
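A short sketch of both calling styles (prevGray and nextGray are assumed to be existing grayscale cv::Mat frames):

// Either cv::Mat or std::vector converts implicitly to InputArray /
// OutputArray, so both of these compile and do the same thing.
cv::Mat prevPtsMat, nextPtsMat, statusMat, errMat;
cv::goodFeaturesToTrack(prevGray, prevPtsMat, 500, 0.01, 10);
cv::calcOpticalFlowPyrLK(prevGray, nextGray, prevPtsMat, nextPtsMat,
                         statusMat, errMat);

std::vector<cv::Point2f> prevPts, nextPts;   // often handier to work with
std::vector<uchar> status;
std::vector<float> err;
cv::goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 10);
cv::calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);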
Source: https://stackoverflow.com/questions/11553861/collision-avoidance-using-opencv-on-ipad