Question
I'm working on a small image processing assignment where I need to track 4 red objects. I have figured out how to track a single one; now I want to know the best approach for tracking more than one point.
The 4 points are positioned to form a rectangle, so could I use shape detection or corner detection to detect and track them? Please see the image below.
Answer 1:
My naive implementation uses a technique described at OpenCV bounding boxes to do the tracking of red blobs.
The following is a helper function used to retrieve the center of all the red objects that were detected:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

/* get_positions: a function to retrieve the center of the detected blobs.
 * Largely based on OpenCV's "Creating Bounding boxes and circles for contours" tutorial.
 */
std::vector<cv::Point2f> get_positions(cv::Mat& image)
{
    if (image.channels() > 1)
    {
        std::cout << "get_positions: !!! Input image must have a single channel" << std::endl;
        return std::vector<cv::Point2f>();
    }

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(image, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Approximate contours to polygons, then take the center of the
    // minimum enclosing circle of each polygon as the object's position
    std::vector<std::vector<cv::Point> > contours_poly(contours.size());
    std::vector<cv::Point2f> center(contours.size());
    std::vector<float> radius(contours.size());

    for (unsigned int i = 0; i < contours.size(); i++)
    {
        cv::approxPolyDP(cv::Mat(contours[i]), contours_poly[i], 5, true);
        cv::minEnclosingCircle(contours_poly[i], center[i], radius[i]);
    }

    return center;
}
I wrote the code to test my approach in real-time by capturing frames from a webcam. The overall procedure is quite similar to what @Dennis described (sorry, I was already coding when you submitted your answer).
OK, so this is where the fun actually begins.
int main()
{
    // Open the capture device. My webcam ID is 0:
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
    {
        std::cout << "!!! Failed to open webcam" << std::endl;
        return -1;
    }

    // Let's create a few window titles for debugging purposes
    std::string wnd1 = "Input", wnd2 = "Red Objs", wnd3 = "Output";

    // These are the HSV values used later to isolate RED-ish colors
    int low_h = 160, low_s = 140, low_v = 50;
    int high_h = 179, high_s = 255, high_v = 255;

    cv::Mat frame, hsv_frame, red_objs;
    while (true)
    {
        // Retrieve a new frame from the camera
        if (!cap.read(frame))
            break;

        cv::Mat orig_frame = frame.clone();
        cv::imshow(wnd1, orig_frame);

        // Convert the BGR frame to HSV to make it easier to separate the colors
        cv::cvtColor(frame, hsv_frame, cv::COLOR_BGR2HSV);

        // Isolate red colored objects and save them in a binary image
        cv::inRange(hsv_frame,
                    cv::Scalar(low_h, low_s, low_v),
                    cv::Scalar(high_h, high_s, high_v),
                    red_objs);

        // Remove really small objects (mostly noise)
        cv::erode(red_objs, red_objs, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
        cv::dilate(red_objs, red_objs, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7)));

        cv::Mat objs = red_objs.clone();
        cv::imshow(wnd2, objs);

        // Retrieve a vector of points with the (x,y) location of the objects
        std::vector<cv::Point2f> points = get_positions(objs);

        // Draw a small green circle at those locations for educational purposes
        for (unsigned int i = 0; i < points.size(); i++)
            cv::circle(frame, points[i], 3, cv::Scalar(0, 255, 0), -1, 8, 0);
        cv::imshow(wnd3, frame);

        char key = cv::waitKey(33);
        if (key == 27) { /* ESC was pressed */
            //cv::imwrite("out1.png", orig_frame);
            //cv::imwrite("out2.png", red_objs);
            //cv::imwrite("out3.png", frame);
            break;
        }
    }

    cap.release();
    return 0;
}
Answer 2:
Here is my implementation on GitHub: https://github.com/Smorodov/Multitarget-tracker and a video on YouTube: http://www.youtube.com/watch?v=2fW5TmAtAXM&list=UUhlR5ON5Uqhi_3RXRu-pdVw
In short:
- Detect objects. This step provides a set of points (the coordinates of the detected objects).
- Solve an assignment problem (rectangular Hungarian algorithm). This step assigns detected objects to existing tracks.
- Manage unassigned/lost tracks. This step deletes tracks with too many missed detections in a row and adds tracks for new detections.
- Apply a statistical filter to each track (a Kalman filter in this case) to predict missing detections and smooth the tracks using information about the objects' dynamics (defined in the Kalman filter matrices).
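The assignment step (second bullet) can be illustrated with a simplified sketch. The linked implementation solves a rectangular assignment problem with the Hungarian algorithm, which yields a globally optimal matching; the greedy nearest-neighbor matcher below is a deliberately simpler stand-in, and the `Pt` struct, the `assign_greedy` name, and the `max_dist` gating threshold are illustrative assumptions, not part of the linked code:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Plain stand-in for cv::Point2f so the sketch has no OpenCV dependency.
struct Pt { float x, y; };

static float dist(const Pt& a, const Pt& b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Greedy nearest-neighbor assignment: repeatedly pair the closest remaining
// (track, detection) couple until no pair lies within max_dist.
// Returns (track index, detection index) pairs.
std::vector<std::pair<int, int> > assign_greedy(const std::vector<Pt>& tracks,
                                                const std::vector<Pt>& dets,
                                                float max_dist)
{
    std::vector<std::pair<int, int> > matches;
    std::vector<bool> t_used(tracks.size(), false), d_used(dets.size(), false);

    while (true)
    {
        float best = max_dist;
        int bt = -1, bd = -1;
        for (size_t t = 0; t < tracks.size(); t++)
        {
            if (t_used[t]) continue;
            for (size_t d = 0; d < dets.size(); d++)
            {
                if (d_used[d]) continue;
                float c = dist(tracks[t], dets[d]);
                if (c < best) { best = c; bt = (int)t; bd = (int)d; }
            }
        }
        if (bt < 0) break; // no remaining pair within max_dist
        t_used[bt] = true;
        d_used[bd] = true;
        matches.push_back(std::make_pair(bt, bd));
    }
    return matches;
}
```

Tracks or detections left unmatched after this loop are exactly the "unassigned/lost" cases handled in the third bullet. Unlike the Hungarian algorithm, the greedy version can pick a locally closest pair that forces a worse overall matching, which is why the real implementation solves the full assignment problem.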
BTW, to get the coordinates of all 4 points you only need to know 3 of them: because your pattern is rectangular, you can compute the 4th point.
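That observation is plain vector arithmetic: if A, B, C are three corners of the rectangle with B adjacent to both A and C, the missing corner is D = A + C - B. A minimal sketch (the `Pt` struct and the `fourth_corner` name are illustrative, not from the linked code):

```cpp
#include <cassert>

// Plain stand-in for cv::Point2f.
struct Pt { float x, y; };

// Given three corners of a rectangle where `b` is adjacent to both `a`
// and `c` (i.e. a-b and b-c are two sides), the missing corner completes
// the parallelogram: d = a + c - b.
Pt fourth_corner(const Pt& a, const Pt& b, const Pt& c)
{
    Pt d;
    d.x = a.x + c.x - b.x;
    d.y = a.y + c.y - b.y;
    return d;
}
```

Note the formula assumes you know which of the three detected corners is the one adjacent to both others; with noisy detections that can be decided by picking the corner where the two edge vectors are closest to perpendicular.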
Answer 3:
Here are the steps for multiple colored object tracking:
For each frame do the following steps:
- Convert your input image (BGR) to HSV color space using cv::cvtColor()
- Segment red colored objects using cv::inRange()
- Extract each blob from the image using cv::findContours()
- For each blob, calculate its center using cv::boundingRect()
- Match the blobs from the previous frame (allowing for some movement) to the blobs in the current frame by comparing the distances between their centers: match the pair with the smallest distance
This is the basis of the algorithm. Then you have to handle the situations when blobs enter the image (there is a blob in the current frame but no close blob in the previous frame) or leave the image (there is a blob in the previous frame but no close blob in the current frame).
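The matching step plus the enter/leave handling can be sketched as one per-frame update. This is a hedged sketch rather than the answerer's code: the `Track` struct, the `update_tracks` name, the `max_dist` threshold, and the `max_missed` deletion rule are all assumptions made for illustration (plain structs are used instead of cv:: types):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Plain stand-in for cv::Point2f.
struct Pt { float x, y; };

struct Track {
    Pt pos;
    int missed; // consecutive frames without a matching blob
};

// One per-frame update: match tracks to the closest unused blob, age tracks
// that found no blob (blob left the image), drop tracks missing too long,
// and start a new track for every unmatched blob (blob entered the image).
void update_tracks(std::vector<Track>& tracks, const std::vector<Pt>& blobs,
                   float max_dist, int max_missed)
{
    std::vector<bool> used(blobs.size(), false);

    for (size_t t = 0; t < tracks.size(); t++) {
        float best = max_dist;
        int bi = -1;
        for (size_t b = 0; b < blobs.size(); b++) {
            if (used[b]) continue;
            float d = std::hypot(tracks[t].pos.x - blobs[b].x,
                                 tracks[t].pos.y - blobs[b].y);
            if (d < best) { best = d; bi = (int)b; }
        }
        if (bi >= 0) {
            tracks[t].pos = blobs[bi];
            tracks[t].missed = 0;
            used[bi] = true;
        } else {
            tracks[t].missed++; // blob left the image (or was occluded)
        }
    }

    // Drop tracks that have been missing for too many frames in a row.
    tracks.erase(std::remove_if(tracks.begin(), tracks.end(),
                     [max_missed](const Track& t) { return t.missed > max_missed; }),
                 tracks.end());

    // Any blob without a matching track starts a new track.
    for (size_t b = 0; b < blobs.size(); b++) {
        if (!used[b]) {
            Track nt;
            nt.pos = blobs[b];
            nt.missed = 0;
            tracks.push_back(nt);
        }
    }
}
```

Calling this once per frame with the centers returned by the blob-extraction step keeps the track list in sync with objects entering and leaving the scene.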
Source: https://stackoverflow.com/questions/25771252/multiple-tracking-in-a-video