Find interest point in SURF Detector Algorithm


Question


I have tried hard, but I cannot figure out how to find a single point of interest with the SURF algorithm in Emgu CV. I wrote the code below. Depending on the input image, execution sometimes enters the if statement near the comment marked "My Section number = 1" and sometimes does not. Why is that? Only when it does is the homography computed as non-null, and only then can I draw the circle or the lines. The drawing also has a problem: the circle or rectangle is drawn at point (0,0) on the image. Please help me; I will be grateful.

public Image<Bgr, Byte> Draw(Image<Gray, byte> conditionalImage, Image<Gray, byte> observedImage, out long matchTime)
{
    //observedImage = observedImage.Resize(, INTER.CV_INTER_LINEAR);
    Stopwatch watch;
    HomographyMatrix homography = null;

    SURFDetector surfCPU = new SURFDetector(500, false);
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    Matrix<int> indices;

    Matrix<byte> mask;
    int k = 2;
    double uniquenessThreshold = 0.8;

    // extract features from the model (conditional) image
    modelKeyPoints = surfCPU.DetectKeyPointsRaw(conditionalImage, null);
    Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(conditionalImage, null, modelKeyPoints);

    watch = Stopwatch.StartNew();

    // extract features from the observed image
    observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
    Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);
    BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
    matcher.Add(modelDescriptors);

    indices = new Matrix<int>(observedDescriptors.Rows, k);
    using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
    {
        matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
        mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
    }

    int nonZeroCount = CvInvoke.cvCountNonZero(mask);

    // My Section number = 1
    if (nonZeroCount >= 4)
    {
        nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
        if (nonZeroCount >= 4)
            homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
    }

    watch.Stop();

    // draw the matched keypoints
    Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(conditionalImage, modelKeyPoints, observedImage, observedKeyPoints,
        indices, new Bgr(Color.Blue), new Bgr(Color.Red), mask, Features2DToolbox.KeypointDrawType.DEFAULT);

    #region draw the projected region on the image
    if (homography != null)
    {
        // draw a rectangle along the projected model
        Rectangle rect = conditionalImage.ROI;
        PointF[] pts = new PointF[] {
            new PointF(rect.Left, rect.Bottom),
            new PointF(rect.Right, rect.Bottom),
            new PointF(rect.Right, rect.Top),
            new PointF(rect.Left, rect.Top)};
        homography.ProjectPoints(pts);

        // centre of the projected region
        PointF _circleCenter = new PointF();
        _circleCenter.X = pts[3].X + ((pts[2].X - pts[3].X) / 2);
        _circleCenter.Y = pts[3].Y + ((pts[0].Y - pts[3].Y) / 2);

        result.Draw(new CircleF(_circleCenter, 15), new Bgr(Color.Red), 10);
        result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Cyan), 5);
    }
    #endregion

    matchTime = watch.ElapsedMilliseconds;

    return result;
}

Answer 1:


modelKeyPoints = surfCPU.DetectKeyPointsRaw(conditionalImage, null);

After this line of code runs, modelKeyPoints holds all the interest points detected in the model image. The same goes for the observed image.
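If what you need is literally a single interest point, you can pick one out of that vector yourself. Below is a minimal sketch (not from the original answer) that selects the key point with the strongest detector response; it assumes Emgu CV 2.x, where VectorOfKeyPoint.ToArray() yields MKeyPoint structs whose Response field scores how distinctive each point is:

// Sketch: pick the single strongest interest point from the detected set.
MKeyPoint[] points = modelKeyPoints.ToArray();
MKeyPoint strongest = points[0];   // assumes at least one point was detected
foreach (MKeyPoint kp in points)
{
    if (kp.Response > strongest.Response)
        strongest = kp;
}
// strongest.Point now holds the image coordinates of that key point.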

Once you have the key points of both images, you need to establish a relationship between the points in the observed image and the points in the model image. To do this, you run a k-nearest-neighbour (kNN) match:

using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
{
    matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
    mask = new Matrix<byte>(dist.Rows, 1);
    mask.SetValue(255);
    Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
}

Basically this calculates, for each point in the observed image, the 2 (k) nearest points in the model image. If the ratio between the closest and the second-closest distance is above 0.8 (uniquenessThreshold), the two candidates are too similar to tell apart, so the match is rejected as ambiguous; otherwise it is kept. For this process you use a mask that works both as input and output: as input it says which points should be matched, and as output it says which points were matched correctly.
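For intuition, here is a rough, hypothetical equivalent of what VoteForUniqueness computes (it assumes, as KnnMatch guarantees, that each row of dist is sorted in ascending order, and it would have to run inside the using block above, before dist is disposed):

// Hypothetical sketch of the ratio test done by VoteForUniqueness:
// dist[i, 0] is the distance from observed descriptor i to its best
// model match, dist[i, 1] the distance to the second best.
for (int i = 0; i < dist.Rows; i++)
{
    if (dist[i, 0] >= uniquenessThreshold * dist[i, 1])
        mask[i, 0] = 0;   // ambiguous match: reject it
}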

Then, the number of non-zero values in the mask is the number of points matched.
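A short sketch of that check, mirroring the guard in your code:

// Count the surviving matches; estimating a homography needs at
// least 4 correspondences, hence the nonZeroCount >= 4 guard.
int matched = CvInvoke.cvCountNonZero(mask);
bool canEstimateHomography = matched >= 4;

When fewer than 4 matches survive the voting, homography stays null and the drawing block never runs, which is why execution only sometimes enters the if statement at your section "1": it depends entirely on how many reliable matches each image pair produces.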



Source: https://stackoverflow.com/questions/23637916/find-interest-point-in-surf-detector-algorithm
