SURF

OpenCV - Object matching using SURF descriptors and BruteForceMatcher

不想你离开。 Submitted on 2019-12-17 21:52:25
Question: I have a question about object matching with OpenCV. I'm using the SURF algorithm implemented in OpenCV 2.3 to first detect features in each image and then extract the descriptors of those features. The problem is in matching with the BruteForceMatcher: I don't know how to judge whether the two images match or not, because even when I use two different images, lines are drawn between descriptors in the two images! These are the outputs of my code; whether the two images I compare are …
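The standard way to turn "lines between descriptors" into a yes/no decision is Lowe's ratio test: keep a match only when its best distance is clearly smaller than the second-best, then require a minimum number of survivors. A minimal sketch, assuming `knn_matches` holds (best, second-best) distance pairs as a `knnMatch(k=2)` call would give; the 0.75 ratio and the threshold of 10 are conventional illustrative values, not from the question:

```python
# Sketch of Lowe's ratio test for deciding whether two images match.
# knn_matches: list of (best_distance, second_best_distance) pairs,
# one per query descriptor. All names/thresholds here are illustrative.

def good_matches(knn_matches, ratio=0.75):
    """Keep a match only when the best distance is clearly smaller
    than the second-best (Lowe's ratio test)."""
    return [pair for pair in knn_matches if pair[0] < ratio * pair[1]]

def images_match(knn_matches, min_good=10, ratio=0.75):
    """Declare a match when enough matches survive the ratio test."""
    return len(good_matches(knn_matches, ratio)) >= min_good

# Different images still produce nearest neighbours, but best and
# second-best distances are close, so almost nothing survives:
print(images_match([(0.70, 0.72)] * 40))   # False
print(images_match([(0.20, 0.60)] * 40))   # True
```

The point is that a brute-force matcher always returns *some* nearest neighbour for every descriptor; ambiguity of that neighbour, not its mere existence, is what separates matching images from non-matching ones.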

What does the box filter look like? (SURF) (DoB)

别说谁变了你拦得住时间么 Submitted on 2019-12-14 03:24:09
Question: I'm trying to understand how the SURF algorithm works. My understanding of the integral image is clear, but what does the box filter look like in the horizontal, vertical, and diagonal directions? Could someone give me an example? Answer 1: A box filter is an averaging filter in which all pixels are weighted equally. A normalized 5x5 box filter kernel: Source: https://stackoverflow.com/questions/36161572/how-does-the-box-filter-look-like-surf-dob
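The kernel illustration from the answer did not survive the scrape, but a normalized box kernel is easy to reconstruct: every weight is 1/(size²), so the output is the plain average of the pixels under the window and the weights sum to 1. (In SURF itself such box filters are used to approximate the Gaussian second derivatives Dxx, Dyy, and Dxy over the integral image.) A small sketch:

```python
# The missing illustration: a normalized 5x5 box filter kernel.
# Every weight is 1/25, so convolving with it averages the 25 pixels
# under the window, and the weights sum to 1.
size = 5
kernel = [[1.0 / (size * size)] * size for _ in range(size)]

for row in kernel:
    print(["%.2f" % w for w in row])       # five rows of 0.04

total = sum(sum(row) for row in kernel)
print(total)                                # 1.0 up to floating-point error
```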

EmguCV SURF - Determine matched pairs of points

元气小坏坏 Submitted on 2019-12-13 19:44:15
Question: I'm currently modifying EmguCV's (v3.0.0.2157) SurfFeature example (seen here). I'm trying to determine the number of matched pairs of points in order to calculate a percentage of similarity between the input images. From what I understand, this information is stored in the mask variable, but I don't know how to access it. (This question has been asked before here, but the example source code being referenced uses an older version of EmguCV.) Thanks in advance! Answer 1: p determine …
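In that example the mask is a column of bytes, one per observed keypoint, with a nonzero entry for each pair that survived filtering, so the number of matched pairs is simply the count of nonzero mask entries (EmguCV exposes an equivalent as `CvInvoke.CountNonZero`). A language-neutral sketch of the idea; the mask values below are invented:

```python
# Conceptual sketch: the mask has one entry per observed keypoint, and
# a nonzero value marks a pair kept after filtering. Counting nonzero
# entries gives the number of matched pairs; dividing by the total
# number of keypoints gives a crude similarity percentage.

def matched_pair_count(mask):
    return sum(1 for value in mask if value != 0)

def similarity_percent(mask):
    """Matched pairs as a percentage of all observed keypoints."""
    return 100.0 * matched_pair_count(mask) / len(mask) if mask else 0.0

mask = [1, 0, 1, 1, 0, 0, 1, 0]        # invented: 4 of 8 keypoints matched
print(matched_pair_count(mask))         # 4
print(similarity_percent(mask))         # 50.0
```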

Improving the performance of SURF on small images

别来无恙 Submitted on 2019-12-13 18:45:34
Question: Every implementation of SURF I have come across on the web seems to be particularly bad at extracting a useful number of interest points from small images (say, 100x100 or less). I have tried a number of approaches: 1) using various upscaling algorithms (from simple ones like nearest-neighbor to more advanced ones; basically every upscaler ImageMagick provides) to increase the size of small images before analysis; 2) other image processing tweaks to bring out features in the image, such as …
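The first approach from the question can be sketched concretely: nearest-neighbour upscaling just repeats each pixel and each row `factor` times before the detector runs. A minimal, purely illustrative version for a grayscale image stored as a list of rows (not taken from any library):

```python
# Nearest-neighbour upscaling of a small grayscale image before
# feature detection. `image` is a list of pixel rows, `factor` an
# integer scale. Illustrative only.

def upscale_nearest(image, factor):
    """Scale `image` up by `factor` by repeating pixels and rows."""
    out = []
    for row in image:
        stretched = [pixel for pixel in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(stretched))   # fresh copy per repeated row
    return out

small = [[10, 200],
         [30, 40]]
print(upscale_nearest(small, 2))
# [[10, 10, 200, 200], [10, 10, 200, 200], [30, 30, 40, 40], [30, 30, 40, 40]]
```

Note that nearest-neighbour introduces no new gradient information, which is one reason upscaling alone tends not to help blob detectors like SURF much on tiny images.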

How to convert a Matrix<float> to an array?

早过忘川 Submitted on 2019-12-12 02:25:18
Question: I am using Emgu CV (v2.4) with C#. In the following class, I need to convert the descriptor data from Matrix<float> to an array. public void FindSURF(Image<Gray, Byte> modelImage) { VectorOfKeyPoint modelKeyPoints; SURFDetector surfCPU = new SURFDetector(500, false); // extract features from the object image modelKeyPoints = new VectorOfKeyPoint(); Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(modelImage, null, modelKeyPoints); } The SURF features are extracted and stored in a Matrix …
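The descriptor matrix is rows x 64 (or 128 in extended mode) floats, one row per keypoint, so converting it to arrays is just a row-major copy; in Emgu CV the underlying values are also reachable via the matrix's `Data` property as `float[,]`. A language-neutral sketch of the split, with invented sizes and data:

```python
# Split a flat row-major descriptor buffer into per-keypoint rows.
# Real SURF descriptors have 64 (or 128) values per keypoint; the
# 3-value "descriptors" below are invented to keep the example short.

def matrix_to_rows(data, cols):
    """Return one list per keypoint from a flat row-major buffer."""
    return [data[i:i + cols] for i in range(0, len(data), cols)]

flat = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]    # 2 keypoints x 3 values
print(matrix_to_rows(flat, 3))            # [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
```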

Use SURF to match images in OpenCV/EmguCV

送分小仙女□ Submitted on 2019-12-12 01:34:50
Question: I'm working on the source code from here. It seems that the indices variable stores the match information, but I don't know how the information is stored. For example, can you tell me how many matched pairs of points are found? Which point matches which point? Answer 1: Take a look at this line: Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints, indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DToolbox…
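In the old EmguCV example, `indices` is an N x k matrix whose row i holds the indices of the model keypoints matched to observed keypoint i (k nearest neighbours, column 0 being the best), and `mask` marks which rows survived filtering. A conceptual sketch of reading the pairs out, with entirely invented values:

```python
# Row i of `indices` lists the model keypoints matched to observed
# keypoint i; column 0 is the best match. A nonzero mask entry means
# the row survived filtering. All values below are made up.

def matched_pairs(indices, mask):
    """Return (observed_index, model_index) for every surviving row."""
    return [(i, row[0])
            for i, (row, keep) in enumerate(zip(indices, mask)) if keep]

indices = [[7, 3], [2, 9], [5, 1], [0, 4]]   # k = 2 neighbours per keypoint
mask = [1, 0, 1, 1]
pairs = matched_pairs(indices, mask)
print(len(pairs))    # 3 matched pairs
print(pairs)         # [(0, 7), (2, 5), (3, 0)]
```

So the answer to "how many pairs?" is the number of nonzero mask entries, and "which matches which?" is observed keypoint i paired with model keypoint `indices[i][0]`.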

Cannot create custom message

孤街浪徒 Submitted on 2019-12-11 17:39:59
Question: Hi, I am currently trying to create a custom message for an existing package. I created Point_id.msg, but when I include it as a header file in my code, I receive the following error: /home/111/222/333/find_object_2d/src/objects_detected.cpp:7:41: fatal error: find_object_2d/PointObjects.h: No such file or directory compilation terminated. make[2]: *** [find_object_2d/CMakeFiles/objects_detected.dir/src/objects_detected.cpp.o] Error 1 make[1]: *** [find_object_2d/CMakeFiles/objects…
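Two things are worth checking here. First, the error asks for find_object_2d/PointObjects.h while the file created is Point_id.msg; the generated header name follows the .msg file name, so the include and the message name must agree. Second, a "No such file or directory" for a generated message header usually means the .msg file is not wired into the catkin build, so the header is never generated. A sketch of the pieces that normally have to be present in CMakeLists.txt (the package, message, and target names follow the question; the rest is standard catkin boilerplate, not taken from the asker's files):

```cmake
# CMakeLists.txt (sketch; merge into the existing file)
find_package(catkin REQUIRED COMPONENTS message_generation std_msgs)

add_message_files(FILES Point_id.msg)        # the custom message
generate_messages(DEPENDENCIES std_msgs)

catkin_package(CATKIN_DEPENDS message_runtime std_msgs)

# make sure the message headers are generated before the node compiles
add_dependencies(objects_detected ${${PROJECT_NAME}_EXPORTED_TARGETS})
```

package.xml correspondingly needs a build dependency on message_generation and a run/exec dependency on message_runtime.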

Loading saved SURF keypoints

六月ゝ 毕业季﹏ Submitted on 2019-12-11 13:25:55
Question: I am detecting SURF features in an image and then writing them to a YML file. I then want to load the features from the YML file again to try to detect an object, but at the moment I'm having trouble loading the keypoints in order to draw them on an image. I'm writing the keypoints like so: cv::FileStorage fs("keypointsVW.yml", cv::FileStorage::WRITE); write(fs, "keypoints_1", keypoints_1); fs.release(); I am trying to read them like so: cv::FileStorage fs2("keypointsVW.yml", cv::FileStorage::READ); …
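On the OpenCV side, the usual counterpart of the `write()` call shown is to fetch the node and call `cv::read` on it: `cv::FileNode node = fs2["keypoints_1"]; cv::read(node, keypoints_1);`. The round trip itself is just "persist each keypoint's fields, reload, rebuild", which can be sketched in a language-neutral way; the field names mirror cv::KeyPoint but every value below is invented:

```python
import json

# Language-neutral sketch of the keypoint round trip: persist each
# keypoint's fields, reload them, and rebuild the keypoint list.
# Field names mirror cv::KeyPoint; the data is made up.

keypoints = [
    {"x": 12.5, "y": 40.0, "size": 6.0, "angle": 90.0, "response": 0.8},
    {"x": 80.0, "y": 15.5, "size": 4.5, "angle": 0.0, "response": 0.5},
]

saved = json.dumps({"keypoints_1": keypoints})     # the "write" side
loaded = json.loads(saved)["keypoints_1"]          # the "read" side
print(loaded == keypoints)                          # True
```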

Is it possible to get the rotation and scale between two images with only a SURF descriptor of each?

点点圈 Submitted on 2019-12-11 12:03:39
Question: I'm using SURF for landmark recognition. This is the process I have in mind: 1) save beforehand one SURF descriptor for each landmark; 2) a user takes a photo of a landmark (e.g. a building); 3) a SURF descriptor is computed for this image (the photo); 4) this descriptor is compared against each stored landmark descriptor, and the one with the lowest DMatch.distance among the 11 closest feature points is chosen as the recognized landmark; 5) I want to calculate the rotation and scale ratio between the …
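Step 5 needs the (x, y) positions of the matched keypoints in both images, not just the descriptors; a single descriptor per image is not enough on its own. Given matched coordinate pairs, rotation and scale fall out of a least-squares 2-D similarity fit. A sketch with invented coordinates:

```python
import math

# Recover rotation and scale from matched keypoint coordinates via a
# least-squares 2-D similarity fit: centre both point sets, accumulate
# dot and cross terms, then angle = atan2(b, a) and scale = |(a,b)|/norm.
# All sample coordinates below are invented.

def rotation_and_scale(src, dst):
    """Return (angle_radians, scale) of the similarity mapping src -> dst."""
    n = len(src)
    cx_s = sum(x for x, _ in src) / n; cy_s = sum(y for _, y in src) / n
    cx_d = sum(x for x, _ in dst) / n; cy_d = sum(y for _, y in dst) / n
    a = b = norm = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        px, py = xs - cx_s, ys - cy_s      # centred source point
        qx, qy = xd - cx_d, yd - cy_d      # centred destination point
        a += px * qx + py * qy             # dot term
        b += px * qy - py * qx             # cross term
        norm += px * px + py * py
    return math.atan2(b, a), math.hypot(a, b) / norm

# Synthetic check: rotate by 30 degrees, scale by 2, translate a bit.
theta, s = math.radians(30), 2.0
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
dst = [(s * (x * math.cos(theta) - y * math.sin(theta)) + 5.0,
        s * (x * math.sin(theta) + y * math.cos(theta)) - 1.0)
       for x, y in src]
angle, scale = rotation_and_scale(src, dst)
print(round(math.degrees(angle), 3), round(scale, 3))   # 30.0 2.0
```

With real, noisy matches one would first reject outliers (e.g. with a RANSAC-estimated transform) before trusting the fit.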

SiftFeatureDetector.detect() function broken?

假如想象 Submitted on 2019-12-11 09:19:15
Question: I've been trying out SIFT/SURF from online resources and wanted to test it out myself. I first tried without the non-free libraries, using this code: int _tmain(int argc, _TCHAR* argv[]) { Mat img = imread("c:\\car.jpg", 0); Ptr<FeatureDetector> feature_detector = FeatureDetector::create("SIFT"); vector<KeyPoint> keypoints; feature_detector->detect(img, keypoints); Mat output; drawKeypoints(img, keypoints, output, Scalar(255, 0, 0)); namedWindow("meh", CV_WINDOW_AUTOSIZE); imshow("meh", output)…
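A likely culprit: in OpenCV 2.4 SIFT/SURF live in the nonfree module, and `FeatureDetector::create("SIFT")` silently returns an empty Ptr unless `cv::initModule_nonfree()` has been called (and opencv_nonfree linked); the crash then only surfaces at the first `detect()` call. A sketch of that failure mode; the registry below is an invented stand-in for OpenCV's algorithm list, not its real API:

```python
# Why detect() "breaks": the factory returns nothing for an algorithm
# name that was never registered, and the failure only shows up when the
# null result is used. In OpenCV 2.4 the fix is to call
# cv::initModule_nonfree() before FeatureDetector::create("SIFT").
# This registry is an invented stand-in, purely for illustration.

registry = {"ORB": "orb-detector", "FAST": "fast-detector"}   # no "SIFT" yet

def create(name):
    return registry.get(name)            # like create(): None if unknown

detector = create("SIFT")
print(detector is None)                   # True -> a detect() call would crash

registry["SIFT"] = "sift-detector"        # analogue of initModule_nonfree()
print(create("SIFT") is None)             # False: create() now succeeds
```

Checking the returned pointer for emptiness before calling `detect()` turns the crash into a diagnosable error.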