surf

Issue: Bag of Features training with SIFT or SURF for car detection in video with OpenCV + Python

Submitted by 主宰稳场 on 2020-01-01 19:59:34
Question: I am trying to dump keypoints of cars with SIFT or SURF and match these keypoints against a video in order to detect cars. Keypoints are more convenient to use than Haar cascades, because a cascade would need a lot of images to train, for example 5000, which takes a lot of computation. Keypoints from SURF or SIFT are scale invariant, so they will be almost the same for every car. The code for dumping keypoints into a txt file is: import cv2 import numpy as np import os import cPickle
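A minimal Python sketch of the dumping step, assuming a recent opencv-contrib build (cv2.SIFT_create; SURF needs a non-free build) and Python 3's pickle in place of cPickle; file names are placeholders. cv2.KeyPoint objects cannot be pickled directly, so only plain keypoint attributes plus the descriptor matrix are stored:

    # Sketch: detect SIFT keypoints on a car image and serialize features to disk.
    # Assumes opencv-contrib-python; file paths are placeholders.
    import pickle
    import cv2

    sift = cv2.SIFT_create()  # or cv2.xfeatures2d.SURF_create(400) on non-free builds

    img = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = sift.detectAndCompute(img, None)

    # KeyPoint objects are not picklable, so store plain tuples plus the descriptor array.
    kp_data = [(kp.pt, kp.size, kp.angle, kp.response, kp.octave, kp.class_id)
               for kp in keypoints]

    with open("car_features.pkl", "wb") as f:
        pickle.dump({"keypoints": kp_data, "descriptors": descriptors}, f)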

Why do we need crossCheckMatching for features?

Submitted by 限于喜欢 on 2020-01-01 00:43:06
Question: I am reading a lot of posts about object detection using feature extraction (SIFT etc.). After computing descriptors on both images, to get good matches they use crossCheckMatching (found in samples/cpp/descriptor_extractor_matcher.cpp). Could someone explain this choice? Why do we need to evaluate both descriptorMatcher->knnMatch( descriptors1, descriptors2, matches12, knn ); and descriptorMatcher->knnMatch( descriptors2, descriptors1, matches21, knn );? I don't understand it. Computing the
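The purpose of running knnMatch in both directions is to keep only mutual best matches and discard one-way matches, which are usually outliers. A hedged Python sketch of that cross-check idea (function and variable names are illustrative):

    # Sketch of cross-check matching: keep a match only if it is the best match
    # in both directions (1 -> 2 and 2 -> 1). des1/des2 are assumed descriptor matrices.
    import cv2

    def cross_check_match(des1, des2):
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches12 = matcher.match(des1, des2)  # best match in image 2 for each descriptor of image 1
        matches21 = matcher.match(des2, des1)  # best match in image 1 for each descriptor of image 2

        # Keep m only if the reverse match of its train descriptor points back to its query descriptor.
        best21 = {m.queryIdx: m.trainIdx for m in matches21}
        return [m for m in matches12 if best21.get(m.trainIdx) == m.queryIdx]

OpenCV's BFMatcher can also apply the same filter internally when constructed with crossCheck=True.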

Converting Mat to KeyPoint?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-31 05:00:48
Question: I'm writing both descriptors (SurfDescriptorExtractor output) and keypoints (SurfFeatureDetector output) to an XML file. Before writing, the keypoints (std::vector<KeyPoint>) are converted to Mat (following this: convert keypoints to mat or save them to text file opencv). For descriptors this isn't necessary; they are Mat already. So both are saved as Mat, and there's no problem reading either. But when using a FlannBasedMatcher and then drawMatches, this method asks for keypoint data. The question is: how
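A hedged Python sketch of the round trip behind that conversion: pack the KeyPoint fields into a plain numeric array for storage, then rebuild cv2.KeyPoint objects so drawMatches can use them again; the column order is an arbitrary choice for illustration:

    # Sketch: convert keypoints to a numeric array (for writing) and back (for drawMatches).
    import cv2
    import numpy as np

    def keypoints_to_array(keypoints):
        # One row per keypoint: x, y, size, angle, response, octave, class_id
        return np.array([[kp.pt[0], kp.pt[1], kp.size, kp.angle,
                          kp.response, kp.octave, kp.class_id] for kp in keypoints],
                        dtype=np.float32)

    def array_to_keypoints(arr):
        # Rebuild cv2.KeyPoint objects from the stored rows (positional constructor args).
        return [cv2.KeyPoint(float(r[0]), float(r[1]), float(r[2]), float(r[3]),
                             float(r[4]), int(r[5]), int(r[6])) for r in arr]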

OpenCV image comparison in Android

Submitted by 依然范特西╮ on 2019-12-28 02:35:27
Question: [EDIT] I have devised some code for image comparison. The matching part is still a bit flawed and I would love some assistance. The project can be found on GitHub. I have these two images, Img1 and Img2: When I use the following commands in OpenCV Mat img1 = Highgui.imread("mnt/sdcard/IMG-20121228.jpg"); Mat img2 = Highgui.imread("mnt/sdcard/IMG-20121228-1.jpg"); try{ double l2_norm = Core.norm( img1, img2 ); tv.setText(l2_norm+""); } catch(Exception e) { //image is not a duplicate } I get a
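Core.norm throws when the two Mats differ in size or type, which is one reason the catch branch can fire here. A hedged Python sketch of the same L2-norm comparison with the sizes forced to match first; the file names and the normalization are illustrative:

    # Sketch: L2-norm comparison of two images after forcing them to the same size/type.
    # A small norm suggests near-duplicates; feature matching is more robust to viewpoint changes.
    import cv2

    img1 = cv2.imread("IMG-20121228.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("IMG-20121228-1.jpg", cv2.IMREAD_GRAYSCALE)

    # cv2.norm(a, b) requires identical shapes, so resize the second image to match the first.
    img2 = cv2.resize(img2, (img1.shape[1], img1.shape[0]))

    l2 = cv2.norm(img1, img2, cv2.NORM_L2)
    similarity = 1.0 - l2 / (255.0 * (img1.size ** 0.5))  # rough normalization to [0, 1]
    print("L2 norm:", l2, "similarity:", similarity)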

Hessian-affine detector in OpenCV

Submitted by 南楼画角 on 2019-12-25 04:57:09
Question: I've read about this detector in many papers and articles (though I don't know it in detail), and I've read that it is much better than DoG in many situations. Initially I thought the Hessian-affine detector was the SURF detector, but they're not the same thing, right? Is there any OpenCV implementation? Answer 1: Sorry, there is no implementation of this detector in OpenCV (http://code.opencv.org/issues/1628). No, it's not the same thing. Long story short, SURF is a "family" of detectors and descriptors, and there

Finding an interest point in the SURF detector algorithm

Submitted by 百般思念 on 2019-12-25 02:29:53
Question: I have tried hard but I am not able to figure out how to find a single point of interest with the SURF algorithm in Emgu CV. I wrote code for SURF, and the problem is that sometimes it enters the if statement near my numbered section "1" and sometimes it does not, depending on the image. Why is that? On that basis the homography is computed and comes out non-null, and then I am able to draw a circle or lines, which also have a problem: the circle or rectangle is drawn at the point (0,0) on the image. Please help me. I will
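The usual pipeline behind such code is: match descriptors, keep only good matches, compute the homography only when enough matches survive, then project the model's corners into the scene; a degenerate homography is what typically puts the box at (0,0). A hedged Python/OpenCV sketch of that pipeline (the Emgu CV calls mirror these), with ORB standing in for SURF and illustrative thresholds and file names:

    # Sketch: match a model image to a scene, compute a homography when enough
    # good matches exist, and project the model's corners to draw its outline.
    import cv2
    import numpy as np

    MIN_MATCHES = 10  # illustrative: with fewer matches the homography is unreliable

    model = cv2.imread("model.png", cv2.IMREAD_GRAYSCALE)  # placeholder reference image
    scene = cv2.imread("scene.png")                         # placeholder scene image
    scene_gray = cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY)

    detector = cv2.ORB_create(1000)  # stand-in for SURF, which needs a non-free build
    kp1, des1 = detector.detectAndCompute(model, None)
    kp2, des2 = detector.detectAndCompute(scene_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    if len(matches) >= MIN_MATCHES:
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is not None:
            # Project the model's corners into the scene; if H is degenerate the box
            # collapses (e.g. toward (0,0)), which is the symptom described above.
            h, w = model.shape
            corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
            box = cv2.perspectiveTransform(corners, H)
            cv2.polylines(scene, [np.int32(box)], True, (0, 255, 0), 3)
            cv2.imwrite("result.png", scene)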

How to output the following result to the terminal

Submitted by 南楼画角 on 2019-12-25 01:17:49
Question: I am using this code for position calculation. How could I show the result in the terminal? #include <ros/ros.h> #include <std_msgs/Float32MultiArray.h> #include <opencv2/opencv.hpp> #include <QTransform> #include <geometry_msgs/Point.h> #include <std_msgs/Int16.h> #include <find_object_2d/PointObjects.h> #include <find_object_2d/Point_id.h> #define dZ0 450 #define alfa 40 #define h 310 #define d 50 #define PI 3.14159265 void objectsDetectedCallback(const std_msgs::Float32MultiArray& msg
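In roscpp the straightforward answer is to print inside the callback with ROS_INFO or std::cout. Below is a hedged rospy (Python) sketch of the same idea; the topic name and the message layout are assumptions for illustration:

    # Sketch: a rospy node that subscribes to the detections topic and prints each
    # message to the terminal. Topic name and message layout are assumptions.
    import rospy
    from std_msgs.msg import Float32MultiArray

    def objects_detected_callback(msg):
        # rospy.loginfo echoes to the terminal running the node (and to the ROS log).
        rospy.loginfo("objects: %s", list(msg.data))

    if __name__ == "__main__":
        rospy.init_node("object_position_printer")
        rospy.Subscriber("objects", Float32MultiArray, objects_detected_callback)
        rospy.spin()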

Error in implementing realtime camera-based GPU_SURF in OpenCV

Submitted by ⅰ亾dé卋堺 on 2019-12-24 12:18:40
Question: Since the CPU-based SURF in OpenCV was very slow for a realtime application, we decided to use GPU_SURF. After setting up opencv_gpu we wrote the following code: #include <iostream> #include <iomanip> #include <windows.h> #include "opencv2/contrib/contrib.hpp" #include "opencv2/objdetect/objdetect.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include "opencv2/gpu/gpu.hpp" #include "opencv2/core/core.hpp" #include "opencv2/features2d/features2d.hpp" #include
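For reference, a hedged Python sketch of the per-frame structure such a realtime loop usually has, using the CPU SURF from an opencv-contrib non-free build (the CUDA SURF class follows the same detect-per-frame pattern but requires a CUDA-enabled build); the Hessian threshold and camera index are illustrative:

    # Sketch: realtime loop detecting SURF keypoints on each camera frame.
    # Requires an opencv-contrib build with the non-free SURF module enabled.
    import cv2

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    cap = cv2.VideoCapture(0)  # default camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = surf.detectAndCompute(gray, None)
        vis = cv2.drawKeypoints(frame, keypoints, None, (0, 255, 0))
        cv2.imshow("SURF keypoints", vis)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()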

Clustering SURF features of an image dataset using k-means algorithm

Submitted by 為{幸葍}努か on 2019-12-24 08:31:09
Question: I want to implement Bag of Visual Words in MATLAB. First I read images from the dataset directory, then I detect and extract SURF features using the two functions detectSURFFeatures and extractFeatures. I store each image's features in a cell array, and finally I want to cluster them using the k-means algorithm, but I can't fit this data into the k-means function input. How can I feed SURF features into the k-means clustering algorithm in MATLAB? Here is my sample code, which reads images from files and
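Whatever the language, the fix is the same: stack all per-image descriptor matrices into one flat (number of features) x (descriptor length) matrix before clustering, since k-means expects a single matrix rather than a cell array of matrices. A hedged Python sketch of that step (ORB standing in for SURF; in MATLAB, vertcat over the cell array followed by kmeans plays the same role); paths and the vocabulary size are placeholders:

    # Sketch: build a bag-of-visual-words vocabulary by stacking per-image descriptors
    # and clustering them with k-means.
    import cv2
    import numpy as np

    image_paths = ["img1.jpg", "img2.jpg"]  # placeholder dataset
    detector = cv2.ORB_create()

    all_descriptors = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = detector.detectAndCompute(img, None)
        if des is not None:
            all_descriptors.append(des.astype(np.float32))

    # k-means wants one flat (num_features x descriptor_length) matrix, not a list per image.
    data = np.vstack(all_descriptors)

    K = 100  # vocabulary size, illustrative
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, K, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    # 'centers' (K x descriptor_length) is the visual vocabulary.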

How to detect an object in a video using SURF and C?

Submitted by 為{幸葍}努か on 2019-12-24 03:27:08
Question: I used a SURF program from a tutorial to detect objects in a video frame, but it detects all the keypoints and descriptors. How do I change the program so it detects only a specific object? CvSeq *imageKeypoints = 0, *imageDescriptors = 0; int i; CvSURFParams params = cvSURFParams(500, 1); cvExtractSURF( image, 0, &imageKeypoints, &imageDescriptors, storage, params ); printf("Image Descriptors: %d\n", imageDescriptors->total); for( i = 0; i < imageKeypoints->total; i++ ) { CvSURFPoint* r =
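Extracting keypoints from the frame alone only yields generic interest points; detecting a specific object additionally needs keypoints from a reference image of that object and a matching step between the two descriptor sets. A hedged Python sketch of that structure (the old C API in the excerpt supports the same extract-then-match flow), with ORB standing in for SURF and a placeholder file name and threshold:

    # Sketch: detect a specific object in a video by matching the object's reference
    # descriptors against each frame's descriptors.
    import cv2

    detector = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    obj_img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # reference image of the object
    obj_kp, obj_des = detector.detectAndCompute(obj_img, None)

    cap = cv2.VideoCapture("video.avi")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame_kp, frame_des = detector.detectAndCompute(gray, None)
        if frame_des is None:
            continue
        matches = matcher.match(obj_des, frame_des)
        good = [m for m in matches if m.distance < 50]  # illustrative distance threshold
        print("good matches in this frame:", len(good))  # many matches -> object likely present

    cap.release()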