sift

What's the order of parameters in vl_ubcmatch?

Submitted by 谁说胖子不能爱 on 2019-12-13 18:25:18
Question: I am using the VLFEAT implementation of SIFT to compute SIFT descriptors on two sets of images: queries and database images. Given a set of queries, I'd like to obtain the closest descriptors from a big database of descriptors, for which I use vl_ubcmatch. Given the vl_ubcmatch syntax MATCHES = vl_ubcmatch(DESCR1, DESCR2), I obtain different results if I input the query descriptors first and the database descriptors second, or the other way around. Which is the correct syntax? 1)
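The asymmetry is expected: a one-sided ratio test searches the second set for the neighbors of each descriptor in the first set, so swapping the arguments changes which side the test is applied from. A minimal NumPy sketch of that one-directional logic (the row-wise layout, array names, and threshold are illustrative assumptions, not VLFEAT's API):

    import numpy as np

    def ratio_test_match(descr1, descr2, thresh=1.5):
        # One-directional matching: for each row of descr1, find its two
        # nearest rows in descr2 and accept the nearest only if it beats
        # the runner-up by the threshold (Lowe's ratio test).
        matches = []
        for i, d in enumerate(descr1):
            dists = np.linalg.norm(descr2 - d, axis=1)
            j1, j2 = np.argsort(dists)[:2]
            if thresh * dists[j1] < dists[j2]:
                matches.append((i, j1))
        return matches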

Content Based Image Retrieval (CBIR): Bag of Features or Descriptors Matching?

Submitted by 泄露秘密 on 2019-12-13 00:19:20
Question: I've read a lot of papers about the Nearest Neighbor problem, and it seems that indexing techniques like randomized kd-trees or LSH have been successfully used for Content Based Image Retrieval (CBIR), which can operate in high-dimensional space. One really common experiment is: given a SIFT query vector, find the most similar SIFT descriptor in the dataset. If we repeat the process with all the detected SIFT descriptors, we can find the most similar image. However, another popular approach is
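As a concrete illustration of the descriptor-matching variant, here is a sketch that votes over nearest neighbors with SciPy's kd-tree (the db_descriptors and db_image_ids arrays are hypothetical inputs; a production system would swap in an approximate index such as randomized kd-trees or LSH):

    import numpy as np
    from scipy.spatial import cKDTree

    # Assumed given:
    # db_descriptors: (N, 128) SIFT descriptors pooled from every database image
    # db_image_ids:   (N,) integer id of the image each descriptor came from
    tree = cKDTree(db_descriptors)

    def most_similar_image(query_descriptors):
        # Each query descriptor votes for the image owning its nearest neighbor.
        _, nn_idx = tree.query(query_descriptors, k=1)
        votes = np.bincount(db_image_ids[nn_idx])
        return votes.argmax()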

Implementing Bag of Words object recognition using VLFEAT

Submitted by 你离开我真会死。 on 2019-12-12 08:59:44
Question: I am trying to implement a BOW object recognition code in MATLAB. The process is slightly complicated and I've had a lot of trouble finding proper documentation on the procedure. So could someone double-check whether my plan below makes sense? I'm using the VLFEAT library extensively here. Training:

1. Extract SIFT image descriptors with VLSIFT
2. Quantize the descriptors with k-means (vl_hikmeans)
3. Take the quantized descriptors and create histograms (VL_HIKMEANSHIST)
4. Create an SVM from the histograms (VL
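For reference, the same four training steps look like this in Python, with OpenCV and scikit-learn standing in for VLSIFT/vl_hikmeans (flat k-means instead of hierarchical k-means; train_paths, train_labels, and the vocabulary size are assumptions):

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    K = 200  # vocabulary size (an assumption)
    # train_paths / train_labels: assumed lists of image paths and class labels

    def sift_descriptors(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = cv2.SIFT_create().detectAndCompute(gray, None)
        return des

    # Steps 1-2: extract descriptors from all training images, cluster a vocabulary
    all_des = np.vstack([sift_descriptors(p) for p in train_paths])
    kmeans = KMeans(n_clusters=K).fit(all_des)

    # Step 3: one normalized visual-word histogram per image
    def bow_histogram(path):
        words = kmeans.predict(sift_descriptors(path))
        hist = np.bincount(words, minlength=K).astype(float)
        return hist / hist.sum()

    X = np.array([bow_histogram(p) for p in train_paths])

    # Step 4: train the classifier on the histograms
    clf = SVC(kernel='linear').fit(X, train_labels)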

OpenCV-Python dense SIFT

Submitted by ☆樱花仙子☆ on 2019-12-12 07:14:23
Question: OpenCV has very good documentation on generating SIFT descriptors, but this is a version of "weak SIFT", where the key points are detected by the original Lowe algorithm. The OpenCV example reads something like:

    img = cv2.imread('home.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT()
    kp = sift.detect(gray, None)
    kp, des = sift.compute(gray, kp)

What I'm looking for is strong/dense SIFT, which does not detect keypoints but instead calculates SIFT descriptors for a set of patches
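A common way to get dense SIFT in OpenCV is to skip detect() entirely and hand compute() a regular grid of keypoints; a sketch (the grid step and keypoint size are arbitrary choices, and cv2.SIFT_create() is the modern spelling of the cv2.SIFT() call quoted above):

    import cv2

    img = cv2.imread('home.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    step = 8   # grid spacing in pixels (an assumption)
    size = 16  # keypoint diameter handed to the descriptor (an assumption)
    kp = [cv2.KeyPoint(float(x), float(y), float(size))
          for y in range(0, gray.shape[0], step)
          for x in range(0, gray.shape[1], step)]

    sift = cv2.SIFT_create()
    kp, des = sift.compute(gray, kp)  # one 128-dim descriptor per grid point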

How to find descriptors for manually defined landmarks with OpenCV

Submitted by 核能气质少年 on 2019-12-12 04:49:09
Question: I am trying to generate a "SIFT" descriptor (SIFT is only an example) for some manually defined landmarks. When I try to do:

    siftDetector(grayImage, Mat(), manualLandmarks, descriptors, true);

the result is always 0 (zero) for the descriptors. I have declared manualLandmarks as std::vector<cv::KeyPoint>, and I have changed the x and y coordinates for each item in the vector (the size, octave and angle values are not changed). Is there a way to manually define the image coordinates and compute
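One frequent cause of all-zero output in this situation is leaving each KeyPoint's size at zero, so the descriptor has no support region to sample from. A sketch of the fix, shown in Python since the issue lies in keypoint construction rather than the language (the file name, coordinates, and size value are hypothetical):

    import cv2

    gray = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
    landmarks = [(120.0, 85.0), (200.5, 140.25)]  # manual (x, y) coordinates

    # Give every keypoint a non-zero size: the descriptor is computed over a
    # neighborhood of roughly this diameter, so size=0 yields empty/zero output.
    kp = [cv2.KeyPoint(x, y, 31.0) for (x, y) in landmarks]

    sift = cv2.SIFT_create()
    kp, des = sift.compute(gray, kp)  # compute() only, no detection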

SIFT implementation in OpenCV 2.4.6 on Ubuntu

Submitted by 假装没事ソ on 2019-12-11 23:25:47
Question: I am trying to compute correspondences between 2 images, and I am actually interested in the number of correspondence points rather than the correspondences themselves, so that I can use it to get the best-matching image. This is my code:

    #include <iostream>
    #include <vector>
    #include <string>
    #include "cv.h"
    #include "highgui.h"
    #include "opencv2/imgproc/imgproc.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/legacy/legacy.hpp"
    #include "opencv2/objdetect/objdetect.hpp"
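Since only the number of good correspondences matters here, a sketch of the usual ratio-test count follows, in Python for brevity (the 2.4.x C++ API is analogous; the 0.75 ratio is a conventional choice, not a requirement):

    import cv2

    def match_count(path1, path2, ratio=0.75):
        img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        _, des1 = sift.detectAndCompute(img1, None)
        _, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Keep a match only if it is clearly better than the runner-up.
        pairs = matcher.knnMatch(des1, des2, k=2)
        return sum(1 for m, n in pairs if m.distance < ratio * n.distance)

    # Pick the database image with the highest count as the best match.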

Assertion error while implementing the SIFT algorithm: OpenCV

Submitted by 橙三吉。 on 2019-12-11 21:31:04
Question: I am using OpenCV 2.4.8 and Visual Studio 2013, and I am getting a run-time error. My main code is this:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include "SIFT.h"
    #include <iostream>
    #include <stdio.h>
    #include <conio.h>

    using namespace cv;
    using namespace std;

    int main() {
        cout << "hello";
        Mat image = imread("abc.jpg", 0);
        cout << image.channels() << endl;
        SIFT controller(image);
        controller.DoSIFT();
        waitKey(100000);
    }

and my header file
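Without the header it is hard to be certain, but a very common source of OpenCV run-time assertions is an empty Mat from a failed imread (wrong path or working directory), which this main() never checks. A sketch of the guard, in Python (in C++ the equivalent test is image.empty()):

    import cv2

    image = cv2.imread('abc.jpg', 0)
    if image is None:
        # imread does not raise on a missing/unreadable file; it returns None
        # (an empty Mat in C++), and the next OpenCV call asserts instead.
        raise FileNotFoundError('abc.jpg not found or unreadable')
    print(image.shape)  # safe to continue with SIFT processing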

Null/No SIFT descriptors and keypoints generated in Python

Submitted by 寵の児 on 2019-12-11 17:52:21
Question: I am trying to create a codebook from a set of image patches. I have divided the images (Caltech 101) into 20 x 20 image patches. I want to create a SIFT descriptor for each patch, but for some of the image patches it does not return any descriptor/keypoint. I have tried using OpenCV and vlfeat; the behavior is the same with either library. Following is my code using OpenCV:

    sift = cv2.SIFT()
    img = cv2.imread('patch_temp.jpg', 0)
    imgptch = cv2.imread('image_patch.jpg', cv2.CV_LOAD_IMAGE
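On a 20 x 20 patch, the DoG detector often finds no stable extrema, so detect-then-compute legitimately comes back empty. A common workaround is to describe one fixed keypoint at the patch centre instead of detecting; a sketch (the keypoint size of 16 is an assumption):

    import cv2

    patch = cv2.imread('image_patch.jpg', cv2.IMREAD_GRAYSCALE)

    # Skip detection entirely: place one keypoint at the patch centre with a
    # support size covering the patch, and compute its descriptor directly.
    kp = [cv2.KeyPoint(patch.shape[1] / 2.0, patch.shape[0] / 2.0, 16.0)]
    sift = cv2.SIFT_create()
    kp, des = sift.compute(patch, kp)  # des has shape (1, 128)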

How to use the SIFT algorithm with a color-inverted image

Submitted by 佐手、 on 2019-12-11 16:49:30
Question: For example, I have two images, where the first one is regular and the second one is its color inversion (I mean 255 minus the pixel value). I've applied the SIFT algorithm to both of them using OpenCV and the Lowe paper, so now I have the key points and descriptors of each image. The KeyPoint positions do match, but the KeyPoint orientations and descriptor values do not, because of the color inversion. I'm curious whether anybody has tried to solve such a problem? In addition, here is an example of the gradients: I'm using OpenCV C++
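Inverting intensities flips every image gradient by 180 degrees, so the assigned orientations and the gradient-histogram descriptors genuinely differ even though keypoint positions coincide. One pragmatic workaround, sketched below with hypothetical file names, is to describe both the image and its inversion and match against the combined set:

    import cv2
    import numpy as np

    gray = cv2.imread('query.jpg', cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()

    # Descriptors of the image and of its intensity inversion (255 - pixel).
    kp1, des_normal = sift.detectAndCompute(gray, None)
    kp2, des_inverted = sift.detectAndCompute(255 - gray, None)

    # Matching a target against the stacked set makes the pipeline
    # indifferent to which polarity the other image has.
    db = np.vstack([des_normal, des_inverted])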

How to draw bounding box on best matches?

Submitted by 徘徊边缘 on 2019-12-11 10:45:09
Question: How can I draw a bounding box on the best matches from BF MATCHER using Python?

Answer 1: Here is a summary of an approach that should be a proper solution:

1. Detect keypoints and descriptors on the query image (img1)
2. Detect keypoints and descriptors on the target image (img2)
3. Find the matches or correspondences between the two sets of descriptors
4. Use the best 10 matches to form a transformation matrix
5. Transform the rectangle around img1 based on the transformation matrix
6. Add an offset to put the bounding box
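A sketch of those steps with OpenCV (file names are hypothetical; this follows the classic feature-homography recipe with findHomography and perspectiveTransform):

    import cv2
    import numpy as np

    img1 = cv2.imread('query.jpg', cv2.IMREAD_GRAYSCALE)   # query
    img2 = cv2.imread('target.jpg', cv2.IMREAD_GRAYSCALE)  # target

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    best = sorted(matches, key=lambda m: m.distance)[:10]  # best 10 matches

    src = np.float32([kp1[m.queryIdx].pt for m in best]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in best]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project img1's outline into img2 and draw it as the bounding box.
    h, w = img1.shape
    corners = np.float32([[0, 0], [0, h], [w, h], [w, 0]]).reshape(-1, 1, 2)
    box = cv2.perspectiveTransform(corners, H)
    cv2.polylines(img2, [np.int32(box)], True, 255, 3)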