feature-detection

Is there a specific webkit feature I can detect to check if a user is using a webkit browser?

Submitted by 不羁的心 on 2019-12-11 03:46:58
Question: I've used the code below when I need to detect Firefox: var firefox = !(window.mozInnerScreenX == null); I'm curious whether there is something similar for detecting WebKit browsers without checking the user-agent string, like checking a specific feature that only WebKit browsers have. Answer 1: Go to the console in Chrome and type window.webkit; the autocompletion will show you a stack of properties that should be WebKit-specific (e.g. webkitRequestAnimationFrame, webkitAudioContext, etc.). Answer 2: From my

Facial expression classification using k-means

Submitted by 不想你离开。 on 2019-12-11 01:48:10
Question: My method for classifying facial expressions using k-means is: Use OpenCV to detect the face in the image. Use ASM and Stasm to get the facial feature points. Calculate the distances between facial features (as shown in the picture); there will be 5 distances. Calculate the centroid of each distance for each facial expression (e.g. for distance D1 there are 7 centroids, one for each expression: 'happy, angry...'). Use 5 k-means, one k-means per distance, and each k-means will have as a result the
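The per-distance clustering step described above can be sketched in pure Python. This is a minimal sketch, not the poster's code: the function name kmeans_1d, the toy distance values, and the seeds are all made up for illustration; each of the 5 distances would get its own run with k=7 (one centroid per expression).

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Minimal 1-D k-means: returns k centroids for one facial distance."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[i].append(v)
        # Recompute each centroid as the mean of its cluster
        # (keep the old centroid if the cluster is empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Toy data: one distance (e.g. D1) measured over many faces, drawn
# around 7 expression means, then clustered into k=7 centroids.
rng = random.Random(1)
d1_values = [rng.gauss(mu, 0.1) for mu in range(7) for _ in range(10)]
centroids_d1 = kmeans_1d(d1_values, k=7)
```

A new face would then be assigned, per distance, to its nearest centroid, and the 5 per-distance votes combined into one expression label.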

How to apply Ratio Test in order to remove outliers in a multiple object detection matcher?

Submitted by 无人久伴 on 2019-12-10 19:39:07
Question: I'm developing an iPad prototype that needs to detect multiple instances of a known object. I'm using SIFT for feature detection and descriptor extraction, and a BruteForce matcher. The first problem I had to solve was how to cluster the matches in order to separate each object instance in the scene. This was done by finding the nearest matches, using two or three steps for best results. Now I'm working on how to reduce the outliers, and I'm trying to use the Ratio Test to remove them.
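Lowe's ratio test itself is simple to sketch. With OpenCV it is typically applied to the output of matcher.knnMatch(desc1, desc2, k=2), keeping a match m only if m.distance < 0.75 * n.distance. The sketch below works on plain (best, second-best) distance pairs so it is self-contained; the pair values are made up:

```python
def ratio_test(knn_matches, ratio=0.75):
    """Lowe's ratio test: keep a match only if its best distance is
    clearly smaller than the second-best (ambiguous matches are likely
    outliers and are dropped)."""
    good = []
    for best, second in knn_matches:
        if best < ratio * second:
            good.append(best)
    return good

# Each pair is (best distance, second-best distance) for one keypoint,
# as a k=2 nearest-neighbour match would return (values are made up).
matches = [(0.2, 0.9), (0.5, 0.55), (0.1, 0.8), (0.4, 0.45)]
kept = ratio_test(matches)   # only the unambiguous matches survive
```

Note that with multiple instances of the same object in the scene, the second-best match may legitimately land on another instance, so the ratio test is usually applied after (or per) cluster rather than globally.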

How to identify changes in two images of same object

Submitted by 最后都变了- on 2019-12-10 18:26:06
Question: I have two images which I know represent the exact same object. In the picture below, they are referred to as Reference and Match. The image Match can undergo the following transformations compared to Reference: The object may have changed its appearance locally by addition (e.g. dirt or lettering added to the side) or omission (a side mirror has been taken out). It may be stretched or reduced in size horizontally only (it is not resized in the vertical direction). Portions of the Reference image are not present in
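The horizontal-only constraint above can be exploited directly: fit a 1-D affine map x' = s*x + t to the x-coordinates of matched keypoints by least squares, then compare the images after undoing the stretch. This is a hedged sketch, not the poster's approach; the matched pairs below are made up:

```python
def horizontal_scale(pairs):
    """Closed-form least-squares fit of x' = s*x + t over matched point
    pairs, since Match is only stretched/translated horizontally."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(xp for _, xp in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * xp for x, xp in pairs)
    s = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    t = (sy - s * sx) / n
    return s, t

# Toy matches: x in Reference vs x' in Match,
# stretched by 1.5 and shifted by 10 pixels.
pairs = [(0, 10), (100, 160), (200, 310), (300, 460)]
s, t = horizontal_scale(pairs)
```

In practice the fit should be robust (e.g. RANSAC over the pairs), since some matches will land on the locally added/removed regions.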

OpenCV 3.0.0 SurfFeatureDetector and SurfDescriptorExtractor Errors

Submitted by 夙愿已清 on 2019-12-10 16:38:21
Question: I am attempting to implement OpenCV 3.0.0 SURF feature description and detection, but after running the sample code on the OpenCV site, I receive a load of errors, all related to SURF. Any idea of what could be going wrong? Thanks! #include <stdio.h> #include <iostream> #include "opencv2/core.hpp" #include "opencv2/features2d.hpp" #include "opencv2/highgui.hpp" #include "opencv2/calib3d.hpp" #include "opencv2/xfeatures2d.hpp" #include <opencv2/nonfree/nonfree.hpp> using namespace cv; using

Susan Corner Detector implementation

Submitted by 橙三吉。 on 2019-12-10 12:27:23
Question: I have not understood the code or the way the function is handled. Can you elaborate on the function declaration? fun = @(img) susanFun(img); map = nlfilter(img,maskSz,fun); Also, in the SUSAN corner detector we have only 2 threshold values, "t" and "g", but here we have "thGeo, thGeo1, thGeo2, thT, thT1". I am not able to understand the method employed here: function [ map r c ] = susanCorner( img ) %SUSAN Corner detection using SUSAN method. % [R C] = SUSAN(IMG) Rows and columns of corner points are
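For context: MATLAB's nlfilter slides a maskSz window over the image and applies fun (here susanFun) to each window, which is how the per-pixel SUSAN response is evaluated. The classic detector really does use just the two thresholds from the paper: t (brightness similarity) and g (geometric threshold on the USAN area); the extra thGeo1/thT1-style thresholds in the posted code are implementation refinements specific to that author. A minimal sketch of the classic two-threshold version, with a square window instead of the usual circular mask and made-up toy data:

```python
def susan_corners(img, t=27, radius=1):
    """Per pixel, count neighbours whose brightness is within t of the
    nucleus (the USAN area); a pixel whose USAN area falls below the
    geometric threshold g is marked as a corner."""
    h, w = len(img), len(img[0])
    n_max = (2 * radius + 1) ** 2 - 1
    g = n_max // 2          # corner threshold (g = n_max/2 in the paper)
    corners = []
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            nucleus = img[r][c]
            area = sum(1
                       for dr in range(-radius, radius + 1)
                       for dc in range(-radius, radius + 1)
                       if (dr or dc) and abs(img[r + dr][c + dc] - nucleus) <= t)
            if area < g:
                corners.append((r, c))
    return corners

# Toy image: a bright square on a dark background; the square's corner
# pixel has the smallest USAN area and is the only detection.
img = [[255 if r >= 3 and c >= 3 else 0 for c in range(6)] for r in range(6)]
corners = susan_corners(img)
```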

OpenCV density of feature points

Submitted by  ̄綄美尐妖づ on 2019-12-10 12:18:12
Question: Using the OpenCV SIFT algorithm, I am able to get the matching and non-matching feature points between 2 images. My solution is here. The distribution of matched (green) and non-matched (red) feature points is shown below. (I can't reveal the actual image, but it contains mostly text.) I want to calculate a density function for the matching and non-matching points on an image (i.e. given an nXn area on the image, the density function should give how many matching points are present inside this nXn
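The simplest density estimate of the kind asked about is a grid count: tile the image into n-by-n cells and count points per cell (dividing by the cell area gives a piecewise-constant density). A minimal sketch with made-up keypoint coordinates; the same function would be called once for the matched and once for the non-matched points:

```python
def point_density(points, img_w, img_h, cell):
    """Count feature points per cell x cell block; dividing each count
    by cell*cell gives a piecewise-constant density over the image."""
    cols = (img_w + cell - 1) // cell   # ceil division for edge cells
    rows = (img_h + cell - 1) // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        grid[y // cell][x // cell] += 1
    return grid

# Made-up matched keypoint coordinates on a 100x100 image, 50-pixel cells.
matched = [(10, 12), (15, 20), (80, 85), (90, 90), (95, 60)]
density = point_density(matched, 100, 100, 50)
```

For a smooth density rather than a blocky one, a kernel density estimate over the point coordinates is the usual alternative.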

Feature extraction from neural networks

Submitted by 跟風遠走 on 2019-12-09 16:40:56
Question: I'm doing simple recognition of letters and digits with neural networks. Up to now I have used every pixel of the letter's image as an input to the network. Needless to say, this approach produces networks which are very large. So I'd like to extract features from my images and use them as inputs to the NNs. My first question is what properties of the letters are good for recognizing them. My second question is how to represent these features as inputs to neural networks. For example, I may have detected all
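One classic answer to both questions is zoning: split the glyph into a small grid of cells and feed the network one number per cell (the fraction of ink pixels), which shrinks the input dramatically while staying tolerant to small shifts. A minimal sketch, with a made-up toy glyph; the zone count is a free parameter:

```python
def zoning_features(img, zones=4):
    """Split a binary letter image into zones x zones cells and use the
    ink ratio of each cell as one network input, instead of raw pixels."""
    h, w = len(img), len(img[0])
    zh, zw = h // zones, w // zones
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            cell = [img[r][c]
                    for r in range(zr * zh, (zr + 1) * zh)
                    for c in range(zc * zw, (zc + 1) * zw)]
            feats.append(sum(cell) / len(cell))   # fraction of on-pixels
    return feats

# 8x8 toy glyph: a vertical bar on the left (like a lowercase 'l').
img = [[1 if c < 2 else 0 for c in range(8)] for r in range(8)]
features = zoning_features(img, zones=4)   # 16 inputs instead of 64
```

Each feature is already a number in [0, 1], so the vector can be fed to the network directly without further normalisation.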

How to improve features detection in opencv

Submitted by 家住魔仙堡 on 2019-12-07 17:07:11
Question: I am working on a project in which I need to detect features in images using OpenCV. I am using SURF detector; SURF extractor; BFMatcher matcher; for detecting, extracting and matching points. It works well for some images, but fails on others. For example, the system fails on this image: Apparently, this image has some texture and the feature detector should detect it, but no features are detected and consequently no matches are generated. How can I improve this feature detection? Can I
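A common reason a textured image yields no SURF keypoints is low contrast: the detector's response never crosses its threshold. Besides lowering the detector's hessianThreshold, a standard preprocessing step is contrast enhancement, e.g. histogram equalization (cv2.equalizeHist, or CLAHE for local contrast). A self-contained sketch of global equalization on a made-up low-contrast image:

```python
def equalize(img, levels=256):
    """Global histogram equalization: spread a low-contrast image's
    intensities over the full range so weak texture becomes visible
    to a feature detector."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# Low-contrast toy image: all values squeezed into [100, 106].
img = [[100 + (r + c) % 11 for c in range(4)] for r in range(4)]
out = equalize(img)   # intensities now span the full 0..255 range
```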

White pixel cluster extraction

Submitted by 北战南征 on 2019-12-07 13:30:59
Question: I am working on a fingerprint pore extraction project and am stuck at the last stage of pore (white pixel cluster) extraction. I have two output images from which we will get the pores, but I don't know how to do it; also, the two images are of different sizes: image1 is 240*320 and image2 is 230*310. Here are my images: image 1 (240*320), image2 (230*310). Here is what I am doing to extract the white clusters of pores: for i = 1:230 for j = 1:310 if image1(i,j)==1 && image2(i,j)==1
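The posted loop bounds already crop to the smaller image; the remaining step is to AND the two binary maps and then label the surviving white-pixel clusters as pores (in MATLAB, bwlabel/regionprops would do the labelling). A minimal sketch with a BFS flood fill and made-up toy maps of different sizes:

```python
def pore_clusters(img1, img2):
    """AND two binary maps over their common region, then label the
    white-pixel clusters (4-connected) with a flood fill; each cluster
    is one candidate pore."""
    h = min(len(img1), len(img2))          # crop to the common size
    w = min(len(img1[0]), len(img2[0]))
    both = [[img1[r][c] & img2[r][c] for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    clusters = []
    for r in range(h):
        for c in range(w):
            if both[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and both[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                clusters.append(comp)
    return clusters

# Tiny toy maps of different sizes: two pores survive the AND.
img1 = [[0,1,1,0,0], [0,0,0,0,1], [0,0,0,1,1], [0,0,0,0,0]]
img2 = [[0,1,1,0], [0,0,0,0], [0,0,0,1]]
clusters = pore_clusters(img1, img2)
```

Each cluster's pixel list gives the pore's size and centroid; size thresholds can then filter noise clusters.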