feature-detection

How to detect whether the screen is capacitive or resistive on an Android device?

断了今生、忘了曾经 submitted on 2019-11-30 21:25:17
Question: I am developing an application that will behave slightly differently depending on the type of screen. Is there any way to detect it? Answer 1: android.content.res.Configuration contains a value called touchscreen, which can be TOUCHSCREEN_STYLUS (= resistive), TOUCHSCREEN_FINGER (= capacitive), TOUCHSCREEN_NOTOUCH (= no touch screen), or TOUCHSCREEN_UNDEFINED (= uh oh). EDIT: I got Dianne'd again :) So, bottom line, it seems there is no way to get the actual physical properties of the screen. I guess your best bet is to have a setting that lets users switch between your two modes. I have a small trick

DLIB : Training Shape_predictor for 194 landmarks (helen dataset)

旧时模样 submitted on 2019-11-30 15:32:32
I am training DLIB's shape_predictor for 194 face landmarks using the helen dataset, which is used to detect face landmarks through face_landmark_detection_ex.cpp of the dlib library. Training gave me an sp.dat binary file of around 45 MB, which is small compared to the file given ( http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2 ) for 68 face landmarks. In training: mean training error: 0.0203811, mean testing error: 0.0204511. And when I used the trained data to get face landmark positions, the results I got were strongly deviated from the result got from 68

OpenCV - find bounding box of largest blob in binary image

江枫思渺然 submitted on 2019-11-30 14:44:47
Question: What is the most efficient way to find the bounding box of the largest blob in a binary image using OpenCV? Unfortunately, OpenCV does not have specific functionality for blob detection. Should I just use findContours() and search for the largest in the list? Answer 1: If you want to use OpenCV libs, check out OpenCV's SimpleBlobDetector. Here's another Stack Overflow post showing a small tutorial of it: How to use OpenCV SimpleBlobDetector This only gives you key points though. You could use this as an

HOG features visualisation with OpenCV, HOGDescriptor in C++

被刻印的时光 ゝ submitted on 2019-11-30 10:51:03
Question: I use the HOGDescriptor of the OpenCV C++ lib to compute the feature vectors of an image. I would like to visualize the features in the source image. Can anyone help me? Answer 1: I had exactly the same problem today. Computing a HOGDescriptor vector for a 64x128 image using OpenCV's HOGDescriptor::compute() function is easy, but there is no built-in functionality to visualize it. Finally I managed to understand how the gradient orientation magnitudes are stored in the 3780-long HOG descriptor

Extract one object from bunch of objects and detect edges

心不动则不痛 submitted on 2019-11-30 10:19:27
For my college project I need to identify a plant species from the leaf shape by detecting the edges of a leaf. (I use OpenCV 2.4.9 and C++.) But the source image was taken in the plant's real environment and has more than one leaf. See the example image below. So here I need to extract the edge pattern of just one leaf to process further. Using the Canny edge detector I can identify the edges of the whole image, but I don't know how to proceed from here to extract the edge pattern of just one leaf, preferably the clearest and most complete leaf. I don't know if this is even possible. Can anyone please

OpenCV, Python: How to stitch two images of different sizes and transparent backgrounds

感情迁移 submitted on 2019-11-30 07:48:16
Question: I've been working on a project where I stitch together images from a drone flying in a lawn-mower pattern. I am able to stitch together images from a single pass (thanks to many answers on Stack Overflow), but when I try to stitch two separate passes together laterally, the transformation my method produces is nonsensical. Here are the two images I am trying to stitch: And here is the code that I've been using to estimate a homography between the two, base and curr. base_gray = cv2.cvtColor

Does Convolutional Neural Network possess localization abilities on images?

落花浮王杯 submitted on 2019-11-30 07:44:43
As far as I know, CNNs rely on sliding-window techniques and can only indicate whether a certain pattern is present or not in given bounding boxes. Is that true? Can one achieve localization with a CNN without the help of such techniques? That's an open problem in image recognition. Besides sliding windows, existing approaches include predicting the object location in the image as CNN output, predicting borders (classifying pixels as belonging to the image boundary or not), and so on. See for example this paper and references therein. Also note that with CNNs using max-pooling, one can identify positions of
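The sliding-window baseline the question mentions is easy to make concrete; a toy Python/NumPy sketch in which a stand-in scoring function (mean brightness here, a CNN's class score in practice) is slid over the image and the argmax position localizes the pattern:

```python
import numpy as np

def localize(image, window, score_fn, stride=1):
    """Slide a window over the image; return the top-left (y, x) of the
    best-scoring position."""
    wh, ww = window
    best, best_pos = -np.inf, None
    for y in range(0, image.shape[0] - wh + 1, stride):
        for x in range(0, image.shape[1] - ww + 1, stride):
            s = score_fn(image[y:y + wh, x:x + ww])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos

# Toy image with a single 4x4 bright patch at (5, 12).
img = np.zeros((20, 20))
img[5:9, 12:16] = 1.0
print(localize(img, (4, 4), np.mean))  # (5, 12)
```

The fully-convolutional variants mentioned in the answer amortize exactly this loop: one forward pass produces the whole score map at once instead of re-evaluating the network per window.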

Skin Color Detection

岁酱吖の submitted on 2019-11-30 05:57:00
I am using the following algorithm to detect skin color, but it's not working very well in different lighting conditions. Can anybody offer advice on how to improve it or suggest a better approach? (R > 95 AND G > 40 AND B > 20 AND max{R, G, B} - min{R, G, B} > 15 AND |R - G| > 15 AND R > G AND R > B) OR (R > 220 AND G > 210 AND B > 170 AND |R - G| <= 15 AND R > B AND G > B) http://softexpert.wordpress.com/2007/10/17/skin-color-detection/ Cheers Your given algorithm is simple colour-based thresholding. This will only work under a very basic set of conditions. For a few pictures it may give really good
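For experimentation, the quoted rule is straightforward to vectorise; a Python/NumPy sketch (`skin_mask` is a name invented here, and as the answer notes, this remains a crude colour threshold however it is implemented):

```python
import numpy as np

def skin_mask(rgb):
    """Vectorised version of the two-branch RGB skin rule from the question."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    # First branch of the rule (ordinary lighting).
    branch1 = ((r > 95) & (g > 40) & (b > 20) & (mx - mn > 15)
               & (np.abs(r - g) > 15) & (r > g) & (r > b))
    # Second branch (very bright, e.g. flash-lit, pixels).
    branch2 = ((r > 220) & (g > 210) & (b > 170)
               & (np.abs(r - g) <= 15) & (r > b) & (g > b))
    return branch1 | branch2
```

A common next step toward lighting robustness is thresholding in a chromaticity space such as YCrCb or HSV instead of raw RGB, so that intensity changes affect the decision less.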

Implementing a Harris corner detector

余生颓废 submitted on 2019-11-29 22:39:24
I am implementing a Harris corner detector for educational purposes, but I'm stuck at the Harris response part. Basically, what I am doing is: (1) compute image intensity gradients in the x- and y-direction, (2) blur the output of (1), (3) compute the Harris response over the output of (2), (4) suppress non-maxima in the output of (3) in a 3x3 neighborhood and threshold the output. Steps 1 and 2 seem to work fine; however, I get very small values as the Harris response, and no point reaches the threshold. The input is a standard outdoor photograph. [...] [Ix, Iy] = intensityGradients(img); g = fspecial('gaussian'); Ix = imfilter(Ix, g); Iy =