image-segmentation

How to annotate the ground truth for image segmentation?

Submitted by 心不动则不痛 on 2019-12-05 01:08:25
Question: I'm trying to train a CNN model that performs image segmentation, but I'm confused about how to create the ground truth when I have several image samples. Image segmentation classifies each pixel of the input image into a pre-defined class, such as cars, buildings, people, or anything else. Are there any tools, or good approaches, for creating the ground truth for image segmentation? Thanks!

Answer 1: Try out https://www.labelbox.io/. Here is what their image segmentation template looks like... A lot of the code is open
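Whatever annotation tool you use, the ground truth it exports typically boils down to a label mask: an image the same size as the input in which each pixel stores a class index. A minimal plain-Python sketch (the 4x4 mask, the class names, and the one-hot helper are all made up for illustration):

```python
# Ground truth for segmentation: a per-pixel class-index mask the same
# size as the image. Class names and the tiny 4x4 mask are hypothetical.
CLASSES = ["background", "car", "person"]

# 4x4 ground-truth mask: each entry is the class index of that pixel.
mask = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
]

def one_hot(mask, num_classes):
    """Convert a class-index mask to one binary (0/1) channel per class."""
    h, w = len(mask), len(mask[0])
    return [[[1 if mask[y][x] == c else 0 for x in range(w)]
             for y in range(h)] for c in range(num_classes)]

channels = one_hot(mask, len(CLASSES))
print(channels[1][0])  # first row of the "car" channel
```

Most training pipelines consume either the index mask directly (for a softmax cross-entropy loss) or the one-hot channels, so any annotation tool that can export one of these two forms will do.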

Detecting iris and pupil circles using HoughCircles in Java OpenCV

Submitted by 偶尔善良 on 2019-12-04 20:31:41
I'm using OpenCV in Java to try to detect circles (the iris and the pupil) in images of eyes, but I'm not getting the expected results. Here is my code:

// convert source image to gray
org.opencv.imgproc.Imgproc.cvtColor(mRgba, imgCny, Imgproc.COLOR_BGR2GRAY);
// filter
org.opencv.imgproc.Imgproc.blur(imgCny, imgCny, new Size(3, 3));
// apply Canny
org.opencv.imgproc.Imgproc.Canny(imgCny, imgCny, 10, 30);
// apply Hough circle
Mat circles = new Mat();
Point pt;
org.opencv.imgproc.Imgproc.HoughCircles(imgCny, circles, Imgproc.CV_HOUGH_GRADIENT, imgCny.rows() / 4, 2, 200, 100, 0, 0);
// draw the found
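For reference, the idea behind HoughCircles can be sketched in plain Python: every edge point votes for the centers that could have produced it at a given radius, and the accumulator cell with the most votes wins. (Synthetic data below. Note also that in the OpenCV signature the argument after the method is dp, the accumulator resolution, and the one after that is minDist, so passing imgCny.rows() / 4 before the 2 swaps those two roles.)

```python
import math

# Synthetic edge points on a circle of known radius R centered at (20, 20).
R = 10
edge_pts = [(20 + round(R * math.cos(math.radians(t))),
             20 + round(R * math.sin(math.radians(t))))
            for t in range(0, 360, 5)]

# Hough voting: each edge point votes for every center that could have
# produced it at radius R.
votes = {}
for (x, y) in edge_pts:
    for t in range(0, 360, 5):
        a = round(x - R * math.cos(math.radians(t)))
        b = round(y - R * math.sin(math.radians(t)))
        votes[(a, b)] = votes.get((a, b), 0) + 1

center = max(votes, key=votes.get)
print(center)  # the true center (20, 20) collects the most votes
```

OpenCV does this (plus a gradient-based shortcut) over a range of radii, which is why sensible dp/minDist/param values matter so much for iris-sized circles.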

Text extraction and segmentation in OpenCV

Submitted by ☆樱花仙子☆ on 2019-12-04 20:25:27
I've never used OpenCV before, but I'm trying to write my own neural network system to recognize text, and I need a tool for text extraction/segmentation. How can I use Java OpenCV to preprocess and segment an image containing text? I don't need to recognize the text; I just need to get each letter in a separate image. Something like this:

Try this code. No need for OpenCV:

import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import org.neuroph.imgrec.ImageUtilities;

public class CharExtractor {
    private int cropTopY = 0; // upper locked coordinate
    private int
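If you only need each letter in a separate image, the classic technique (no OpenCV required) is a projection profile: binarize the image, check which columns contain ink, and cut at the blank columns. A plain-Python sketch on a made-up 3x5 binary image:

```python
# Binary image (1 = ink). Two "letters" separated by a blank column.
img = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]

def column_spans(img):
    """Return (start, end) column ranges that contain ink."""
    w = len(img[0])
    ink = [any(row[x] for row in img) for x in range(w)]
    spans, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x                    # a letter begins
        elif not has_ink and start is not None:
            spans.append((start, x))     # a letter ends at the blank column
            start = None
    if start is not None:
        spans.append((start, w))         # letter touching the right edge
    return spans

print(column_spans(img))  # two letter spans
```

Cropping each span (and then repeating the same trick on rows to trim the top and bottom) gives one sub-image per character, which is essentially what the CharExtractor class above does.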

Accurately detect color regions in an image using K-means clustering

Submitted by 与世无争的帅哥 on 2019-12-04 18:35:30
I'm using K-means clustering for color-based image segmentation. I have a 2D image containing three colors: black, white, and green. Here is the image. I want K-means to produce three clusters: one representing the green region, the second representing the white region, and the last one representing the black region. Here is the code I used:

%Clustering color regions in an image.
%Step 1: read the image using imread, and show it using imshow.
img = (imread('img.jpg'));
figure, imshow(img), title('X axis rock cut'); %figure is for creating a figure window.
text(size(img,2),size(img,1)+15,...
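For intuition, the algorithm K-means runs under the hood can be sketched in plain Python: treat each pixel as an (R, G, B) point, assign each point to its nearest centroid, and recompute each centroid as its cluster mean. The pixel values and seed centroids below are made up:

```python
# Pixels roughly black, white, and green (made-up values).
pixels = [(0, 0, 0), (10, 5, 8), (250, 250, 245), (255, 255, 255),
          (10, 200, 20), (0, 180, 10)]

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assign each point to the nearest centroid (squared distance).
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else c
                     for cl, c in zip(clusters, centroids)]
    return centroids, clusters

centroids, clusters = kmeans(pixels, [(0, 0, 0), (255, 255, 255), (0, 255, 0)])
print([len(cl) for cl in clusters])  # two pixels per color cluster
```

Seeding the three centroids near black, white, and green (rather than at random) is what makes each cluster land on the intended region, which matches the behavior you want from MATLAB's kmeans with explicit 'Start' values.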

How to implement a conditional random field based energy function from images?

Submitted by 十年热恋 on 2019-12-04 18:17:34
I am trying to implement a segmentation tool for my images using a conditional random field (CRF) based method, for example the one in this paper. The standard CRF energy function includes two parts, a unary potential and a pairwise potential, i.e. E(L | X) = Σ_i ψ_u(l_i; X) + Σ_(i,j) ψ_p(l_i, l_j; X), where L are the class labels and X are the observations (image pixels). I have some training images with labels of the objects in the image, i.e. ground-truth segmentations of the objects. If I want to use the texture of these objects as the feature, I am wondering how to implement and do
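To make the two terms concrete, here is a toy evaluation of such an energy on a 4-pixel chain in plain Python. The unary costs (which in your case would come from a texture classifier) and the Potts smoothness weight are invented numbers, not values from the paper:

```python
# Toy CRF energy on a 1-D chain of 4 pixels with two labels (0/1).
# unary[i][l] = cost of assigning label l to pixel i, e.g. the negative
# log-likelihood of a texture classifier (numbers are made up).
unary = [[0.1, 2.0], [0.2, 1.5], [1.8, 0.3], [2.2, 0.1]]
BETA = 0.5  # pairwise Potts weight: penalty for each neighboring label change

def energy(labels):
    u = sum(unary[i][l] for i, l in enumerate(labels))
    p = sum(BETA for a, b in zip(labels, labels[1:]) if a != b)
    return u + p

print(energy([0, 0, 1, 1]))  # smooth labeling: one label change
print(energy([0, 1, 0, 1]))  # noisy labeling: three label changes
```

Inference then searches for the labeling with minimum energy; the smooth labeling above scores lower than the noisy one, which is exactly the regularizing effect the pairwise term exists to provide.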

WindowScrollWheelFcn with slider in Matlab GUI

Submitted by て烟熏妆下的殇ゞ on 2019-12-04 17:21:50
I'm making a GUI in MATLAB that scrolls through and displays ~600 medical images. I have an axes object on which the images are displayed, and a scrollbar that presently goes through the images one at a time when its end arrows are pressed. I'm trying to figure out how to incorporate WindowScrollWheelFcn so I can use the mouse scroll wheel to go through the images faster. This is my code:

function ct_slider_Callback(hObject, eventdata, handles)
set(gcf, 'WindowScrollWheelFcn', @wheel);
set(gcf, 'CurrentAxes', handles.ct_image_axes);
handles.currentSlice = round(get(handles.ct_slider, 'Value'));

Why is color segmentation easier on HSV?

Submitted by 对着背影说爱祢 on 2019-12-04 17:04:29
I've heard that if you need to do color segmentation in your software (create a binary image from a color image by setting pixels to 1 if they meet certain threshold rules, such as R < 100, G > 100, 10 < B < 123), it is better to first convert your image to HSV. Is this really true? And why?

The big reason is that HSV separates color information (chroma) from intensity or lighting (luma). Because value is separated out, you can construct a histogram or thresholding rules using only saturation and hue. In theory this will work regardless of lighting changes in the value channel. In practice it is just a
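You can see the separation with Python's standard-library colorsys: scale an RGB color by a lighting factor and hue and saturation stay put while only value drops. (Illustration only; note that OpenCV stores H in the range 0-179 rather than colorsys's 0-1.)

```python
import colorsys

def to_hsv(r, g, b):
    """RGB components in 0-255 -> (h, s, v), each in 0-1."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

bright = to_hsv(40, 200, 60)   # a green pixel in good light
dark = to_hsv(20, 100, 30)     # the same green at half the light

print(bright)
print(dark)
# Hue and saturation are unchanged; only value halves.
```

An RGB threshold like G > 100 would reject the darker pixel even though it is the same green, while a hue/saturation rule accepts both; that lighting invariance is the whole argument for HSV.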

Detection of leaf on unpredictable background

Submitted by 一笑奈何 on 2019-12-04 15:56:37
Question: A project I have been working on for some time is unsupervised leaf segmentation. The leaves are captured on white or colored paper, and some of them have shadows. I want to be able to threshold the leaf and also remove the shadow (while preserving the leaf's details); however, I cannot use fixed threshold values, because diseases change the color of the leaf. I then began to research and found a proposal by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust
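The heart of that proposal is to decompose each pixel's deviation from the expected background (paper) color E into a brightness distortion α (scaling along E) and a chromaticity distortion CD (distance off the E direction): a shadow gives α < 1 with small CD, while the leaf gives large CD. A simplified plain-Python sketch (it omits the per-channel variance normalization used in the paper, and the colors are made up):

```python
import math

def distortion(pixel, background):
    """Split a pixel's deviation from the background color into brightness
    distortion alpha and chromaticity distortion cd (simplified: no
    per-channel variance normalization)."""
    dot_ie = sum(p * e for p, e in zip(pixel, background))
    dot_ee = sum(e * e for e in background)
    alpha = dot_ie / dot_ee                      # projection onto E
    cd = math.sqrt(sum((p - alpha * e) ** 2      # residual off the E axis
                       for p, e in zip(pixel, background)))
    return alpha, cd

paper = (200, 200, 200)    # assumed background (paper) color
shadow = (100, 100, 100)   # darker, same chromaticity
leaf = (60, 160, 40)       # different chromaticity

a1, cd1 = distortion(shadow, paper)
a2, cd2 = distortion(leaf, paper)
print(a1, cd1)  # shadow: alpha 0.5, cd 0
print(a2, cd2)  # leaf: large cd
```

Thresholding on CD (large means foreground) with a separate check on α (clearly below 1 with small CD means shadow) is what lets the method keep diseased leaf pixels while discarding shadows, without a fixed color threshold.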

OpenCV: Using a trimap image

Submitted by 无人久伴 on 2019-12-04 15:34:56
I found this dog and cat image dataset: The Oxford-IIIT Pet Dataset. Each image has a pixel-level foreground/background segmentation (trimap) image. Searching the internet, I saw that a trimap is an image with three colors (one for the background, one for the foreground, and one for the unclassified region), but here the image looks all black. Is it a mistake, or is it correct? Above all, I want to know whether and how you can use it to obtain, from a normal image, a new image with the cat or dog on a black background. Thanks.

The trimaps look black because they only contain pixel values ranging from
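Assuming the Oxford-IIIT convention of 1 = pet, 2 = background, 3 = border/not classified (small integer values, which is why the trimap renders as near-black), putting the animal on a black background is a per-pixel mask operation. A plain-Python sketch on a made-up 2x3 image:

```python
# Trimap values (Oxford-IIIT convention, assumed here): 1 = pet,
# 2 = background, 3 = border / not classified.
# Tiny made-up image (RGB tuples) and matching trimap.
image = [[(200, 50, 50), (10, 10, 10), (0, 120, 0)],
         [(90, 90, 90), (255, 255, 0), (30, 30, 200)]]
trimap = [[1, 2, 3],
          [2, 1, 1]]

def extract_pet(image, trimap, keep=(1, 3)):
    """Keep pixels labeled pet (and, by default, border); black out the rest."""
    return [[px if t in keep else (0, 0, 0)
             for px, t in zip(img_row, tri_row)]
            for img_row, tri_row in zip(image, trimap)]

out = extract_pet(image, trimap)
print(out[0])  # the background pixel at (0, 1) is now black
```

With real arrays the same idea is one masking operation, e.g. zeroing every image pixel where the trimap equals the background value; multiplying the trimap by ~80 also makes it visible for inspection.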

Gap Filling Contours / Lines

Submitted by 淺唱寂寞╮ on 2019-12-04 14:28:12
Question: I have the following image: and I would like to fill in its contours (i.e., I would like to gap-fill the lines in this image). I have tried a morphological closing, but using a rectangular 3x3 kernel with 10 iterations does not fill in the entire border. I have also tried a 21x21 kernel with 1 iteration and had no luck either.

UPDATE: I have tried this in OpenCV (Python) using:

cv2.morphologyEx(img, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_RECT, (21,21)))

and cv2
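Closing is just dilation followed by erosion, which is why kernel size matters: the dilation must be large enough to bridge the gap before the erosion shrinks the result back. A plain-Python sketch on a made-up grid, closing a one-pixel gap with a 3x3 kernel:

```python
# A horizontal line with a one-pixel gap at column 3.
img = [[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 1, 1, 0, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]]

def _get(img, y, x):
    # Zero padding outside the image.
    return img[y][x] if 0 <= y < len(img) and 0 <= x < len(img[0]) else 0

def dilate(img):
    # Pixel is 1 if ANY pixel in its 3x3 neighborhood is 1.
    return [[1 if any(_get(img, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def erode(img):
    # Pixel is 1 only if ALL pixels in its 3x3 neighborhood are 1.
    return [[1 if all(_get(img, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

closed = erode(dilate(img))
print(closed[2])  # the gap at column 3 is now filled
```

A 3x3 kernel can only bridge gaps up to about its own width, so for wider breaks an alternative to ever-larger kernels is to trace the outline and fill it directly, e.g. cv2.findContours followed by cv2.drawContours with thickness = -1.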