image-segmentation

How to implement multi-class semantic segmentation?

房东的猫 submitted on 2019-12-03 03:17:55
I'm able to train a U-Net on labeled images with binary classification, but I'm having a hard time figuring out how to configure the final layers in Keras/Theano for multi-class classification (4 classes). I have 634 images and 634 corresponding masks that are uint8 and 64 x 64 pixels. Instead of being black (0) and white (1), my masks have color-labeled objects in 3 categories plus background, as follows: black (0), background; red (1), object class 1; green (2), object class 2; yellow (3), object class 3. Before training runs, the array containing the masks is one-hot encoded as follows: …
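In Keras, the usual change for this is to swap a binary `Conv2D(1, (1, 1), activation='sigmoid')` head for `Conv2D(4, (1, 1), activation='softmax')` and compile with `categorical_crossentropy`. As a framework-free sketch of what that head computes, here is the one-hot encoding and per-pixel softmax in plain NumPy (shapes follow the question; the random data is only a stand-in):

```python
import numpy as np

def one_hot_masks(masks, num_classes=4):
    # masks: (N, H, W) uint8 label maps with values 0..num_classes-1
    return np.eye(num_classes, dtype=np.float32)[masks]  # -> (N, H, W, num_classes)

def pixel_softmax(logits):
    # logits: (N, H, W, C) output of a Conv2D(num_classes, (1, 1)) head;
    # softmax is taken over the channel (class) axis, independently per pixel
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# stand-in for the 634 masks: 2 masks of 64x64 with labels 0..3
masks = np.random.randint(0, 4, size=(2, 64, 64)).astype(np.uint8)
onehot = one_hot_masks(masks)
```

At prediction time, `argmax` over the channel axis turns the per-pixel probabilities back into a label map.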

Image Processing - Dress Segmentation using opencv

徘徊边缘 submitted on 2019-12-02 23:23:40
I am working on dress feature identification using OpenCV. As a first step, I need to segment the t-shirt by removing the face and hands from the image. Any suggestion is appreciated.

I suggest the following approach: use Adrian Rosebrock's skin detection algorithm for detecting the skin (thanks to Rosa Gronchi for his comment), then use a region growing algorithm on the variance map; the initial seed can be calculated using stage 1 (see the attached code for more information). Code:

%stage 1: skin detection - Adrian Rosebrock solution
im = imread(<path to input image>);
hsb = rgb2hsv(im)*255;
skinMask …
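For readers working in Python rather than MATLAB, the thresholding step of stage 1 can be sketched with NumPy. The HSV bounds below are the values commonly quoted from Rosebrock's tutorial, but treat them as assumptions to be tuned for your lighting and skin tones:

```python
import numpy as np

def skin_mask(hsv, lower=(0, 48, 80), upper=(20, 255, 255)):
    # hsv: (H, W, 3) uint8 image already converted to HSV (OpenCV scaling,
    # hue in 0..179). Returns a boolean mask, True where all three channels
    # fall inside the [lower, upper] box.
    lo = np.asarray(lower)
    hi = np.asarray(upper)
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

# tiny demo: one skin-like pixel, one non-skin pixel
hsv = np.array([[[10, 100, 150], [100, 200, 200]]], dtype=np.uint8)
mask = skin_mask(hsv)
```

The mask is then a natural input for the stage-2 region growing: seed it inside the largest non-skin torso region.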

Image segmentation with maxflow

无人久伴 submitted on 2019-12-02 21:31:09
I have to do a foreground/background segmentation using a maxflow algorithm in C++ ( http://wiki.icub.org/iCub/contrib/dox/html/poeticon_2src_2objSeg_2src_2maxflow-v3_802_2maxflow_8cpp_source.html ). I get an array of pixels from a PNG file according to their RGB values, but what are the next steps? How could I use this algorithm for my problem?

I recognize that source very well: that's the Boykov-Kolmogorov Graph Cuts library. What I would recommend you do first is read their paper. Graph Cuts is an interactive image segmentation algorithm. You mark pixels in your image on what you believe belong to …
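If you just want to see the mechanics before wiring up the BK library in C++, SciPy ships a max-flow solver that works on the same kind of graph: one node per pixel, terminal edges to a source (foreground) and sink (background), and pairwise edges between neighbouring pixels. A two-pixel toy sketch (all weights made up for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Node 0 = source, node 3 = sink, nodes 1 and 2 are "pixels".
# Terminal weights encode how strongly each pixel matches the
# foreground/background colour models; the 1<->2 edges are the
# smoothness (pairwise) term. maximum_flow needs integer capacities.
graph = csr_matrix(np.array([
    [0, 9, 1, 0],   # source -> pixel 1 (9), source -> pixel 2 (1)
    [0, 0, 2, 1],   # pixel 1 -> pixel 2 (2), pixel 1 -> sink (1)
    [0, 2, 0, 9],   # pixel 2 -> pixel 1 (2), pixel 2 -> sink (9)
    [0, 0, 0, 0],
], dtype=np.int32))

res = maximum_flow(graph, 0, 3)

def source_side(capacity, flow, source):
    # Nodes still reachable from the source in the residual graph
    # form the foreground side of the minimum cut.
    residual = capacity.toarray() - flow.toarray()
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v in range(capacity.shape[0]):
            if v not in seen and residual[u, v] > 0:
                seen.add(v)
                stack.append(v)
    return seen

fg = source_side(graph, res.flow, 0)
```

Here the max flow (= min cut cost) is 4, and pixel 1 lands on the source (foreground) side while pixel 2 lands with the sink. For a real image you build the same structure with one node per pixel and 4- or 8-neighbour pairwise edges.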

Segment an image using python and PIL to calculate centroid and rotations of multiple rectangular objects

醉酒当歌 submitted on 2019-12-02 21:25:53
I am using Python and PIL to find the centroid and rotation of various rectangles (and squares) in a 640x480 image, similar to this one. So far my code works for a single rectangle in an image.

import Image, math

def find_centroid(im):
    width, height = im.size
    XX, YY, count = 0, 0, 0
    for x in xrange(0, width, 1):
        for y in xrange(0, height, 1):
            if im.getpixel((x, y)) == 0:
                XX += x
                YY += y
                count += 1
    return XX/count, YY/count

#Top Left Vertex
def find_vertex1(im):
    width, height = im.size
    for y in xrange(0, height, 1):
        for x in xrange(0, width, 1):
            if im.getpixel((x, y)) == 0:
                X1=x
                Y1=y
                return X1, …
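A vectorized NumPy take on the same idea avoids the per-pixel `getpixel` loops and also yields the rotation directly from second-order image moments; this is a sketch assuming the object pixels are already isolated in a boolean mask:

```python
import numpy as np

def find_centroid(mask):
    # mask: 2D boolean array, True where the object's (black) pixels are
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def find_orientation(mask):
    # Orientation from central second moments: the standard
    # 0.5 * atan2(2*mu11, mu20 - mu02) formula for a region's major axis.
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# demo: a wide axis-aligned rectangle
mask = np.zeros((20, 40), dtype=bool)
mask[5:10, 2:38] = True
```

For multiple rectangles in one image, label connected components first (e.g. `scipy.ndimage.label`) and apply these two functions to each component's mask.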

How to count the number of spots in this image?

北战南征 submitted on 2019-12-02 17:41:19
I am trying to count the number of hairs transplanted in the following image. So practically, I have to count the number of spots I can find in the center of the image. (I've uploaded the inverted image of a bald scalp on which new hairs have been transplanted, because the original image is bloody and absolutely disgusting! To see the original non-inverted image click here. To see the larger version of the inverted image just click on it.) Is there any known image processing algorithm to detect these spots? I've found out that the Circle Hough Transform algorithm can be used to find circles in an …
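Before reaching for a Hough transform, a simpler baseline for roughly blob-shaped spots is to threshold the image and count connected components, discarding tiny specks. A SciPy sketch (the threshold and `min_size` values are assumptions to tune against the actual photo):

```python
import numpy as np
from scipy import ndimage

def count_spots(img, threshold=128, min_size=3):
    # img: 2D grayscale array; assumes spots are darker than the
    # background (flip the comparison for the non-inverted image)
    binary = img < threshold
    labels, n = ndimage.label(binary)
    # measure each component's pixel count and drop the specks
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_size))

# demo: white canvas with three 3x3 dark spots and one 1-pixel speck
img = np.full((30, 30), 255, dtype=np.uint8)
for r, c in [(5, 5), (5, 20), (20, 12)]:
    img[r:r+3, c:c+3] = 0
img[25, 25] = 0
```

If spots touch each other, a watershed split (or the Hough transform mentioned in the question) becomes necessary; for well-separated dots, component counting is usually enough.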

How does input image size influence size and shape of fully connected layer?

假如想象 submitted on 2019-12-02 14:09:04
I am reading a lot of tutorials that state two things. "[Replacing fully connected layers with convolutional layers] casts them into fully convolutional networks that take input of any size and output classification maps." (Fully Convolutional Networks for Semantic Segmentation, Shelhamer et al.) A traditional CNN can't do this because it has a fully connected layer whose shape is decided by the input image size. Based on these statements, my questions are the following: whenever I've made an FCN, I could only get it to work with a fixed dimension of input images for both training and testing.
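The quoted claim is easy to verify by hand: a convolution kernel has a fixed size but slides over whatever input it is given, so only the output map size changes with the input, whereas a dense layer's weight matrix hard-codes the flattened input length. A minimal NumPy demonstration:

```python
import numpy as np

def conv2d_valid(x, k):
    # naive 'valid' convolution: output is (H-kh+1, W-kw+1), so the
    # kernel imposes no fixed input size, only a minimum one
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

k = np.ones((3, 3))
print(conv2d_valid(np.zeros((10, 10)), k).shape)   # (8, 8)
print(conv2d_valid(np.zeros((64, 64)), k).shape)   # (62, 62)
# A dense layer, by contrast, is a fixed (in_features, out_features)
# matrix; W @ x.ravel() breaks the moment the input size changes.
```

In practice, a fixed input size often still sneaks into "fully convolutional" code via data pipelines that batch images (all images in one batch must share a shape) or via a forgotten `Flatten` + `Dense` head.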

Row by Row character extraction

吃可爱长大的小学妹 submitted on 2019-12-02 05:30:38
Question: I am working on handwritten character recognition from an input image. Here is the code which extracts characters from the input image:

%% Label connected components
[L Ne]=bwlabel(Ifill);
disp(Ne);
%% Measure properties of image regions
propied=regionprops(L,'BoundingBox');
hold on
%% Plot Bounding Box
for n=1:size(propied,1)
    rectangle('Position',propied(n).BoundingBox,'EdgeColor','g','LineWidth',2)
end
hold off
%% Characters being Extracted
figure
for n=1:Ne
    [r,c] = find(L==n);
    n1=imagen(min(r):max …
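A Python equivalent of the bwlabel/regionprops pipeline can be sketched with SciPy. Note the naive top-then-left sort below only works when rows are well separated; real handwriting usually needs the boxes binned into rows (e.g. by centroid y within a tolerance) before sorting left-to-right:

```python
import numpy as np
from scipy import ndimage

def extract_characters(binary):
    # binary: 2D boolean array, True on ink pixels
    labels, n = ndimage.label(binary)          # like bwlabel
    slices = ndimage.find_objects(labels)      # bounding boxes as slice pairs
    # sort by top edge, then left edge, to approximate row-by-row order
    order = sorted(range(n),
                   key=lambda i: (slices[i][0].start, slices[i][1].start))
    return [binary[slices[i]] for i in order]

# demo: two well-separated "characters"
binary = np.zeros((20, 20), dtype=bool)
binary[2:5, 2:5] = True      # 3x3 blob, upper left
binary[10:14, 8:12] = True   # 4x4 blob, lower middle
chars = extract_characters(binary)
```

Each returned crop can then be resized and fed to the recognizer, just as the MATLAB loop does with `imagen(min(r):max(r), ...)`.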

License plate character segmentation python opencv

泄露秘密 submitted on 2019-12-02 05:20:14
Question: I want to isolate every character in the following image, and it should create a rectangular bounding box around each character. My code is creating a circular bounding box. I need to supply these isolated character images to my trained model to predict the character. I haven't done image processing before, which leads me to ask such a question. This is the code I'm using:

# Standard imports
import cv2
import numpy as np
from PIL import Image

params = cv2.SimpleBlobDetector_Params()
# …
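`cv2.SimpleBlobDetector` returns circular keypoints by design; for rectangular boxes the usual OpenCV route is `cv2.findContours` followed by `cv2.boundingRect` on each contour. The same idea without OpenCV, as a self-contained SciPy sketch:

```python
import numpy as np
from scipy import ndimage

def character_boxes(binary):
    # binary: 2D boolean array, True on character pixels.
    # Returns (x, y, w, h) rectangles sorted left-to-right.
    labels, n = ndimage.label(binary)
    boxes = []
    for sl in ndimage.find_objects(labels):
        y, x = sl
        boxes.append((x.start, y.start, x.stop - x.start, y.stop - y.start))
    boxes.sort()  # tuples sort by x first: left-to-right reading order
    return boxes

# demo: two "characters" on a small canvas
binary = np.zeros((10, 20), dtype=bool)
binary[2:6, 10:14] = True   # 4x4 blob on the right
binary[3:7, 1:4] = True     # 3x4 blob on the left
boxes = character_boxes(binary)
```

Cropping `image[y:y+h, x:x+w]` for each box yields the per-character images to feed the trained model.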

Is there any way to control the concatenation of the blockproc output?

你说的曾经没有我的故事 submitted on 2019-12-02 03:51:05
Question: This is a follow-up to the question: Overlapping sliding window over an image using blockproc or im2col? So by using the code:

B = blockproc(A, [1 1], @block_fun, 'BorderSize', [2 2], 'TrimBorder', false, 'PadPartialBlocks', true)

I was able to create an overlapping sliding window over my image and calculate the dct2 for each window. But the problem is that blockproc concatenates the output in a way that I cannot use. The output greatly depends on the block size and the size of the output …
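If a NumPy route is acceptable, `sliding_window_view` sidesteps the concatenation problem entirely: it exposes the overlapping neighbourhoods as a 4-D array indexed by pixel, so the per-window results never get flattened together. A sketch with 5x5 windows, matching `'BorderSize', [2 2]` around 1x1 blocks:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.fft import dctn

A = np.arange(36, dtype=float).reshape(6, 6)

# Pad by 2 on each side so every pixel gets a full 5x5 neighbourhood
# (np.pad's default zero padding matches blockproc's default behaviour)
padded = np.pad(A, 2)
windows = sliding_window_view(padded, (5, 5))   # shape (6, 6, 5, 5)

# 2-D DCT of every window at once; norm='ortho' matches MATLAB's
# orthonormal dct2 scaling
coeffs = dctn(windows, axes=(-2, -1), norm='ortho')
```

`coeffs[i, j]` is the 5x5 DCT coefficient block for the window centred on pixel `(i, j)`, which is exactly the indexing that blockproc's concatenated output makes awkward.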