image-segmentation

How to split an image and fill the parts with different colors

大兔子大兔子 submitted on 2019-12-04 04:13:36
Question: As the picture shows, how can MATLAB code be used to split it into different parts and then fill each part with a color? In addition, how can a gradient color be set in the second code? Here is the picture segmentation code:

clc
rgb=imread('sample1.bmp');
bw=im2bw(rgb2gray(rgb),.8);
bw=medfilt2(bw);
planes=bwareaopen(bw,800);
D=mat2gray(bwdist(imcomplement(planes)));
stats=regionprops(D>.8,'Centroid');
planes_centroid=cat(1,stats.Centroid);
planes_mask=false(size(bw));
planes_mask(sub2ind(size(bw),round(planes
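As a rough Python/NumPy sketch of the same idea (not the asker's MATLAB code): label the connected regions of a binary image, then paint each region with its own color. The function names and the tiny test image are hypothetical.

```python
import numpy as np
from collections import deque

def label_regions(bw):
    """4-connected component labeling of a boolean image (bwlabel analogue)."""
    labels = np.zeros(bw.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(bw)):
        if labels[sy, sx]:
            continue
        current += 1
        labels[sy, sx] = current
        queue = deque([(sy, sx)])
        while queue:                       # breadth-first flood fill
            y, x = queue.popleft()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < bw.shape[0] and 0 <= nx < bw.shape[1]
                        and bw[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def fill_colors(labels, n, palette):
    """Paint each labeled region with its own RGB color."""
    out = np.zeros(labels.shape + (3,), dtype=np.uint8)
    for i in range(1, n + 1):
        out[labels == i] = palette[(i - 1) % len(palette)]
    return out

bw = np.array([[1, 1, 0, 0],
               [1, 0, 0, 1],
               [0, 0, 1, 1]], dtype=bool)
labels, n = label_regions(bw)
colored = fill_colors(labels, n, [(255, 0, 0), (0, 255, 0), (0, 0, 255)])
```

In MATLAB the equivalent would be bwlabel followed by per-label assignment into an RGB array.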

Per pixel softmax for fully convolutional network

帅比萌擦擦* submitted on 2019-12-04 04:00:18
I'm trying to implement something like a fully convolutional network, where the last convolution layer uses a 1x1 filter and outputs a 'score' tensor of shape [Batch, height, width, num_classes]. My question is: what function in TensorFlow can apply the softmax operation to each pixel, independent of the other pixels? The tf.nn.softmax op does not seem intended for this purpose. If no such op is available, I guess I have to write one myself. Thanks! UPDATE: if I do have to implement it myself, I think I may need to reshape the input tensor to [N, num_classes] where N = Batch x width x
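A per-pixel softmax is just a softmax over the last (class) axis, applied independently at every spatial location, so no reshape is strictly needed. A minimal NumPy sketch of the semantics (the function name is ours, not TensorFlow's):

```python
import numpy as np

def pixelwise_softmax(scores):
    """Softmax over the last axis of a [batch, height, width, num_classes]
    score tensor; each pixel is normalized independently of the others."""
    shifted = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.randn(2, 4, 4, 3)
probs = pixelwise_softmax(scores)
```

In recent TensorFlow versions, tf.nn.softmax defaults to the last axis and accepts 4-D inputs directly, which gives exactly this behavior; at the time this question was asked, the reshape-to-[N, num_classes] workaround was the common approach.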

What is the difference between Keras model.evaluate() and model.predict()?

随声附和 submitted on 2019-12-04 02:35:02
I used Keras biomedical image segmentation to segment brain neurons. model.evaluate() gave me a Dice coefficient of 0.916. However, when I used model.predict() and then looped through the predicted images computing the Dice coefficient myself, it came out to 0.82. Why are these two values different? The problem lies in the fact that every metric in Keras is evaluated in the following manner: for each batch, a metric value is computed; the current value after k batches is the mean of the metric across those k batches. The final result is obtained as a mean of all
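The discrepancy is easy to reproduce: the mean of per-batch Dice values is generally not equal to the Dice computed once over all pixels. A small NumPy illustration (the data is made up to make the gap obvious):

```python
import numpy as np

def dice(y_true, y_pred, eps=1e-7):
    """Dice coefficient; eps avoids division by zero on empty masks."""
    inter = np.sum(y_true * y_pred)
    return (2 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 1], dtype=float)

# Keras-style: mean of the per-batch metric (two batches of 4 pixels)
per_batch = [dice(y_true[i:i + 4], y_pred[i:i + 4]) for i in (0, 4)]
mean_of_batches = sum(per_batch) / len(per_batch)

# Global Dice computed once over all pixels (what the manual loop measures)
global_dice = dice(y_true, y_pred)
```

The first batch scores 1.0 and the second (an empty ground-truth mask with one false positive) scores near 0, so the batch-averaged value is about 0.5 while the global value is about 0.89. This is why evaluate() and a manual predict()-then-score loop disagree.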

OpenCV - GrabCut with custom foreground/background models

僤鯓⒐⒋嵵緔 submitted on 2019-12-03 20:28:08
Question: I want to use the GrabCut algorithm implemented in OpenCV. As shown in the documentation, this is the function signature:

void grabCut( InputArray img, InputOutputArray mask, Rect rect,
              InputOutputArray bgdModel, // *
              InputOutputArray fgdModel, // *
              int iterCount, int mode=GC_EVAL )

The mode parameter indicates how to initialize the algorithm: either with the rect (a rectangular bounding box) or with the mask (a matrix whose values correspond to user paintings of the foreground/background regions). I
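For mask-based initialization, the mask is filled with the four GC_* labels before calling grabCut. A sketch of building such a mask in NumPy (the image sizes and painted regions are invented; the integer values match OpenCV's cv2.GC_* constants, and the commented-out call shows where cv2.grabCut would run):

```python
import numpy as np

# GrabCut mask labels (values match OpenCV's cv2.GC_* constants)
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

h, w = 100, 100
mask = np.full((h, w), GC_PR_BGD, dtype=np.uint8)  # default: probably background
mask[20:80, 20:80] = GC_PR_FGD                     # rough object region
mask[45:55, 45:55] = GC_FGD                        # user strokes: definitely foreground
mask[0:5, :] = GC_BGD                              # user strokes: definitely background

# With OpenCV available one would then run (sketch, not executed here):
#   bgdModel = np.zeros((1, 65), np.float64)
#   fgdModel = np.zeros((1, 65), np.float64)
#   cv2.grabCut(img, mask, None, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)

# After grabCut, foreground = pixels labeled FGD or PR_FGD
result_fg = np.isin(mask, (GC_FGD, GC_PR_FGD))
```

The bgdModel/fgdModel arrays hold the internal GMM parameters; grabCut updates them in place, which is what makes reusing custom foreground/background models across calls possible in principle.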

opencv floor detection by segmentation

大兔子大兔子 submitted on 2019-12-03 20:08:16
I'm working on a way to detect the floor in an image. I'm trying to accomplish this by reducing the image to areas of color and then assuming that the largest area is the floor. (We get to make some pretty extensive assumptions about the environment the robot will operate in.) What I'm looking for is recommendations on algorithms suited to this problem. Any help would be greatly appreciated. Edit: specifically, I am looking for an image segmentation algorithm that can reliably extract one area. Everything I've tried (mainly PyrSegmentation) seems to work by reducing the
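The "largest color area" heuristic can be sketched very simply: quantize colors coarsely and take the most frequent quantized color as the floor candidate. This NumPy toy (function name and test image are ours) ignores spatial connectivity, which a real implementation would add:

```python
import numpy as np

def largest_color_region(img, levels=4):
    """Quantize each RGB channel into `levels` bins, then return the boolean
    mask of the most frequent quantized color -- the 'floor' candidate."""
    q = (img // (256 // levels)).astype(np.int32)            # coarse quantization
    keys = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    vals, counts = np.unique(keys, return_counts=True)
    floor_key = vals[np.argmax(counts)]
    return keys == floor_key

img = np.zeros((10, 10, 3), dtype=np.uint8)
img[4:, :] = (120, 100, 80)    # large bottom region: floor candidate
img[:4, :] = (200, 200, 200)   # smaller top region
mask = largest_color_region(img)
```

In practice a mean-shift or watershed segmentation followed by picking the largest (or lowest-in-frame) segment tends to be more robust than raw color counting.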

How to test accuracy of segmentation algorithm?

ε祈祈猫儿з submitted on 2019-12-03 19:53:59
Question: I am dealing with an image classification problem. Before classification, the images should be segmented. I have tried several methods. My question is: how can I test the accuracy of segmentation? I plan to compare the final binary image with the correct binary image based on pixel differences in order to get a success rate. Is there a more efficient way to compare the edges of two binary images, instead of this? Answer 1: Measuring the quality of image segmentation is a topic well studied in the computer vision
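The standard region-overlap metrics the answer is heading toward are a few lines each in NumPy. A sketch of pixel accuracy, IoU (Jaccard), and Dice on binary masks (our helper names; the tiny masks are illustrative):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels where prediction and ground truth agree."""
    return float(np.mean(pred == gt))

def iou(pred, gt):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return float(2 * inter / total) if total else 1.0

gt   = np.array([[1, 1, 0, 0]], dtype=bool)
pred = np.array([[1, 0, 0, 0]], dtype=bool)
```

Pixel accuracy can look flattering when the foreground is small, which is why overlap measures like IoU/Dice (or boundary metrics such as the Hausdorff distance) are usually preferred for segmentation.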

Vehicle segmentation and tracking

六月ゝ 毕业季﹏ submitted on 2019-12-03 16:26:43
I've been working on a project for some time to detect and track (moving) vehicles in video captured from UAVs. Currently I am using an SVM trained on bag-of-features representations of local features extracted from vehicle and background images. I then use a sliding-window detection approach to try to localise vehicles in the images, which I would then like to track. The problem is that this approach is far too slow, and my detector isn't as reliable as I would like, so I'm getting quite a few false positives. So I have been considering attempting to segment the cars from the background
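For moving vehicles, a cheap way to cut down the sliding-window search space is motion segmentation by frame differencing: only run the SVM on regions that moved. A minimal NumPy sketch (assumes the frames are already registered/stabilized, which matters for UAV footage; names and threshold are ours):

```python
import numpy as np

def motion_mask(prev_gray, cur_gray, thresh=25):
    """Segment moving objects by absolute frame differencing: a cheap
    pre-filter so the expensive detector only scans changed regions."""
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > thresh

prev = np.zeros((8, 8), dtype=np.uint8)
cur = prev.copy()
cur[2:5, 2:5] = 200          # a "vehicle" moved into this 3x3 block
mask = motion_mask(prev, cur)
```

A running-average or mixture-of-Gaussians background model (e.g. OpenCV's BackgroundSubtractorMOG2) is the more robust version of the same idea.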

How to load Image Masks (Labels) for Image Segmentation in Keras

心不动则不痛 submitted on 2019-12-03 15:32:29
I am using TensorFlow as a backend to Keras, and I am trying to understand how to bring in my labels for image segmentation training. I am using the LFW Parts Dataset, which has both the ground-truth images and the ground-truth masks (1500 training images). As I understand the process, during training I load both the (X) image and the (Y) mask image, doing this in batches to meet my needs. Now my question is: is it sufficient to just load them both (image and mask image) as NumPy arrays (N, N, 3), or do I need to process/reshape the mask image in some way? Effectively, the mask/labels
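Typically the mask is not kept as an RGB array: each pixel is mapped to an integer class index and then (for a softmax/categorical-crossentropy head) one-hot encoded to (H, W, num_classes). A NumPy sketch of the conversion step, assuming the mask has already been mapped to integer labels:

```python
import numpy as np

def mask_to_onehot(mask, num_classes):
    """Convert an (H, W) integer label mask into an (H, W, num_classes)
    one-hot array -- the target shape a per-pixel softmax head expects."""
    return np.eye(num_classes, dtype=np.float32)[mask]

mask = np.array([[0, 1],
                 [2, 1]])      # e.g. 0=background, 1=hair, 2=skin
onehot = mask_to_onehot(mask, 3)
```

With Keras one can instead keep the mask as (H, W, 1) integer labels and use sparse_categorical_crossentropy, which skips the one-hot step.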

Why does one not use IOU for training?

ぃ、小莉子 submitted on 2019-12-03 13:52:12
When people try to solve the task of semantic segmentation with CNNs, they usually use a softmax cross-entropy loss during training (see Fully conv. - Long). But when it comes to comparing the performance of different approaches, measures like intersection-over-union are reported. My question is: why don't people train directly on the measure they want to optimize? It seems odd to me to train on one measure but evaluate on another for benchmarks. I can see that IoU has problems for training samples where the class is not present (union=0 and intersection=0 => division
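One common workaround is a "soft" IoU surrogate: replace the hard predicted labels with the softmax probabilities so the measure becomes differentiable, and add a small epsilon so the empty-class case (union = 0) is well defined. A NumPy sketch of the idea (the formulation is a standard soft-Jaccard loss, not from the question):

```python
import numpy as np

def soft_iou_loss(probs, onehot, eps=1e-6):
    """Differentiable IoU surrogate: intersection and union are computed on
    predicted probabilities; eps keeps the empty-class case (union=0) finite."""
    inter = np.sum(probs * onehot)
    union = np.sum(probs) + np.sum(onehot) - inter
    return 1.0 - (inter + eps) / (union + eps)

onehot = np.array([1.0, 1.0, 0.0, 0.0])   # ground truth for one class
good = np.array([0.9, 0.8, 0.1, 0.1])     # confident, mostly correct
bad  = np.array([0.1, 0.2, 0.9, 0.9])     # confident, mostly wrong
```

Cross-entropy remains popular because it gives smooth per-pixel gradients everywhere, while IoU-style losses are non-decomposable over pixels; in practice many papers combine both.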

Spectral Clustering, Image Segmentation and Eigenvectors

百般思念 submitted on 2019-12-03 13:51:23
Question: Based on the book Computer Vision: A Modern Approach, page 425, I attempted to use eigenvectors for image segmentation. http://dl.dropbox.com/u/1570604/tmp/comp-vis-modern-segment.pdf The author mentions that image pixel affinities can be captured in a matrix A. Then we can maximize the product w^T A w, where the w's are weights. After some algebra one obtains Aw = \lambda w, so finding w amounts to finding eigenvectors. Then finding the best cluster means taking the eigenvector with the largest eigenvalue; the values
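The Aw = λw step can be demonstrated on a tiny 1-D "image" in NumPy: build the affinity matrix from pixel intensities, take the eigenvector of the largest eigenvalue, and threshold its entries into cluster memberships. The intensities, sigma, and thresholding rule below are illustrative choices, not from the book:

```python
import numpy as np

# Tiny 1-D "image": three dark pixels and one bright pixel.
intensity = np.array([0.10, 0.11, 0.12, 0.90])
sigma = 0.2

# Affinity A_ij = exp(-(I_i - I_j)^2 / (2 sigma^2)): similar pixels -> high affinity.
A = np.exp(-np.subtract.outer(intensity, intensity) ** 2 / (2 * sigma ** 2))

# Maximizing w^T A w subject to ||w|| = 1 gives Aw = lambda w; the eigenvector
# of the LARGEST eigenvalue scores membership in the dominant cluster.
eigvals, eigvecs = np.linalg.eigh(A)   # eigh: ascending eigenvalues, A symmetric
w = eigvecs[:, -1]                     # top eigenvector
if w.sum() < 0:                        # fix the arbitrary sign of eigenvectors
    w = -w
cluster = w > w.mean()                 # threshold weights into two groups
```

The top eigenvector has large weights on the three mutually similar dark pixels and a near-zero weight on the bright outlier, so thresholding recovers the dominant cluster; normalized-cuts methods refine this by working with a normalized affinity (graph Laplacian) instead of raw A.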