image-segmentation

TypeError: Value passed to parameter 'input' has DataType float64 not in list of allowed values: float16, bfloat16, float32

Submitted by 我的梦境 on 2019-12-11 03:38:17
Question: I have read many questions similar to mine, but all of them differ from my case.

    for itr in xrange(MAX_ITERATION):
        train_images, train_annotations = train_dataset_reader.next_batch(batch_size)
        # train_images = tf.image.convert_image_dtype(train_images, np.float32)
        # train_annotations = tf.image.convert_image_dtype(train_annotations, np.float32)
        # print(train_images_.get_shape(), train_annotations_.get_shape())
        # train_images = tf.cast(train_images, tf.float32)
        # train_images = tf.to_float(train…
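This error usually means the NumPy batch is still float64 when it reaches a TensorFlow op. A minimal sketch of the usual fix, casting on the NumPy side before feeding the graph (the batch here is simulated with random data; only the variable name mirrors the question):

```python
import numpy as np

# np.random.rand, like many image readers, yields float64 by default;
# TensorFlow conv ops only accept float16/bfloat16/float32.
train_images = np.random.rand(2, 224, 224, 3)

# Cast once, on the NumPy side, before feeding it to the graph.
train_images = train_images.astype(np.float32)
```

Casting the array itself also avoids the trap of calling tf.cast inside the training loop, which adds a new op to the graph on every iteration.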

How to get region properties from image that is already labeled in OpenCV?

Submitted by 不羁的心 on 2019-12-11 03:08:51
Question: I am labeling images using the watershed algorithm in OpenCV (similar to this tutorial: https://docs.opencv.org/3.4/d3/db4/tutorial_py_watershed.html), so that at the end I obtain an array of labels where each region has an integer value corresponding to its label. Now I want to obtain the coordinates of the bounding boxes and the areas of each region. I know this is easily done with skimage.measure.regionprops(), but for execution-speed reasons I would like to achieve this without…
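In OpenCV itself, cv2.connectedComponentsWithStats returns bounding boxes and areas directly, but it re-labels from a binary image. When the label array already exists (e.g. from watershed), the same stats can be pulled out with plain NumPy; a sketch, with a made-up 5x5 label array for illustration:

```python
import numpy as np

def label_stats(labels):
    """Bounding box (x, y, w, h) and pixel area for each nonzero label."""
    stats = {}
    for lbl in np.unique(labels):
        if lbl == 0:                      # treat 0 as background
            continue
        ys, xs = np.nonzero(labels == lbl)
        stats[lbl] = {
            "bbox": (xs.min(), ys.min(),
                     xs.max() - xs.min() + 1, ys.max() - ys.min() + 1),
            "area": len(xs),
        }
    return stats

lab = np.zeros((5, 5), dtype=np.int32)
lab[1:3, 1:4] = 2                         # one 3x2 region labeled 2
props = label_stats(lab)
```

This is O(num_labels * image_size); for many labels, a single pass with np.bincount over areas, or OpenCV's C++ path, will be faster.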

Transfer Learning From a U-Net for Image Segmentation [Keras]

Submitted by 扶醉桌前 on 2019-12-11 00:16:27
Question: Just getting started with ConvNets and trying out an image segmentation problem. I got my hands on 24 images and their masks from the DSTL satellite image feature detection competition (https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection/data). I thought I'd try to follow the tips at https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html, but I'm stuck. I downloaded the pre-trained weights for ZF_UNET_224, the 2nd-place winners' approach…

Extracting image region within boundary

Submitted by 廉价感情. on 2019-12-10 17:19:37
Question: I have to do a project using 2D CT images, segmenting the liver and tumor in them using MATLAB only. Initially I have to segment the liver region alone. I use region growing for liver segmentation; it takes a seed point as input. The output is an image with a boundary around the liver region. Now I need only the region that is enclosed by that boundary. My program has a main program and a regionGrowing.m function. As I'm a new user I am not allowed to post images; if you need the images I will mail them to you. Kindly…
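In MATLAB itself, imfill(boundaryImage, 'holes') from the Image Processing Toolbox fills the enclosed region directly. For illustration, here is the same idea as a Python/NumPy sketch: flood-fill the background starting from the image border, and everything the fill cannot reach is inside the boundary. The 7x7 test boundary is made up:

```python
import numpy as np
from collections import deque

def region_inside(boundary):
    """Given a binary closed-boundary image, return a mask of the
    enclosed region (boundary pixels included)."""
    h, w = boundary.shape
    outside = np.zeros((h, w), dtype=bool)
    q = deque()
    # Seed the flood fill from every border pixel that is not boundary.
    for y in range(h):
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and not boundary[y, x]:
                outside[y, x] = True
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and not outside[ny, nx] and not boundary[ny, nx]:
                outside[ny, nx] = True
                q.append((ny, nx))
    return ~outside   # inside = everything unreachable from the border

bnd = np.zeros((7, 7), dtype=bool)
bnd[1, 1:6] = bnd[5, 1:6] = bnd[1:6, 1] = bnd[1:6, 5] = True  # square boundary
mask = region_inside(bnd)
```

Multiplying the original image by the mask (im .* uint8(mask) in MATLAB terms) then leaves only the liver region.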

How can I access the JPEG COM segment in iOS?

Submitted by 痴心易碎 on 2019-12-10 16:22:26
Question: JPEG has many marker segment levels. I want to read and write the comment marker segment, COM (read/write). It requires low-level programming. How can I access it in iOS? References: http://help.accusoft.com/ImageGear/v18.1/Mac/IGDLL-10-05.html https://www.npmjs.com/package/jpeg-com-segment http://www.sno.phy.queensu.ca/~phil/exiftool/ Answer 1: iOS allows you to open files. Read the JPEG file. Search the stream for the COM marker. Read the length. Read the data. It's basic [objective] C…
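To flesh out the answer: a JPEG is SOI (0xFFD8) followed by marker segments, each a 2-byte marker plus a 2-byte big-endian length that includes the length field itself; the COM marker is 0xFFFE. Below is a simplified scanner sketched in Python (the same byte-level logic ports directly to Objective-C). It stops at SOS, before the entropy-coded data, and the "fake" bytes are a hand-built fragment rather than a real file:

```python
import struct

SOI, EOI, SOS, COM = 0xFFD8, 0xFFD9, 0xFFDA, 0xFFFE

def read_jpeg_comments(data):
    """Return the payload of every COM segment in a JPEG byte string.
    Simplified: stops at SOS, where entropy-coded data begins."""
    assert data[:2] == struct.pack(">H", SOI), "not a JPEG (missing SOI)"
    comments, pos = [], 2
    while pos + 2 <= len(data):
        marker = struct.unpack(">H", data[pos:pos + 2])[0]
        if marker in (EOI, SOS):
            break
        length = struct.unpack(">H", data[pos + 2:pos + 4])[0]  # incl. itself
        if marker == COM:
            comments.append(data[pos + 4:pos + 2 + length])
        pos += 2 + length
    return comments

# Minimal in-memory JPEG fragment: SOI + COM("hello") + EOI
fake = b"\xff\xd8\xff\xfe" + struct.pack(">H", 2 + 5) + b"hello" + b"\xff\xd9"
```

Writing a comment is the inverse: splice a new 0xFFFE segment in right after SOI and write the bytes back out.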

OpenCV: Using a Trimap image

Submitted by 别说谁变了你拦得住时间么 on 2019-12-09 21:47:34
Question: I found this dog and cat image dataset: the Oxford-IIIT Pet Dataset. Each image has a pixel-level foreground/background segmentation (trimap) image. Searching the internet, I saw that a trimap is an image with three colors (one for the background, one for the foreground, and one for the unclassified region), but here the image is all black. Is that a mistake or is it correct? But above all I want to know whether and how you can use it to obtain, given a normal image, a new image with the cat or dog on a…
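The trimap is not a mistake: the Oxford-IIIT trimap PNGs store small class ids (1 = foreground pet, 2 = background, 3 = unclassified border) rather than colors, and values 1-3 render as near-black on a 0-255 display. A sketch of rescaling the trimap for viewing and using it to cut the pet out of a photo (the tiny arrays are stand-ins for real files):

```python
import numpy as np

# Stand-in 3x3 trimap: ids 1 (pet), 2 (background), 3 (border).
trimap = np.array([[2, 2, 2],
                   [2, 1, 3],
                   [2, 1, 2]], dtype=np.uint8)

visible = trimap * 85                    # stretch ids to 85/170/255 for viewing
fg_mask = trimap == 1                    # boolean pet mask

image = np.full((3, 3, 3), 200, dtype=np.uint8)      # stand-in RGB photo
cutout = np.where(fg_mask[..., None], image, 0)      # pet on black background
```

For a soft edge, the id-3 border band is where matting methods estimate fractional alpha instead of the hard 0/1 mask used here.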

A few questions about color segmentation with L*a*b*

Submitted by ≯℡__Kan透↙ on 2019-12-09 21:39:37
Question: I am trying to identify the onset of red/yellow color above the inner cone in the following image with color segmentation. To do this, I implemented L*a*b* color segmentation:

    clear all
    close all
    % Plot image
    flame = imread('flamePic.JPG');
    flame = imrotate(flame, 270);
    figure(1), imshow(flame), title('Flame');
    % Find color of small region
    load regioncoordinates;
    nColors = 6;
    sample_regions = false([size(flame,1) size(flame,2) nColors]);
    for count = 1:nColors
        sample_regions(:,:,count) = roipoly…
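The core of this approach is classifying each pixel by its nearest sampled (a*, b*) marker. A NumPy sketch of that step, assuming the image has already been converted to L*a*b* (via MATLAB's rgb2lab, or cv2.cvtColor(img, cv2.COLOR_RGB2Lab) in OpenCV); the marker values below are invented for illustration:

```python
import numpy as np

def classify_ab(ab_image, color_markers):
    """Assign each pixel to the nearest (a*, b*) marker by Euclidean distance.
    ab_image: (H, W, 2) array; color_markers: (nColors, 2) array."""
    diff = ab_image[:, :, None, :] - color_markers[None, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)   # (H, W, nColors)
    return dist.argmin(axis=-1)            # (H, W) label image

markers = np.array([[20.0, 40.0],          # red-ish   (illustrative values)
                    [10.0, 60.0]])         # yellow-ish
ab = np.array([[[19.0, 41.0], [11.0, 58.0]]])   # a 1x2-pixel (a*, b*) image
labels = classify_ab(ab, markers)
```

Working in (a*, b*) and ignoring L* makes the classification largely insensitive to brightness, which is why L*a*b* is preferred over RGB here.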

Per pixel softmax for fully convolutional network

Submitted by 血红的双手。 on 2019-12-09 15:34:38
Question: I'm trying to implement something like a fully convolutional network, where the last convolution layer uses a 1x1 filter and outputs a 'score' tensor. The score tensor has shape [batch, height, width, num_classes]. My question is: what function in TensorFlow can apply the softmax operation to each pixel, independently of the other pixels? The tf.nn.softmax op does not seem intended for this purpose. If no such op is available, I guess I have to write one myself. Thanks! UPDATE: if I do have to implement…
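tf.nn.softmax in fact already does this: it normalizes over the last axis, so applied to a [batch, height, width, num_classes] tensor it computes an independent softmax per pixel. A NumPy equivalent to make the semantics concrete:

```python
import numpy as np

def pixelwise_softmax(scores):
    """Softmax over the class (last) axis, independently per pixel.
    This is what tf.nn.softmax does on a [B, H, W, C] tensor."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable exp
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((1, 2, 2, 3))        # [batch, H, W, num_classes], all equal
probs = pixelwise_softmax(scores)      # uniform 1/3 per class at every pixel
```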

Detecting particular objects in an image, i.e. image segmentation with OpenCV

Submitted by 强颜欢笑 on 2019-12-09 13:20:43
Question: I have to select any particular object visible in my image on iPhone. Basically, my project is to segment image objects based on my touch. The method I am following is to first detect the contours in the image and then select a particular contour based on the finger touch. Is there any other method that would be more robust, given that I have to run it on video frames? I am using OpenCV and iPhone for the project. Please help if there is any other idea that has been implemented or is feasible…
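One common way to map a touch to a contour is a point-in-polygon test over the detected contours; OpenCV exposes this as cv2.pointPolygonTest(contour, touch_pt, False) >= 0. For illustration, the underlying ray-casting test in pure Python (the square contour is made up):

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt inside the closed polygon (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray going right from pt.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # toy "contour"
```

For video, running this per frame on freshly detected contours is cheap; tracking the selected region across frames (e.g. with GrabCut re-seeded from the previous mask) tends to be more robust than re-selecting by touch each frame.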

Why does one not use IOU for training?

Submitted by 99封情书 on 2019-12-09 11:12:19
Question: When people try to solve the task of semantic segmentation with CNNs, they usually use a softmax cross-entropy loss during training (see Fully Convolutional Networks, Long et al.). But when it comes to comparing the performance of different approaches, measures like intersection-over-union (IoU) are reported. My question is: why don't people train directly on the measure they want to optimize? It seems odd to train on one measure but evaluate on another for benchmarks. I can see that the IoU has…
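The short answer is that IoU counts hard argmax decisions, so it has zero gradient almost everywhere, while cross-entropy is smooth and decomposes per pixel. One can, however, train on a differentiable relaxation that replaces the counts with sums of predicted probabilities; a minimal soft-IoU sketch for the binary case, with made-up values:

```python
import numpy as np

def soft_iou_loss(probs, target):
    """Differentiable IoU surrogate: intersection and union are computed
    from probabilities instead of hard 0/1 counts."""
    inter = (probs * target).sum()
    union = (probs + target - probs * target).sum()
    return 1.0 - inter / union

probs  = np.array([1.0, 1.0, 0.0, 0.0])   # predicted foreground probabilities
target = np.array([1.0, 0.0, 0.0, 0.0])   # ground-truth mask
loss = soft_iou_loss(probs, target)       # IoU = 1/2, so loss = 0.5
```

In practice such losses (and relatives like the Lovász-softmax loss) are often combined with cross-entropy rather than used alone, since cross-entropy gives better-behaved gradients early in training.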