image-segmentation

How to use OpenCV to remove non text areas from a business card? [closed]

∥☆過路亽.° Submitted on 2019-12-28 12:46:30
Question: It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 7 years ago. My target is to remove any non-text area from a scanned business card image, but I don't know the steps to perform that using OpenCV. I have followed these steps but don't know whether this is the right approach or not; also I

Working with erosion and dilatation

倾然丶 夕夏残阳落幕 Submitted on 2019-12-25 18:47:16
Question: From the previous link: Working with a specific region generated by BoundingBox. The following code is based on it:

se = strel('disk',9);
p_mask = imerode(Ic(BB,1).Image, se);
k_mask = imdilate(p_mask, se);
Ipointer = I2 .* repmat(k_mask, [1 1 3]);
figure, imshow(Ipointer)
Mch = Ic(BB,1).Image - k_mask;
Mbch = bwareaopen(Mch, 3000);
Ichaplet = I2 .* repmat(Mbch, [1 1 3]);
figure, imshow(Ichaplet)

And so, I do not understand it. Answer 1: Google is your friend. If you don't know what a function does, google matlab

Python - Perceptually detect the same object on two images - and selecting them

烂漫一生 Submitted on 2019-12-24 23:33:55
Question: I have a challenge and am trying to find any information, tips, or examples that would help me do it. First I searched Google many times, and this forum with different queries, but I did not find this same task or algorithm. I tried many commercial programs for comparing images, to find different and common parts, but none of them does it well. I have a website with many different boxes, modules, elements, etc. First I take a screenshot and save the image as web1.png. Next I change some boxes,
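A simple starting point for comparing two screenshots of the same page is a pixel-wise difference followed by a bounding box around the changed region. This sketch uses synthetic arrays in place of the real web1.png/web2.png captures; real screenshots usually also need a small blur or tolerance threshold to absorb anti-aliasing noise:

```python
import numpy as np

# Two synthetic "screenshots": identical except one changed module
web1 = np.zeros((60, 80), dtype=np.uint8)
web2 = web1.copy()
web2[10:20, 30:50] = 200          # a box that changed between captures

# Pixel-wise difference; any nonzero pixel marks a changed region
diff = np.abs(web1.astype(int) - web2.astype(int)) > 0
rows, cols = np.nonzero(diff)
bbox = (rows.min(), cols.min(), rows.max(), cols.max())  # (top, left, bottom, right)
```

For perceptual rather than exact matching, a structural-similarity map (e.g. SSIM) over sliding windows is the usual next step.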

Calculator 7 Segment Display w/ Width [closed]

半腔热情 Submitted on 2019-12-24 14:43:05
Question: Closed. This question is off-topic. It is not currently accepting answers. Want to improve this question? Update the question so it's on-topic for Stack Overflow. Closed 6 years ago. I need to write a program to make a seven-segment display. It will work like this: def numbers(number, width): # Code to make number. A typical output could look something like this: numbers(100, 2), numbers(24, 1), numbers(1234567890, 1) [the seven-segment ASCII art is flattened in the source]
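The usual approach is a lookup table mapping each digit to its seven segments (top, top-left, top-right, middle, bottom-left, bottom-right, bottom), then rendering each digit as a list of text rows and zipping the digits side by side. A sketch, assuming the width parameter stretches both the horizontal bars and the vertical segment height:

```python
# Segment on/off flags per digit: (top, tl, tr, mid, bl, br, bottom)
SEGMENTS = {
    '0': (1,1,1,0,1,1,1), '1': (0,0,1,0,0,1,0), '2': (1,0,1,1,1,0,1),
    '3': (1,0,1,1,0,1,1), '4': (0,1,1,1,0,1,0), '5': (1,1,0,1,0,1,1),
    '6': (1,1,0,1,1,1,1), '7': (1,0,1,0,0,1,0), '8': (1,1,1,1,1,1,1),
    '9': (1,1,1,1,0,1,1),
}

def digit_lines(d, width):
    """Render one digit as a list of equal-width text rows."""
    top, tl, tr, mid, bl, br, bot = SEGMENTS[d]
    h = lambda on: ' ' + ('-' if on else ' ') * width + ' '
    v = lambda l, r: ('|' if l else ' ') + ' ' * width + ('|' if r else ' ')
    return ([h(top)] + [v(tl, tr)] * width +
            [h(mid)] + [v(bl, br)] * width + [h(bot)])

def numbers(number, width):
    cols = [digit_lines(d, width) for d in str(number)]
    for row in zip(*cols):          # print corresponding rows side by side
        print(' '.join(row))
```

For example, numbers(24, 1) prints a 5-row display with the 2 and 4 side by side.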

tensorflow mean iou for just foreground class for binary semantic segmentation

不想你离开。 Submitted on 2019-12-24 14:13:34
Question: tensorflow.metrics.mean_iou() currently averages over the IoU of each class. I want to get the IoU of only the foreground for my binary semantic segmentation problem. I tried using weights of tf.constant([0.0, 1.0]) and then tf.constant([0.01, 0.99]), but the mean_iou still looks overflowed, as follows: (500, 1024, 1024, 1) 119/5000 [..............................] - ETA: 4536s - loss: 0.3897 - mean_iou: -789716217654962048.0000 - acc: 0.9335. I am using this as a metric for Keras fit_generator
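Independent of the tf.metrics weighting issue, the foreground-only IoU is straightforward to compute directly from the two binary masks, which is a useful sanity check against the metric's output. A numpy sketch (the convention of returning 1.0 when both masks are empty is an assumption, not part of the question):

```python
import numpy as np

def foreground_iou(y_true, y_pred):
    """IoU of class 1 only, ignoring background (class 0)."""
    t = (np.asarray(y_true) == 1)
    p = (np.asarray(y_pred) == 1)
    inter = np.logical_and(t, p).sum()
    union = np.logical_or(t, p).sum()
    return inter / union if union else 1.0  # both empty: define IoU as 1

y_true = np.array([[0, 1, 1], [0, 0, 1]])
y_pred = np.array([[0, 1, 0], [0, 1, 1]])
# intersection = 2 pixels, union = 4 pixels, so IoU = 0.5
```

Wrapping this in a Keras-compatible metric requires the equivalent TensorFlow ops, but the arithmetic is the same.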

Finding bright spots in a image using opencv

℡╲_俬逩灬. Submitted on 2019-12-24 08:46:35
Question: I want to find the bright spots in the above image and tag them using some symbol. For this I tried the Hough Circle Transform algorithm that OpenCV provides, but it gives some kind of assertion error when I run the code. I also tried the Canny edge detection algorithm, also provided in OpenCV, but it too gives an assertion error. I would like to know if there is some method to get this done, or how I can prevent those error messages. I am new to

How to count spring coil turns?

心已入冬 Submitted on 2019-12-24 00:55:38
Question: In reference to: How to detect and count a spiral's turns. I am not able to get the count even with the pixel-based calculation. Given the attached image, how do I start counting the turns? I tried FindContours(), but it doesn't quite get the turns segregated. Also, with matchshape() I have the similarity factor, but only for the whole coil. So I tried the following for the turn count: public static int GetSpringTurnCount() { if (null == m_imageROIed) return -1; int imageWidth = m_imageROIed
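One pixel-based approach is to sample an intensity profile along a line that crosses the coil perpendicular to its axis: each turn then appears as one bright band (the wire) between dark gaps, and counting rising edges counts the turns. A numpy sketch with a synthetic profile; this assumes the scanline crosses each turn exactly once, which a real image may violate where turns touch:

```python
import numpy as np

# 1-D intensity profile sampled along a line crossing the coil:
# each turn shows up as one bright band between dark gaps
profile = np.array([10, 10, 200, 210, 15, 12, 220, 205, 9, 11, 215, 208, 10])

binary = profile > 100                           # bright = wire, dark = gap
# A rising edge (dark -> bright) marks the start of one wire cross-section
rising_edges = np.count_nonzero(~binary[:-1] & binary[1:])
turns = rising_edges                             # one crossing per turn
```

The threshold (100 here) would need to come from the actual image histogram, e.g. via Otsu.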

How to find IoU from segmentation masks?

别说谁变了你拦得住时间么 Submitted on 2019-12-24 00:38:02
Question: I am doing an image segmentation task using a dataset that only has ground truths, with no bounding boxes or polygons. I have 2 classes (ignoring 0 for background), and the outputs and ground-truth labels are in arrays like:

    Predicted     Labels
    0|0|0|1|2     0|0|0|1|2
    0|2|1|0|0     0|2|1|0|0
    0|0|1|1|1     0|0|1|1|1
    0|0|0|0|1     0|0|0|0|1

How do I calculate IoU from these? PS: I am using Python 3 with the PyTorch API. Answer 1: So I just found out that jaccard_similarity_score is regarded as IoU. So the
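For label arrays like these, per-class IoU is just intersection over union of the two boolean masks for each class, averaged over the non-background classes if a single score is wanted. A numpy sketch using the ground-truth grid from the question, with one prediction pixel flipped so the scores are non-trivial:

```python
import numpy as np

label = np.array([[0, 0, 0, 1, 2],
                  [0, 2, 1, 0, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 0, 0, 1]])
pred = label.copy()
pred[0, 3] = 0   # one class-1 pixel missed by the prediction (illustrative)

def class_iou(pred, label, cls):
    p, t = (pred == cls), (label == cls)
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum()) / union if union else 1.0

ious = [class_iou(pred, label, c) for c in (1, 2)]  # skip background class 0
```

The same masks work on PyTorch tensors with (pred == cls) and torch.logical_and/or. Note that sklearn's old jaccard_similarity_score did not compute this per-class IoU, which is why relying on it for segmentation was a known pitfall.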

Calculate and plot segmentation mask pixels

无人久伴 Submitted on 2019-12-23 22:36:31
Question: I have the following image. Below is a segmentation mask within this image. From the image above, I attempted to calculate the non-zero pixel coordinates; that is, I tried to get all of the pixels of the actual clouds that are in the mask above. When I plot these non-zero pixels, the result is this. My question is: why are the plotted pixels in the image above not the same as in the segmentation mask, and how do I fix this? I want to get the pixels of the clouds from the segmentation mask
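A very common cause of this symptom is the axis order: np.nonzero returns (rows, cols), while plotting functions expect (x, y), i.e. (cols, rows), so forgetting to swap them mirrors the points across the diagonal. A minimal sketch of the correct mapping on a tiny synthetic mask (the actual cloud mask is not available here):

```python
import numpy as np

mask = np.zeros((5, 8), dtype=np.uint8)
mask[1, 6] = 1   # one "cloud" pixel at row 1, column 6

rows, cols = np.nonzero(mask)   # note the order: (rows, cols), not (x, y)
# For image-style plotting, x is the column and y is the row:
xs, ys = cols, rows             # swapping these mirrors the plotted mask
```

With matplotlib, plt.scatter(xs, ys) plus an inverted y-axis (as imshow uses) then reproduces the mask's layout exactly.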

Image segmentation - custom loss function in Keras

℡╲_俬逩灬. Submitted on 2019-12-23 19:50:15
Question: I am using a U-Net implemented in Keras (https://arxiv.org/pdf/1505.04597.pdf) to segment cell organelles in microscopy images. In order for my network to recognize multiple single objects that are separated by only 1 pixel, I want to use weight maps for each label image (the formula is given in the publication). As far as I know, I have to create my own custom loss function (in my case, crossentropy) to make use of these weight maps. However, the custom loss function only takes two parameters. How
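The arithmetic the U-Net paper calls for is ordinary per-pixel crossentropy scaled by the precomputed weight map before averaging. A numpy sketch of that computation (the usual Keras workaround for the two-parameter limit is to concatenate the weight map onto y_true, or close over it with a loss-factory function; that plumbing is omitted here):

```python
import numpy as np

def weighted_binary_crossentropy(y_true, y_pred, weight_map, eps=1e-7):
    """Per-pixel binary crossentropy scaled by a precomputed weight map."""
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    ce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return np.mean(weight_map * ce)

y_true = np.array([[1.0, 0.0], [1.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.8, 0.6]])
w      = np.array([[1.0, 1.0], [5.0, 1.0]])  # up-weight a border pixel
loss = weighted_binary_crossentropy(y_true, y_pred, w)
```

In Keras the same expression is written with keras.backend ops (K.clip, K.log, K.mean) inside a function returned by a factory that received the weight tensor, so the framework still sees the required (y_true, y_pred) signature.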