image-segmentation

Extracting keyframes | Python | OpenCV

喜欢而已 submitted on 2019-12-09 02:05:28

Question: I am currently working on keyframe extraction from videos. Code:

while success:
    success, currentFrame = vidcap.read()
    isDuplicate = False
    limit = count if count <= 10 else (count - 10)
    for img in xrange(limit, count):
        previusFrame = cv2.imread("%sframe-%d.png" % (outputDir, img))
        try:
            difference = cv2.subtract(currentFrame, previusFrame)
        except:
            pass

This gives me huge amounts of frames. Expected output: calculate the pixel difference between frames and then compare it with a threshold value and ...
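A minimal sketch of the thresholding step described above, comparing each frame only against the last kept keyframe instead of re-reading every saved file; the input path, the mean-difference criterion, and the threshold value are assumptions, not taken from the question.

import cv2
import numpy as np

THRESHOLD = 30.0  # assumed value; needs tuning per video

vidcap = cv2.VideoCapture("input.mp4")  # hypothetical input path
success, prev_keyframe = vidcap.read()
keyframes = [prev_keyframe] if success else []

while success:
    success, frame = vidcap.read()
    if not success:
        break
    # Mean absolute pixel difference against the last kept keyframe
    diff = cv2.absdiff(frame, prev_keyframe)
    if np.mean(diff) > THRESHOLD:
        keyframes.append(frame)
        prev_keyframe = frame

print("kept %d keyframes" % len(keyframes))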

Segment pixels in an image based on colour (Matlab)

拜拜、爱过 submitted on 2019-12-08 20:21:41

Question: I'm trying to segment an image containing multiple Lego bricks using colour information only (for now). The aim is to find Lego bricks that are, for example, green. I have tried k-means clustering, but the number of differently coloured bricks present in a given image varies. I have also tried the following example from the Matlab website, but that wasn't successful. Is there a simple way of segmenting based on colour? An example image for the problem:

Answer 1: So RGB or LAB colorspaces aren't really ...
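Because the answer is cut off here, this is only a minimal sketch of the usual colour-thresholding approach, written in Python/OpenCV rather than Matlab; the input file name and the HSV range used for green are assumptions and would need tuning for real Lego bricks.

import cv2
import numpy as np

img = cv2.imread("lego.png")  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Approximate HSV bounds for green; the exact values are an assumption
lower_green = np.array([40, 60, 60])
upper_green = np.array([85, 255, 255])

mask = cv2.inRange(hsv, lower_green, upper_green)
green_only = cv2.bitwise_and(img, img, mask=mask)

cv2.imwrite("green_bricks.png", green_only)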

Extract Characters using convex Hull coordinates - opencv - python

丶灬走出姿态 submitted on 2019-12-08 19:39:28

Question: I have character images like this:

Using the following code I can get the contours and the convex hull, and then draw the hull for each character.

import cv2

img = cv2.imread('test.png', -1)
ret, threshed_img = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 127, 255, cv2.THRESH_BINARY)
image, contours, hier = cv2.findContours(threshed_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    # get convex hull
    hull = cv2.convexHull(cnt)
    cv2.drawContours(img, [hull], -1, (0, 0, 255), 1)
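A minimal sketch of one way to go from the hull coordinates to individual character crops, using the axis-aligned bounding rectangle of each hull; the output file names are assumptions, and the findContours indexing below is only there to cover both the OpenCV 3.x signature used in the question and OpenCV 4.x.

import cv2

img = cv2.imread('test.png', -1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshed_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# [-2:] keeps (contours, hierarchy) regardless of the OpenCV version
contours, hier = cv2.findContours(threshed_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]

for i, cnt in enumerate(contours):
    hull = cv2.convexHull(cnt)
    # Crop the bounding box of the hull as one character
    x, y, w, h = cv2.boundingRect(hull)
    cv2.imwrite('char_%d.png' % i, img[y:y + h, x:x + w])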

How to match histogram of set of medical images?

ε祈祈猫儿з submitted on 2019-12-08 14:11:01

Question: I have medical images of 100 patients (100 stacks of MRIs) and I want to do histogram matching on them using this Matlab function:

B = imhistmatch(A, ref)

How can I choose the reference patient MRI (the reference volume)? And since the number of slices varies between patients, how can I perform the histogram matching? I am hoping someone here can recommend a solution that explains the process in a way I can use, or share some code. Thanks.

Source: https://stackoverflow.com/questions/43741287/how-to-match-histogram
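No answer is included here, so the following is only a minimal sketch of one possible approach in Python with scikit-image; picking the first patient as the reference and using match_histograms in place of Matlab's imhistmatch are assumptions, not something taken from the question.

from skimage.exposure import match_histograms

# volumes: a list of 3-D numpy arrays (slices, rows, cols), one per patient;
# how the MRI data is loaded is left out here.
def match_all_to_reference(volumes, ref_index=0):
    reference = volumes[ref_index]
    # match_histograms only needs equal dimensionality, not equal shape,
    # so differing slice counts between patients are not a problem.
    return [match_histograms(vol, reference) for vol in volumes]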

Unet segmentation model predicts blank image? [duplicate]

大憨熊 submitted on 2019-12-08 14:09:07

Question: This question already has an answer here: U-net low contrast test images, predict output is grey box (1 answer). Closed 10 days ago.

I am using a U-Net architecture for lung segmentation. It shows good training and validation loss, but when I call the predict function and give it one image from the training set as input, it gives me a blank image as output. I don't understand why it does this when it shows good validation accuracy. I'm using Keras.

Answer 1: Accuracy is not a good metric for segmentation, especially for ...
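The answer is truncated, so as a supplement, a minimal sketch of a Dice coefficient metric, which is usually more informative than plain accuracy for this kind of segmentation; it is written for tf.keras, and the smoothing constant is an assumption.

import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1.0):
    # Flatten both masks and measure their overlap; 1.0 means perfect agreement.
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

# Hypothetical usage when compiling the U-Net:
# model.compile(optimizer='adam', loss='binary_crossentropy',
#               metrics=[dice_coefficient])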

IoU for semantic segmentation implementation in python/caffe per class

若如初见. submitted on 2019-12-08 04:48:20

Question: Is there any recommendable implementation of per-class IoU (intersection over union), i.e. per-pixel accuracy (different from the bounding-box case)? I am using Caffe and managed to get the mean IoU, but I am having difficulty computing the IoU per class. I would appreciate it a lot if someone could point out a good implementation in any language.

Answer 1: So far this is the only close semantic segmentation example with multiple pixel labels I have seen: here

Source: https://stackoverflow.com/questions/44041096/iou-for
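A minimal sketch of per-class IoU computed directly in NumPy from predicted and ground-truth label maps; it is framework-agnostic, so it can also be applied to Caffe outputs, and the class count in the usage comment is an assumption.

import numpy as np

def per_class_iou(pred, target, num_classes):
    """pred and target are integer label maps of the same shape."""
    ious = []
    for cls in range(num_classes):
        pred_mask = (pred == cls)
        target_mask = (target == cls)
        intersection = np.logical_and(pred_mask, target_mask).sum()
        union = np.logical_or(pred_mask, target_mask).sum()
        # Classes absent from both prediction and ground truth get NaN
        ious.append(intersection / union if union > 0 else float('nan'))
    return ious

# Hypothetical usage:
# ious = per_class_iou(pred_labels, gt_labels, num_classes=21)
# mean_iou = np.nanmean(ious)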

Sharpen the edges

半世苍凉 submitted on 2019-12-08 04:00:53

Question: I am trying to detect the biggest/largest rectangular shape and draw a bounding box around the detected area. I have tried different ways to detect perfect edges (edges with no holes) for contour detection. I searched on Stack Overflow, and the solutions proposed in "OpenCV sharpen the edges (edges with no holes)" and "Segmentation Edges" did not work with my sample images. I would like to detect the biggest/largest rectangular shape in the following two images: Original Image 1 and Original Image 2. Below is the code ...
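The question's code is cut off here, so the following is only a minimal sketch of one common approach: close the gaps in the edge map with a morphological dilation, then take the largest contour by area and draw its bounding box. The input file name, the Canny thresholds, and the kernel size are assumptions.

import cv2
import numpy as np

img = cv2.imread("original1.png")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Edge map, then dilate to close small holes in the edges
edges = cv2.Canny(blurred, 50, 150)
kernel = np.ones((5, 5), np.uint8)
closed = cv2.dilate(edges, kernel, iterations=2)

# [-2] keeps the contour list regardless of the OpenCV version
contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("largest_rectangle.png", img)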

Remove small chunks of labels in an image

拥有回忆 submitted on 2019-12-08 00:25:47

Question: I am new to MATLAB and to image processing. I am trying to locate a person frame by frame. So far I have labelled the cropped image (cropped using PeopleDetector) like this. Now I could try to locate the exact position of the person, i.e. at which pixel locations the label '1' starts and ends (I know this is not the right logic). All I want is to remove the little chunks of white pixels to the right side of the person. I don't know how to do that; please suggest something.

Answer 1: You can use bwareaopen:

bwareaopen(A, P)

This removes all ...
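The answer is truncated; for reference, the same idea in Python, where scikit-image's remove_small_objects plays the role of Matlab's bwareaopen. The input/output file names and the size threshold of 50 pixels are assumptions.

import numpy as np
from skimage.morphology import remove_small_objects

# mask: a boolean 2-D array where True marks the labelled (white) pixels.
mask = np.load("person_mask.npy")  # hypothetical input

# Drop every connected component smaller than 50 pixels, which removes
# the small white chunks beside the person.
cleaned = remove_small_objects(mask.astype(bool), min_size=50)

np.save("person_mask_cleaned.npy", cleaned)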

Floodfill segmented image in numpy/python

一世执手 submitted on 2019-12-07 09:49:08

Question: I have a numpy array which represents a segmented 2-dimensional matrix from an image. Basically, it's a sparse matrix with a bunch of closed shapes that are the outlines of the segments of the image. What I need to do is colorize the empty pixels within each closed shape with a different color/label in numpy. I know I could do this with floodfill in PIL, but I'm trying to avoid converting the matrix back and forth between numpy and PIL. It would be nice if there was a function in something like ...
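The question trails off here; a minimal sketch of one way to stay in NumPy/SciPy is scipy.ndimage.label, which gives every connected non-outline region its own integer label. Treating nonzero pixels as the outlines and the input file name are assumptions about how the array is encoded.

import numpy as np
from scipy import ndimage

# outlines: 2-D array where nonzero pixels are the closed shape boundaries.
outlines = np.load("segments.npy")  # hypothetical input

# Label every connected region of background (non-outline) pixels.
# Each enclosed interior, plus the outside area, gets its own integer label.
regions, num_regions = ndimage.label(outlines == 0)

print("found %d regions" % num_regions)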

from_logits=True and from_logits=False get different training result for tf.losses.CategoricalCrossentropy for UNet

柔情痞子 submitted on 2019-12-07 03:13:16

Question: I am doing an image semantic segmentation job with a U-Net. If I set a softmax activation for the last layer like this:

...
conv9 = Conv2D(n_classes, (3, 3), padding='same')(conv9)
conv10 = (Activation('softmax'))(conv9)
model = Model(inputs, conv10)
return model
...

and then use

loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)

the training will not converge, even for only one training image. But if I do not set the softmax activation for the last layer, like this:

...
conv9 = ...
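The question is cut off, but the contrast it describes is between the two consistent pairings of final activation and from_logits. A minimal sketch of the logits variant, with the input shape and class count as assumptions:

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_head(x, n_classes, use_softmax):
    # With use_softmax=True the model outputs probabilities, so the loss must
    # use from_logits=False; with use_softmax=False it outputs raw logits,
    # so the loss must use from_logits=True.
    x = layers.Conv2D(n_classes, (3, 3), padding='same')(x)
    if use_softmax:
        x = layers.Activation('softmax')(x)
    return x

inputs = tf.keras.Input(shape=(128, 128, 64))  # hypothetical feature-map shape
outputs = build_head(inputs, n_classes=3, use_softmax=False)
model = Model(inputs, outputs)
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))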