image-segmentation

How can I apply a bounding box to a discontiguous region in MATLAB

青春壹個敷衍的年華 submitted on 2019-12-13 21:29:36
Question: I am trying to apply a bounding box to the discontiguous region in the sample below. I found something in the MATLAB help docs on regionprops, but it did not explain how to do this. I need the smallest box that can contain all the blobs in the image. Answer 1: By default, when given a logical input mask, regionprops automatically applies bwlabel to the mask and computes properties for each connected component of the input mask. In your case this is not the desired behavior,
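A minimal sketch of the idea in Python/NumPy (the question targets MATLAB, so this is only an illustration; the mask variable and values are assumptions): treat the whole mask as one region and take the extremes of all foreground coordinates.

import numpy as np

def overall_bounding_box(mask):
    """Return (row_min, col_min, height, width) of the smallest box
    enclosing all foreground (non-zero) pixels of a binary mask."""
    rows, cols = np.nonzero(mask)            # coordinates of every blob pixel
    if rows.size == 0:
        return None                          # empty mask: no box
    r0, r1 = rows.min(), rows.max()
    c0, c1 = cols.min(), cols.max()
    return (r0, c0, r1 - r0 + 1, c1 - c0 + 1)

# Example: two disconnected blobs share a single bounding box.
mask = np.zeros((10, 10), dtype=bool)
mask[1:3, 1:3] = True
mask[7:9, 6:9] = True
print(overall_bounding_box(mask))            # (1, 1, 8, 8)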

Different image dimensions during training and testing time for FCNs

早过忘川 submitted on 2019-12-13 13:36:53
Question: I am reading multiple conflicting Stack Overflow posts and I'm really confused about what the reality is. My question is the following. If I trained an FCN on 128x128x3 images, is it possible to feed it A) an image of size 256x256x3, or B) 128x128, or C) neither, since the inputs have to be the same during training and testing? Consider SO post #1. In this post, it is suggested that the images have to be the same dimensions during input and output. This makes sense to me. SO post #2: In this post, it
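As a rough illustration of why a fully convolutional network can accept different spatial sizes, here is a hedged PyTorch sketch (the tiny architecture is made up for illustration, not the poster's model):

import torch
import torch.nn as nn

# A tiny fully convolutional network: no Linear layers, so the spatial
# size of the output simply follows the spatial size of the input.
fcn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),   # 2-class per-pixel scores
)

print(fcn(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])
print(fcn(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 2, 256, 256])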

My image segmentation result map contains a black lattice in the white patch

China☆狼群 submitted on 2019-12-13 13:27:26
Question: I'm doing image segmentation with a UNet-like CNN architecture in PyTorch 0.4.0. It marks foreground as 1 and background as 0 in the final segmentation result. I use a pre-trained VGG feature extractor as my encoder, so I need to upsample the encoder output many times. But the result shows a weird lattice pattern, like this: I suspect these black patterns of different shapes were caused by the deconvolutional layers. It's said that a deconv layer adds (s-1) zeros between the input
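One commonly suggested workaround for such checkerboard/lattice artifacts is to replace transposed convolutions with interpolation followed by a plain convolution. A hedged PyTorch sketch of that swap (channel counts are placeholders, not the poster's network):

import torch
import torch.nn as nn

class UpsampleConv(nn.Module):
    """Bilinear upsampling + 3x3 conv as a drop-in replacement for
    nn.ConvTranspose2d, which is prone to checkerboard patterns."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

x = torch.randn(1, 64, 16, 16)
print(UpsampleConv(64, 32)(x).shape)   # torch.Size([1, 32, 32, 32])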

Fill the circular paths in an image [closed]

ⅰ亾dé卋堺 submitted on 2019-12-13 10:27:33
Question: I have many images, each consisting of circles (varying from 1 to 4 per image). I am trying to get clear circle images by filling the missing pixels along each circle path. I have tried the Hough transform, but its parameters are image-specific: for each image I have to change the
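One parameter-light alternative to tuning Hough per image is morphological closing, which bridges small gaps along the circle paths. A hedged OpenCV-Python sketch (the file name and kernel size are assumptions to tune):

import cv2
import numpy as np

# Load the circle image (assumed white circles on a black background).
img = cv2.imread('circles.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Closing (dilate then erode) bridges small breaks along the circle paths.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

cv2.imwrite('circles_filled.png', closed)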

How to extract each character from an image using this code?

天大地大妈咪最大 submitted on 2019-12-13 10:00:37
Question: I have to extract each character from an image. Here I am uploading the code: it segments the horizontal lines but is not able to segment each character within the horizontal-line segmentation loop. Could someone please help correct the code? This is the previous code:
%% horizontal histogram
H = sum(rotatedImage, 2);
darkPixels = H < 100;  % Threshold
% label
[labeledRegions, numberOfRegions] = bwlabel(darkPixels);
fprintf('Number of regions = %d\n', numberOfRegions);
% Find centroids
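For the per-character step inside each detected line, a common approach is a vertical projection profile. A hedged Python sketch of that idea (not a fix of the MATLAB code above; line_img is assumed to be one already-segmented text line, dark text on a white background):

import numpy as np

def split_characters(line_img):
    """Split one text-line image into character images using the
    vertical projection profile (dark text on white background)."""
    binary = line_img < 128                  # True where ink is present
    col_sum = binary.sum(axis=0)             # ink pixels per column
    in_char, start, chars = False, 0, []
    for x, count in enumerate(col_sum):
        if count > 0 and not in_char:        # a character starts
            in_char, start = True, x
        elif count == 0 and in_char:         # a character ends
            in_char = False
            chars.append(line_img[:, start:x])
    if in_char:                              # character touching the right edge
        chars.append(line_img[:, start:])
    return chars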

How to trace the surface area as well as smoothen a specific region in an image using MATLAB

♀尐吖头ヾ submitted on 2019-12-13 08:26:21
Question: I have an image with 6 colors, each indicating a value. I obtained an image as shown below. I need to smooth the edges and then find the area as well as the surface area of that region. The second image shows a black line drawn along the edges, indicating the way I need to smooth them. I used segmentation to create a mask, as shown in the third image, and then obtained a segmented image using the code following the image. I have used the following code for
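A hedged OpenCV-Python sketch of one way to smooth a binary mask and then measure its area and boundary length (the file name, blur size, and any pixel-to-physical scale are assumptions):

import cv2
import numpy as np

mask = cv2.imread('region_mask.png', cv2.IMREAD_GRAYSCALE)

# Smooth the jagged boundary: blur the mask, then re-threshold it.
smooth = cv2.GaussianBlur(mask, (15, 15), 0)
_, smooth = cv2.threshold(smooth, 127, 255, cv2.THRESH_BINARY)

# Area = number of foreground pixels; perimeter from the largest contour.
area_px = int(np.count_nonzero(smooth))
contours, _ = cv2.findContours(smooth, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
perimeter_px = max(cv2.arcLength(c, True) for c in contours)

print(area_px, perimeter_px)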

OpenCV watershed segmentation misses some objects

戏子无情 submitted on 2019-12-13 05:58:56
Question: My code is the same as this tutorial. When I look at the result image after using cv::watershed(), there is an object (upper right) that I want to find, but it's missing. There are indeed six markers in the image after using cv::drawContours(). Is this normal because of the inherent inaccuracy of the watershed algorithm? Here is part of my code:
Mat src = imread("result01.png");
Mat gray;
cvtColor(src, gray, COLOR_BGR2GRAY);
Mat thresh;
threshold(gray, thresh, 0, 255, THRESH_BINARY | THRESH_OTSU);
//
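A missing object is often a marker problem rather than a watershed problem. A hedged OpenCV-Python sketch of the usual distance-transform marker pipeline (the 0.4 factor is an assumption to tune; lowering it yields more, smaller seeds):

import cv2
import numpy as np

src = cv2.imread('result01.png')
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Sure foreground: peaks of the distance transform, one seed per object.
dist = cv2.distanceTransform(thresh, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.4 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)

# One marker label per seed, then run watershed on the original image.
n_markers, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                      # reserve 0 for the "unknown" band
unknown = cv2.subtract(cv2.dilate(thresh, None, iterations=3), sure_fg)
markers[unknown == 255] = 0
cv2.watershed(src, markers)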

GrabCut reading mask from PNG file in OpenCV (C++)

被刻印的时光 ゝ submitted on 2019-12-13 04:24:39
Question: The implementation of this functionality seems pretty straightforward in Python, as shown here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html Yet, when I try to do exactly the same in C++, I get a bad-arguments error (for the grabcut function). How do I put the mask image in the right format? I am a newbie at this, so I'd be very thankful if someone could help me understand better. Thank you! Here's what I have so far:
Mat image;
image = imread(file);
Mat
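The mask convention is the same in every binding: grabCut expects a single-channel 8-bit mask whose values are the GC_* constants, not raw grey levels from the PNG. A hedged Python sketch of that mapping (the file names and the >200 / <50 stroke thresholds are assumptions; the same remapping applies in C++):

import cv2
import numpy as np

image = cv2.imread('photo.png')
mask_png = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# Remap the PNG's grey levels to grabCut's label values.
mask = np.full(mask_png.shape, cv2.GC_PR_BGD, dtype=np.uint8)
mask[mask_png > 200] = cv2.GC_FGD      # bright strokes = definite foreground
mask[mask_png < 50] = cv2.GC_BGD       # dark strokes  = definite background

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

result = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite('grabcut_result.png', cv2.bitwise_and(image, image, mask=result))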

Opencv4Android grabcut's output image has different colors (brighter colors) than the input image

自作多情 submitted on 2019-12-13 02:51:57
Question:
public class Grabcut extends Activity {
    ImageView iv;
    Bitmap bitmap;
    Canvas canvas;
    Scalar color = new Scalar(255, 0, 0, 255);
    Point tl, br;
    int counter;
    Bitmap bitmapResult, bitmapBackground;
    Mat dst = new Mat();
    final String pathToImage = Environment.getExternalStorageDirectory() + "/gcut.png";
    public static final String TAG = "Grabcut demo";
    static {
        if (!OpenCVLoader.initDebug()) {
            // Handle initialization error
        }
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate
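A frequent cause of shifted colours in this kind of setup is a channel-order mismatch: Utils.bitmapToMat produces an RGBA Mat, while most OpenCV routines assume BGR, so red and blue get swapped unless you convert explicitly. A hedged Python sketch of the conversion meant (file names are placeholders; on Android the analogous call is Imgproc.cvtColor with COLOR_RGBA2BGR):

import cv2

# Simulate what happens on Android: the Bitmap arrives as RGBA,
# but OpenCV treats the first three channels as B, G, R.
rgba = cv2.cvtColor(cv2.imread('gcut.png'), cv2.COLOR_BGR2RGBA)

# Processing or writing the RGBA Mat directly swaps red and blue;
# convert explicitly before handing it to BGR-based OpenCV operations.
bgr = cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGR)
cv2.imwrite('gcut_bgr.png', bgr)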

Count colored objects in images

青春壹個敷衍的年華 submitted on 2019-12-13 02:08:32
Question: I'm working in image segmentation, testing a lot of different segmentation algorithms in order to do a comparative study. At the moment I've implemented the mean shift algorithm. I would like to count the objects segmented in the image. In these images there are two types of objects, with different colors. I have the manual counts done by specialists, so I would like to compare the results. The result image is: Is there any way for me to automate this process? Can you please help me out?
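A hedged OpenCV-Python sketch of one way to automate the count after mean shift: build one mask per object colour and count its connected components (the two BGR colours, the size threshold, and the file name are assumptions):

import cv2
import numpy as np

seg = cv2.imread('meanshift_result.png')

# Assumed BGR colours of the two object classes in the segmented image.
colours = {'type_a': (0, 0, 255), 'type_b': (0, 255, 0)}

for name, bgr in colours.items():
    mask = cv2.inRange(seg, np.array(bgr), np.array(bgr))
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background) and ignore specks smaller than 20 pixels.
    count = sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= 20)
    print(name, count)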