image-segmentation

Channel-wise CrossEntropyLoss for image segmentation in PyTorch

Submitted by 人走茶凉 on 2020-07-05 12:08:45
Question: I am doing an image segmentation task. There are 7 classes in total, so the final output is a tensor like [batch, 7, height, width], which is a softmax output. Intuitively I wanted to use CrossEntropy loss, but the PyTorch implementation doesn't work on a channel-wise one-hot encoded vector, so I was planning to write a function of my own. With some help from Stack Overflow, my code so far looks like this:

from torch.autograd import Variable
import torch
import torch.nn.functional as F

def cross
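The excerpt above is cut off, but a minimal sketch of one way to get channel-wise cross entropy in PyTorch (assuming the target arrives as a one-hot tensor of shape [batch, 7, height, width] and the network is changed to output raw logits of the same shape, since F.cross_entropy applies log-softmax internally) is simply to convert the one-hot target back to class indices; the function name and the shapes in the usage lines are illustrative, not taken from the question:

import torch
import torch.nn.functional as F

def channelwise_cross_entropy(logits, one_hot_target):
    # logits: [batch, 7, H, W] raw scores (no softmax applied here)
    # one_hot_target: [batch, 7, H, W] channel-wise one-hot mask
    target_indices = one_hot_target.argmax(dim=1)   # [batch, H, W] integer class ids
    return F.cross_entropy(logits, target_indices)  # averaged over every pixel

# hypothetical usage with random data
logits = torch.randn(2, 7, 64, 64)
labels = torch.randint(0, 7, (2, 64, 64))
one_hot = F.one_hot(labels, num_classes=7).permute(0, 3, 1, 2).float()
loss = channelwise_cross_entropy(logits, one_hot)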

Dice loss becomes NaN after some epochs

Submitted by 巧了我就是萌 on 2020-06-17 15:50:24
Question: I am working on an image-segmentation application where the loss function is Dice loss. The issue is that the loss becomes NaN after some epochs. I am doing 5-fold cross-validation and checking validation and training losses for each fold. For some folds the loss quickly becomes NaN, and for other folds it takes a while to reach NaN. I have inserted a constant into the loss formulation to avoid over/under-flow, but the same problem still occurs. My inputs are scaled within
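The question does not show the loss implementation, but a common source of NaN is a Dice formulation where the smoothing constant sits only in the denominator, or a network output that already contains inf/NaN. A minimal PyTorch sketch (assuming probabilities and one-hot targets of shape [batch, C, H, W]; the function name and the smooth value of 1.0 are illustrative) keeps the smoothing term in both numerator and denominator:

import torch

def dice_loss(probs, target, smooth=1.0):
    # probs: [batch, C, H, W] values in [0, 1]; target: same shape, one-hot
    probs = probs.flatten(1)
    target = target.flatten(1)
    intersection = (probs * target).sum(dim=1)
    union = probs.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return 1.0 - dice.mean()

If the loss still turns NaN, checking torch.isnan(probs).any() on the network output usually reveals whether the problem is in the loss itself or in exploding activations upstream.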

Keras data augmentation changes pixel values for masks (segmentation)

Submitted by 十年热恋 on 2020-06-16 07:48:11
Question: I am using runtime data augmentation with generators in Keras for a segmentation problem. Here is my data generator:

data_gen_args = dict(
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2
)
image_datagen = ImageDataGenerator(**data_gen_args)

def generate_data_generator(generator, Xi, Yi):
    genXi = generator.flow(Xi, seed=7, batch_size=32)
    genYi = generator.flow(Yi, seed=7, batch_size=32)
    while True:
        Xi = genXi.next()
        Yi = genYi.next()
        print
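The usual reason augmentation changes mask pixel values is that shifts and zooms interpolate, so a binary or integer-labeled mask comes back with fractional values. A hedged sketch of the same paired-generator idea with the labels snapped back after augmentation (assuming integer-coded masks; the rounding step and the tensorflow.keras import path are additions, not part of the original question):

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_gen_args = dict(width_shift_range=0.1, height_shift_range=0.1,
                     zoom_range=0.2, horizontal_flip=True, validation_split=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

def generate_data_generator(img_gen, mask_gen, Xi, Yi, seed=7, batch_size=32):
    genXi = img_gen.flow(Xi, seed=seed, batch_size=batch_size)
    genYi = mask_gen.flow(Yi, seed=seed, batch_size=batch_size)
    while True:
        xb = next(genXi)
        yb = next(genYi)
        yb = np.rint(yb)   # undo interpolation artifacts in the labels
        yield xb, yb

Using the same seed for both flows keeps the geometric transforms aligned between image and mask; the rounding only matters for the mask side.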

Clarification regarding BraTS dataset

Submitted by [亡魂溺海] on 2020-06-01 05:08:17
Question: I downloaded the BraTS dataset for my summer project. The dataset consists of nii.gz files, which I was able to open using the nibabel library in Python. I used the following code:

import os
import numpy as np
import nibabel as nib
import matplotlib.pyplot as plat

examplefile = os.path.join("mydatapath", "BraTS19_2013_5_1_flair.nii.gz")
img = nib.load(examplefile)
print(img)

This gave me the following output:

<class 'nibabel.nifti1.Nifti1Image'>
data shape (240, 240, 155)
affine:
[[ -1. 0. 0. -0.]
 [ 0
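The printed header already describes the basic layout: the FLAIR volume is a 3-D array of 240 x 240 voxels over 155 slices. A short sketch (assuming the same file path as in the question; the slice index 77 is an arbitrary example) for getting the voxel data as a NumPy array and viewing one axial slice:

import os
import nibabel as nib
import matplotlib.pyplot as plt

examplefile = os.path.join("mydatapath", "BraTS19_2013_5_1_flair.nii.gz")
img = nib.load(examplefile)
data = img.get_fdata()              # float array of shape (240, 240, 155)
print(data.shape, data.min(), data.max())
plt.imshow(data[:, :, 77], cmap="gray")   # roughly the middle slice
plt.show()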

Index Error using sample_weight to initialize model weights

Submitted by 半世苍凉 on 2020-04-18 05:47:31
Question: I'm working on multi-class segmentation using Keras and U-Net. My network outputs 12 classes through a softmax activation function, so the shape of my output is (N, 288, 288, 12). To fit my model I use sparse_categorical_crossentropy. I want to initialize sample weights for my unbalanced dataset. The right answer to my question can be found here: How to initialize sample weights for multi-class segmentation? I tried to implement it in my code, replacing y with my training mask Y_train
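The linked answer builds per-pixel weights from class frequencies. A minimal NumPy sketch of that idea (the function name and the inverse-frequency weighting are illustrative assumptions, and how the resulting weight map is passed to model.fit depends on the Keras version):

import numpy as np

def make_pixel_weights(y, num_classes=12):
    # y: integer mask of shape (N, 288, 288) with values 0..num_classes-1
    counts = np.bincount(y.ravel(), minlength=num_classes).astype(np.float64)
    class_weights = counts.sum() / (num_classes * np.maximum(counts, 1.0))
    return class_weights[y]   # per-pixel weight map, same shape as y

# assuming Y_train is the integer-coded training mask from the question
# sample_weights = make_pixel_weights(Y_train)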

Wrong predicted image size from predict_generator for multi-label U-Net

Submitted by 北城以北 on 2020-04-17 21:13:34
Question: I'm using aerial images to segment road and centerline with a multi-label U-Net. My test generator looks like this:

def testGenerator(test_path="data\\membrane\\test\\image", num_image=1584,
                  target_size=(224, 224), flag_multi_class=False, as_gray=False):
    for i in range(num_image):
        img = io.imread(os.path.join(test_path, "%d.jpg" % i), as_gray=as_gray)
        img = img / 255.
        img = trans.resize(img, target_size)
        img = np.reshape(img, img.shape) if (not flag_multi_class) else img
        img = np.reshape(img,
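The excerpt stops before the predict_generator call, but note that the generator above resizes every tile to target_size=(224, 224), so predictions also come back at 224 x 224 rather than at the original aerial-image resolution. A hedged sketch (the function name and preserve_range choice are illustrative, not from the question) for restoring a single prediction to the source size with the same skimage transform the generator already uses:

import skimage.transform as trans

def restore_prediction_size(pred, original_hw):
    # pred: (224, 224, n_classes) network output for one tile
    # original_hw: (H, W) of the source image
    out_shape = original_hw + (pred.shape[-1],)
    return trans.resize(pred, out_shape, preserve_range=True)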

Cleaning image for OCR

Submitted by 可紊 on 2020-03-10 14:41:20
Question: I've been trying to clean images for OCR (removing the lines). I need to remove these lines to sometimes further process the image, and I'm getting pretty close, but a lot of the time the threshold takes away too much from the text:

copy = img.copy()
blur = cv2.GaussianBlur(copy, (9, 9), 0)
thresh = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 30)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
dilate = cv2.dilate(thresh, kernel, iterations=2)
cnts =
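The snippet stops midway, but since the stated problem is that the threshold plus dilation eats into the text, one hedged alternative (the 40-px kernel width and the file names are assumptions, not from the question) is to detect only long horizontal strokes with a wide, flat structuring element and paint just those pixels out:

import cv2

img = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
thresh = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 30)
# a long, 1-px-tall kernel responds to ruled lines but not to letter strokes
line_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, line_kernel, iterations=1)
cleaned = img.copy()
cleaned[lines > 0] = 255   # overwrite detected line pixels with background
cv2.imwrite("cleaned.png", cleaned)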