loss

Dice loss becomes NaN after some epochs

巧了我就是萌 submitted on 2020-06-17 15:50:24
Question: I am working on an image-segmentation application where the loss function is Dice loss. The issue is that the loss function becomes NaN after some epochs. I am doing 5-fold cross-validation and checking the validation and training losses for each fold. For some folds the loss quickly becomes NaN, and for other folds it takes a while to reach NaN. I have inserted a constant into the loss function formulation to avoid over/underflow, but the same problem still occurs. My inputs are scaled within
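
A common way to place the smoothing constant the question mentions is to add it to both the numerator and the denominator of the soft Dice ratio. Below is a minimal PyTorch sketch (the function name dice_loss, the smooth parameter, and the assumption that pred has already passed through a sigmoid are illustrative, not the asker's actual code):

    import torch

    def dice_loss(pred, target, smooth=1.0):
        # pred: probabilities in [0, 1] (e.g. after sigmoid); target: binary mask
        pred = pred.contiguous().view(pred.size(0), -1)
        target = target.contiguous().view(target.size(0), -1)
        intersection = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1)
        dice = (2.0 * intersection + smooth) / (union + smooth)
        return 1.0 - dice.mean()

With the constant in both numerator and denominator, the ratio stays finite even when a fold contains empty masks, so a NaN that still appears usually points elsewhere: unnormalized inputs, a too-large learning rate, or raw logits fed in without a sigmoid.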

Input dimension for CrossEntropy Loss in PyTorch

空扰寡人 submitted on 2020-05-15 21:21:13
Question: For a binary classification problem with batch_size = 1, I have logit and label values from which I need to calculate the loss.

    logit: tensor([0.1198, 0.1911], device='cuda:0', grad_fn=<AddBackward0>)
    label: tensor([1], device='cuda:0')

    # calculate loss
    loss_criterion = nn.CrossEntropyLoss()
    loss_criterion.cuda()
    loss = loss_criterion( b_logits, b_labels )

However, this always results in the following error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1). What
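
The error comes from a missing batch dimension: nn.CrossEntropyLoss expects logits of shape (batch, num_classes) and integer class labels of shape (batch,), but the logit tensor above is 1-D with shape (2,) instead of (1, 2). A minimal sketch of the usual fix (tensor values copied from the question, variable names illustrative):

    import torch
    import torch.nn as nn

    loss_criterion = nn.CrossEntropyLoss()

    b_logits = torch.tensor([[0.1198, 0.1911]])  # shape (1, 2): one sample, two classes
    b_labels = torch.tensor([1])                 # shape (1,): class index for that sample

    loss = loss_criterion(b_logits, b_labels)    # a scalar tensor

If the logits already exist as a 1-D tensor, b_logits.unsqueeze(0) adds the batch dimension without rewriting them.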

Change loss function dynamically during training in Keras, without recompiling other model properties like optimizer

点点圈 submitted on 2020-05-13 02:02:52
Question: Is it possible to set model.loss in a callback without re-running model.compile(...) afterwards (since then the optimizer states are reset), and instead just recompile model.loss? For example:

    class NewCallback(Callback):
        def __init__(self):
            super(NewCallback, self).__init__()

        def on_epoch_end(self, epoch, logs={}):
            self.model.loss = [loss_wrapper(t_change, current_epoch=epoch)]
            self.model.compile_only_loss()  # is there a version or hack of
                                            # model.compile(...) like this?

To expand more with
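
Keras has no compile_only_loss, but a common workaround keeps the compiled loss function fixed and lets it read mutable state that a callback updates, so compile() runs only once and the optimizer state survives. A sketch under that assumption (loss_wrapper and t_change echo the question; the MSE-to-MAE switch is purely illustrative):

    from tensorflow import keras
    from tensorflow.keras import backend as K

    epoch_var = K.variable(0.0)  # mutable state the loss closes over

    def loss_wrapper(t_change):
        def custom_loss(y_true, y_pred):
            mse = K.mean(K.square(y_true - y_pred), axis=-1)
            mae = K.mean(K.abs(y_true - y_pred), axis=-1)
            # behave like MSE before epoch t_change, like MAE afterwards
            return K.switch(epoch_var < t_change, mse, mae)
        return custom_loss

    class EpochUpdater(keras.callbacks.Callback):
        def on_epoch_begin(self, epoch, logs=None):
            K.set_value(epoch_var, epoch)

    # model.compile(optimizer='adam', loss=loss_wrapper(t_change=5))
    # model.fit(x, y, epochs=10, callbacks=[EpochUpdater()])

Because only the value of epoch_var changes between epochs, the compiled graph and the optimizer slots are left untouched.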

Comparing AUC, log loss and accuracy scores between models

﹥>﹥吖頭↗ submitted on 2020-04-16 05:08:05
Question: I have the following evaluation metrics on the test set, after running 6 models for a binary classification problem:

    model  accuracy  logloss  AUC
    1      19%       0.45     0.54
    2      67%       0.62     0.67
    3      66%       0.63     0.68
    4      67%       0.62     0.66
    5      63%       0.61     0.66
    6      65%       0.68     0.42

I have the following questions: How can model 1 be the best in terms of logloss (its logloss is the closest to 0) when it performs the worst in terms of accuracy? What does that mean? How come model 6 has a lower AUC score than e.g. model 5, when
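
The three metrics measure different things, so they can legitimately disagree: accuracy thresholds the probabilities at 0.5, log loss scores the probabilities themselves, and AUC scores only their ranking. A tiny scikit-learn sketch (hypothetical data, chosen only to make the disagreement visible):

    import numpy as np
    from sklearn.metrics import accuracy_score, log_loss, roc_auc_score

    y_true = np.array([0, 0, 1, 1])

    # Model A: mostly well-calibrated probabilities, but one positive falls
    # below the 0.5 threshold -> better logloss, worse accuracy.
    p_a = np.array([0.10, 0.40, 0.60, 0.45])
    # Model B: every sample lands on the right side of 0.5, but only barely
    # -> perfect accuracy, worse logloss.
    p_b = np.array([0.49, 0.49, 0.51, 0.51])

    for name, p in [("A", p_a), ("B", p_b)]:
        print(name, accuracy_score(y_true, (p > 0.5).astype(int)),
              round(log_loss(y_true, p), 3), roc_auc_score(y_true, p))

Here model A wins on log loss (about 0.48 vs 0.67) while model B wins on accuracy (100% vs 75%), the same kind of inversion seen between model 1 and models 2-5 in the table.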

how to change softmaxlayer with regression in matconvnet

喜你入骨 submitted on 2020-01-05 07:33:37
Question: I am trying to train on the MNIST data set with a single output. That is, when I give a 28*28 input (image), the model should give me just a number. For example, if I give a '5', the model gives me 4.9, 5, 5.002, or something else close to 5. I have read some documents; people say the softmax layer has to be replaced with a regression layer. To do this, I am using the matconvnet library and its MNIST example. I have changed my network and written a regression-layer loss function. These are my codes: net.layers = {} ; net
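
Whatever the framework, a regression output layer of this kind boils down to a Euclidean (L2) loss, and a custom matconvnet layer needs both its forward value and its backward gradient. A NumPy sketch of that math (matconvnet itself is MATLAB; the function names here are illustrative only):

    import numpy as np

    def l2_forward(pred, target):
        # Euclidean regression loss: half the mean squared error over the batch
        diff = pred - target
        return 0.5 * np.mean(diff ** 2)

    def l2_backward(pred, target):
        # gradient of the loss with respect to the predictions
        return (pred - target) / pred.size

In matconvnet's SimpleNN format these two pieces correspond to the forward and backward functions of a layer with type 'custom', taking the place of the 'softmaxloss' layer at the end of the MNIST example.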
