loss-function

Custom Keras loss function for counting values that are not zero or one

痴心易碎 submitted on 2019-12-24 10:24:37
Question: I have a cost function in Keras with 3 parts, each related to a different output of my network. Suppose my loss function is aL1 + bL2 + cL3, where L1 is MSE, L2 is binary cross-entropy, and L3 tries to minimize the number of pixels in the output whose value is neither 0 nor 1 (∑n (x≠0 and x≠1)), but I do not know how to write this last loss term (a, b, and c are coefficients for each loss). The output should be a 28x28 binary image whose values are 0 or 1. by adding this term
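A hard count of pixels that are neither 0 nor 1 has no gradient, so it cannot be backpropagated directly. A minimal sketch of the combined loss, assuming arbitrary values for the coefficients a, b, c and replacing the count with the smooth surrogate y(1−y), which is zero exactly at 0 and 1:

```python
from tensorflow.keras import backend as K

# a, b, c are the coefficients from the question; the values here are arbitrary.
a, b, c = 1.0, 1.0, 0.1

def combined_loss(y_true, y_pred):
    l1 = K.mean(K.square(y_true - y_pred))               # MSE term
    l2 = K.mean(K.binary_crossentropy(y_true, y_pred))   # binary cross-entropy term
    # A hard count of pixels that are neither 0 nor 1 has no gradient, so use
    # the smooth surrogate y*(1-y): exactly 0 at 0 and 1, positive in between.
    l3 = K.sum(y_pred * (1.0 - y_pred))
    return a * l1 + b * l2 + c * l3

# model.compile(optimizer='adam', loss=combined_loss)
```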

PyTorch: Loss remains constant

我们两清 submitted on 2019-12-24 06:37:11
Question: I've written code in PyTorch with my own implemented loss function focal_loss_fixed, but the loss value stays fixed after every epoch; it looks like the weights are not being updated. Here is my code snippet: optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=0.0005) for epoch in T(range(20)): net.train() epoch_loss = 0 for n in range(len(x_train)//batch_size): (imgs, true_masks) = data_gen_small(x_train, y_train, iter_num=n, batch_size=batch_size) temp = [] for tt in true
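The snippet is cut off before the update step, but a constant loss in PyTorch usually means either the zero_grad/backward/step calls are missing or the loss is computed with NumPy operations that detach it from the autograd graph. A minimal sketch of the loop, assuming net, lr, x_train, y_train, batch_size, data_gen_small, and focal_loss_fixed from the question:

```python
import torch.optim as optim

# net, lr, x_train, y_train, batch_size, data_gen_small, and focal_loss_fixed
# are assumed from the question.
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=0.0005)

for epoch in range(20):
    net.train()
    epoch_loss = 0.0
    for n in range(len(x_train) // batch_size):
        imgs, true_masks = data_gen_small(x_train, y_train, iter_num=n, batch_size=batch_size)
        optimizer.zero_grad()                        # clear gradients from the previous step
        preds = net(imgs)
        loss = focal_loss_fixed(preds, true_masks)   # must be built from torch ops, not NumPy,
                                                     # otherwise the graph is cut and nothing updates
        loss.backward()                              # backpropagate
        optimizer.step()                             # apply the weight update
        epoch_loss += loss.item()
```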

Image segmentation - custom loss function in Keras

℡╲_俬逩灬. submitted on 2019-12-23 19:50:15
Question: I am using a U-Net (https://arxiv.org/pdf/1505.04597.pdf) implemented in Keras to segment cell organelles in microscopy images. In order for my network to recognize multiple single objects that are separated by only 1 pixel, I want to use weight maps for each label image (the formula is given in the publication). As far as I know, I have to create my own custom loss function (in my case cross-entropy) to make use of these weight maps. However, the custom loss function only takes two parameters. How
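One common workaround for the two-argument restriction is to smuggle the weight map in alongside the labels. A minimal sketch, under the assumption that the labels and weight maps are single-channel images stacked into a two-channel y_true:

```python
from tensorflow.keras import backend as K

# Stack the per-pixel weight map onto the label image as a second channel,
# so y_true has shape (H, W, 2), and split the two apart inside the loss.
def weighted_binary_crossentropy(y_true_and_weights, y_pred):
    y_true = y_true_and_weights[..., 0:1]   # the label image
    w_map  = y_true_and_weights[..., 1:2]   # the weight map from the U-Net paper
    bce = K.binary_crossentropy(y_true, y_pred)
    return K.mean(bce * w_map)

# model.compile(optimizer='adam', loss=weighted_binary_crossentropy)
# model.fit(images, np.concatenate([labels, weight_maps], axis=-1), ...)
```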

Custom loss in Keras with softmax to one-hot

谁都会走 submitted on 2019-12-23 05:09:18
Question: I have a model that outputs a softmax, and I would like to develop a custom loss function. The desired behaviour would be: 1) Softmax to one-hot (normally I do numpy.argmax(softmax_vector) and set that index to 1 in a zero vector, but this is not allowed in a loss function). 2) Multiply the resulting one-hot vector by my embedding matrix to get an embedding vector (in my context: the word vector associated with a given word, where words have been tokenized and assigned to indices, or
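argmax has no gradient, so a hard one-hot conversion inside the loss blocks backpropagation. A minimal sketch of a differentiable stand-in, assuming embedding_matrix is a (vocab_size, embed_dim) float tensor and y_true is one-hot encoded:

```python
import tensorflow as tf

# embedding_matrix is assumed to be a (vocab_size, embed_dim) float tensor,
# and y_true is assumed to be one-hot encoded.
def embedding_loss(y_true, y_pred):
    # tf.argmax has no gradient, so a hard one-hot conversion would block
    # backpropagation. A differentiable stand-in is to use the softmax output
    # itself as soft weights over the embedding rows; it approaches a true
    # embedding lookup as the softmax sharpens.
    pred_vec = tf.matmul(y_pred, embedding_matrix)   # soft "lookup"
    true_vec = tf.matmul(y_true, embedding_matrix)   # exact lookup for one-hot y_true
    return tf.reduce_mean(tf.square(pred_vec - true_vec))
```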

Custom Objective Function Keras

痴心易碎 submitted on 2019-12-23 02:44:49
Question: I need to define my own loss function. I am using a GAN model, and my loss should include both an adversarial loss and an L1 loss between the true and generated images. I tried to write a function but get the following error: ValueError: ('Could not interpret loss function identifier:', Elemwise{add,no_inplace}.0) My loss function is: def loss_function(y_true, y_pred, y_true1, y_pred1): bce=0 for i in range (64): a = y_pred1[i] b = y_true1[i] x = K.log(a) bce=bce-x bce/=64 print('bce = ', bce) for i in zip( y
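That error typically means the result of calling the function (a tensor) was passed to compile() instead of a callable, and a Keras loss can only take (y_true, y_pred). A minimal sketch of one way to restructure it, assuming a two-output combined model (generated image plus discriminator score) and a hypothetical 100:1 weighting:

```python
from tensorflow.keras import backend as K

# Keras expects each loss to be a *callable* taking exactly (y_true, y_pred);
# passing the result of calling a function (a tensor) to compile() raises
# "Could not interpret loss function identifier".
def l1_loss(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred))                   # pixel-wise L1, true vs. generated image

def adversarial_loss(y_true, y_pred):
    return K.mean(K.binary_crossentropy(y_true, y_pred))    # discriminator score vs. real/fake label

# One way to combine them: give the combined GAN model two outputs
# (generated image, discriminator score) and weight the losses, e.g. 100:1.
# gan.compile(optimizer='adam',
#             loss=[l1_loss, adversarial_loss],
#             loss_weights=[100.0, 1.0])
```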

Which loss function is better than MSE for temperature prediction?

谁说我不能喝 submitted on 2019-12-22 11:36:15
Question: I have feature vectors of size 1x4098, and each feature vector corresponds to a float number (a temperature). For training I have 10,000 samples, so the training set is 10000x4098 and the labels are 10000x1. I want to use a regression model to predict temperature from the training data. I am using 3 hidden layers (512, 128, 32) with MSE loss. However, I only got 80% accuracy using TensorFlow. Could you suggest other loss functions to get better performance? Answer 1: Let me give a
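Accuracy is not a meaningful metric for a continuous target like temperature; tracking MAE and trying a loss that is more robust to outliers than plain MSE is the usual next step. A minimal sketch, assuming the 512-128-32 network from the question is already built as model:

```python
import tensorflow as tf

# 'model' is assumed to be the 512-128-32 network from the question.
# Accuracy is not meaningful for a continuous target; track MAE instead, and
# try a loss that is less sensitive to outliers than plain MSE.
model.compile(optimizer='adam',
              loss=tf.keras.losses.Huber(delta=1.0),   # quadratic near zero, linear for large errors
              metrics=['mae'])

# or simply:
# model.compile(optimizer='adam', loss='mean_absolute_error', metrics=['mae'])
```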

How to iterate through tensors in custom loss function?

核能气质少年 submitted on 2019-12-20 11:37:50
Question: I'm using Keras with the TensorFlow backend. My goal is to query the batch size of the current batch in a custom loss function. This is needed to compute values of the custom loss function that depend on the index of particular observations. I'd like to make this clearer with the minimal reproducible examples below. (BTW: of course I could use the batch size defined for the training procedure and plug in its value when defining the custom loss function, but there are some reasons why this can
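The dynamic batch size can be read inside the loss with K.shape, even though the static batch dimension is None at graph-construction time. A minimal sketch, with a purely illustrative position-dependent weighting:

```python
from tensorflow.keras import backend as K

def batch_size_aware_loss(y_true, y_pred):
    # K.shape returns the dynamic (runtime) shape, so this works even though
    # the static batch dimension is None when the graph is built.
    batch_size = K.shape(y_true)[0]
    # Per-observation indices 0 .. batch_size-1, usable for index-dependent terms.
    idx = K.cast(K.arange(0, batch_size), K.floatx())
    per_sample = K.mean(K.square(y_true - y_pred), axis=-1)
    # Purely illustrative: weight each observation by its position in the batch.
    pos_weight = 1.0 + idx / K.cast(batch_size, K.floatx())
    return K.mean(pos_weight * per_sample)
```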

Weighted MSE custom loss function in Keras

巧了我就是萌 submitted on 2019-12-20 10:11:11
Question: I'm working with time series data, outputting predictions 60 days ahead. I'm currently using mean squared error as my loss function and the results are bad. I want to implement a weighted mean squared error such that the early outputs are much more important than the later ones. Weighted mean squared error formula: So I need some way to iterate over a tensor's elements, with an index (since I need to iterate over both the predicted and the true values at the same time, then write the results to a
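One way to weight earlier forecast days more heavily, without iterating element by element, is to build a fixed per-step weight vector and let broadcasting apply it across the batch. A minimal sketch, where the linear decay from 1.0 to 0.1 is an arbitrary assumption:

```python
import numpy as np
from tensorflow.keras import backend as K

horizon = 60
# Earlier forecast days get larger weights; the linear decay from 1.0 to 0.1
# is an arbitrary choice.
day_weights = K.constant(np.linspace(1.0, 0.1, horizon))   # shape (60,)

def weighted_mse(y_true, y_pred):
    # Broadcasting applies the per-day weights across the batch dimension,
    # so no explicit iteration over tensor elements is needed.
    return K.mean(day_weights * K.square(y_true - y_pred), axis=-1)

# model.compile(optimizer='adam', loss=weighted_mse)
```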

Custom weighted loss function in Keras for weighing each element

你说的曾经没有我的故事 submitted on 2019-12-20 08:30:10
Question: I'm trying to create a simple weighted loss function. Say I have input dimensions of 100 x 5, and output dimensions also 100 x 5. I also have a weight matrix of the same dimension, something like the following: import numpy as np train_X = np.random.randn(100, 5) train_Y = np.random.randn(100, 5)*0.01 + train_X weights = np.random.randn(*train_X.shape) Defining the custom loss function: def custom_loss_1(y_true, y_pred): return K.mean(K.abs(y_true-y_pred)*weights) Defining the model from keras
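Closing over the full (100, 5) weights array breaks as soon as Keras feeds mini-batches, since y_true and y_pred then only cover a slice of it. One common workaround, sketched below with a hypothetical one-hidden-layer architecture, is to feed the targets and weights as extra inputs and register the weighted loss with add_loss():

```python
import numpy as np
from tensorflow.keras import layers, Model, backend as K

train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)

x_in = layers.Input(shape=(5,))
y_in = layers.Input(shape=(5,))   # targets fed as an input
w_in = layers.Input(shape=(5,))   # per-element weights fed as an input
hidden = layers.Dense(32, activation='relu')(x_in)   # hypothetical architecture
y_out = layers.Dense(5)(hidden)

train_model = Model([x_in, y_in, w_in], y_out)
# Element-wise weighted MAE, registered directly on the model.
train_model.add_loss(K.mean(K.abs(y_in - y_out) * w_in))
train_model.compile(optimizer='adam')   # no loss argument: add_loss supplies it

train_model.fit([train_X, train_Y, weights], epochs=2, batch_size=10)
```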