loss-function

Resume training with different loss function

廉价感情. Submitted on 2020-01-13 16:21:20
Question: I want to implement a two-step learning process where: 1) pre-train a model for a few epochs using the loss function loss_1; 2) change the loss function to loss_2 and continue the training for fine-tuning. Currently, my approach is:

```python
model.compile(optimizer=opt, loss=loss_1, metrics=['accuracy'])
model.fit_generator(…)
model.compile(optimizer=opt, loss=loss_2, metrics=['accuracy'])
model.fit_generator(…)
```

Note that the optimizer remains the same, and only the loss function changes. I'd like to …
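A minimal runnable sketch of this two-phase recipe, with toy data and stand-in losses (mse and mae are assumptions; substitute your own loss_1 and loss_2). Recompiling keeps the trained weights, and reusing the same optimizer instance is a common way to carry its accumulated state into the second phase, though the exact behavior varies across Keras versions:

```python
import numpy as np
from tensorflow import keras

# Toy model and data stand in for the poster's setup.
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1),
])
x = np.random.rand(64, 8)
y = np.random.rand(64, 1)

opt = keras.optimizers.Adam(1e-3)

# Phase 1: pre-train with loss_1.
model.compile(optimizer=opt, loss='mean_squared_error', metrics=['accuracy'])
model.fit(x, y, epochs=2, verbose=0)

# Phase 2: recompile with loss_2; the weights are untouched, and the
# same optimizer object carries over its Adam moment estimates.
model.compile(optimizer=opt, loss='mean_absolute_error', metrics=['accuracy'])
model.fit(x, y, epochs=2, verbose=0)
```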

Keras custom loss function dtype error

青春壹個敷衍的年華 Submitted on 2020-01-06 06:31:26
Question: I have a NN that has two identical CNNs (similar to a Siamese network), then merges the outputs, and intends to apply a custom loss function on the merged output, something like this:

```
-----------------        -----------------
|    input_a    |        |    input_b    |
-----------------        -----------------
| base_network  |        | base_network  |
------------------------------------------
|             processed_a_b              |
------------------------------------------
```

In my custom loss function, I need to break y vertically into two pieces, and then …
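A hedged sketch of one way to do that split inside a custom loss, assuming the merge concatenates the two branch outputs along the last axis (adjust the split point to your merge layer); the explicit cast is a common fix for the dtype error named in the title:

```python
from tensorflow.keras import backend as K

def split_pair_loss(y_true, y_pred):
    # Guard against float32/float64 mismatches between labels and output.
    y_true = K.cast(y_true, y_pred.dtype)
    # Assumed layout: first half of the last axis is branch a, second is b.
    n = K.int_shape(y_pred)[-1] // 2
    pred_a, pred_b = y_pred[:, :n], y_pred[:, n:]
    true_a, true_b = y_true[:, :n], y_true[:, n:]
    return K.mean(K.square(pred_a - true_a)) + K.mean(K.square(pred_b - true_b))
```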

Expected target size (50, 88), got torch.Size([50, 288, 88])

蓝咒 Submitted on 2020-01-06 06:19:16
Question: I am trying to train my neural network. The model trains correctly, but I can't calculate the loss. The output and the target have the same dimension. I tried to use torch.stack, but I can't, because the size of each input is (252, x), where x is the same across the 252 elements but differs between inputs. I use a custom Dataset:

```python
class MusicDataSet(Dataset):
    def __init__(self, transform=None):
        self.ms, self.target, self.tam = sd.cargarDatos()
        self.mean, self.std = self…
```
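For context, nn.CrossEntropyLoss expects logits of shape (N, C, …) and integer class targets of shape (N, …); the error in the title suggests the class axis is in the wrong position. A self-contained sketch using the shapes from the title (50, 288, 88 come from the error message; the one-hot targets are an assumption about the poster's labels):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch, seq_len, num_classes = 50, 288, 88
logits = torch.randn(batch, seq_len, num_classes)     # (N, L, C) as the net emits
target_onehot = torch.zeros(batch, seq_len, num_classes)
target_onehot[..., 0] = 1                             # dummy one-hot targets

# CrossEntropyLoss wants logits as (N, C, L) and targets as class indices
# of shape (N, L): move the class axis and argmax the one-hot encoding.
loss = criterion(logits.permute(0, 2, 1), target_onehot.argmax(dim=-1))
print(loss.item())
```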

How to deal with triplet loss when at input time I have only two files, i.e. at testing time

拜拜、爱过 Submitted on 2020-01-06 04:41:05
Question: I am implementing a Siamese network in which I know how to calculate the triplet loss: by picking anchor, positive and negative examples, dividing the input (a handcrafted feature vector) into three parts, and then calculating the loss at training time:

```python
anchor_output = ...    # shape [None, 128]
positive_output = ...  # shape [None, 128]
negative_output = ...  # shape [None, 128]

d_pos = tf.reduce_sum(tf.square(anchor_output - positive_output), 1)
d_neg = tf.reduce_sum(tf.square(anchor_output - negative_output)…
```
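At testing time no triplet is needed: the trained branch maps each of the two files to an embedding, and a distance threshold decides whether they match. A hedged sketch of that inference step (the threshold value is an assumption, to be tuned on validation pairs):

```python
import tensorflow as tf

def verify_pair(embed_a, embed_b, threshold=0.5):
    """Decide whether two embeddings belong to the same identity.

    The triplet loss is only a training signal; at test time the
    network just produces embeddings, and a squared-distance threshold
    (tuned on held-out pairs) does the matching.
    """
    d = tf.reduce_sum(tf.square(embed_a - embed_b), axis=1)
    return d < threshold  # True -> same identity under this heuristic
```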

AttributeError: 'NoneType' object has no attribute '_inbound_nodes'

狂风中的少年 Submitted on 2020-01-04 07:47:12
Question: I want to implement the loss function defined here. I use FCN-VGG16 to obtain a map x and add an activation layer (x is the output of the FCN-VGG16 net), and then just some operations to get the extracted features:

```python
co_map = Activation('sigmoid')(x)

# add mean values
img = Lambda(AddMean, name='addmean')(img_input)

# img map multiply
img_o = Lambda(HighLight, name='highlightlayer1')([img, co_map])
img_b = Lambda(HighLight, name='highlightlayer2')([img, 1-co_map])

extractor = ResNet50(weights =…
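This error typically appears when a raw tensor expression such as `1 - co_map` is fed into a layer, since the expression itself is not a Keras layer. A self-contained sketch of the usual fix, wrapping the arithmetic in a Lambda (the toy Conv2D/Multiply graph below only stands in for the poster's network):

```python
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(32, 32, 3))
x = layers.Conv2D(3, 3, padding='same')(inp)
co_map = layers.Activation('sigmoid')(x)

# `1 - co_map` on its own is a plain tensor op with no _inbound_nodes;
# wrapping it in a Lambda keeps the functional graph intact.
inv_co_map = layers.Lambda(lambda t: 1.0 - t)(co_map)
img_b = layers.Multiply()([inp, inv_co_map])

model = Model(inp, img_b)
model.summary()
```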

How to access sample weights in a Keras custom loss function supplied by a generator?

无人久伴 Submitted on 2020-01-02 07:44:22
Question: I have a generator function that infinitely cycles over some directories of images and outputs 3-tuples of batches of the form [img1, img2], label, weight, where img1 and img2 are batch_size x M x N x 3 tensors, and label and weight are each batch_size x 1 tensors. I provide this generator to the fit_generator function when training a model with Keras. For this model I have a custom cosine contrastive loss function:

```python
def cosine_constrastive_loss(y_true, y_pred):
    cosine_distance = 1 - y_pred
    margin…
```
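One commonly cited behavior, stated here as an assumption to verify against your Keras version: when the generator yields a third element, Keras itself multiplies the per-sample loss values by those weights before averaging, so the custom loss just needs to return one value per sample rather than a scalar. A sketch (the margin value is a placeholder):

```python
from tensorflow.keras import backend as K

def cosine_contrastive_loss(margin=0.9):
    def loss(y_true, y_pred):
        # Return a per-sample tensor (no final mean): Keras then scales
        # each entry by the sample weight from the generator's 3-tuple
        # before reducing to a scalar.
        cosine_distance = 1.0 - y_pred
        return (y_true * K.square(cosine_distance) +
                (1.0 - y_true) * K.square(K.maximum(margin - cosine_distance, 0.0)))
    return loss
```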

How does keras handle multiple losses?

馋奶兔 Submitted on 2019-12-31 08:45:08
Question: So my question is, if I have something like:

```python
model = Model(inputs=input, outputs=[y1, y2])
l1 = 0.5
l2 = 0.3
model.compile(loss=[loss1, loss2], loss_weights=[l1, l2], ...)
```

What does Keras do with the losses to obtain the final loss? Is it something like:

```python
final_loss = l1*loss1 + l2*loss2
```

Also, what does it mean during training? Is loss2 only used to update the weights of the layers where y2 comes from, or is it used for all the model's layers? I'm pretty confused.

Answer 1: From model…
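For reference: Keras does compute the total as the weighted sum final_loss = l1*loss1 + l2*loss2, and the gradient of that sum flows through every layer on the path to each output. So shared layers receive updates from both terms, while each head is updated only by its own loss. A runnable sketch (toy shapes and losses are assumptions):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inp = layers.Input(shape=(4,))
shared = layers.Dense(8, activation='relu')(inp)   # updated by both losses
y1 = layers.Dense(1, name='y1')(shared)            # updated by loss1 only
y2 = layers.Dense(1, name='y2')(shared)            # updated by loss2 only

model = keras.Model(inputs=inp, outputs=[y1, y2])
model.compile(optimizer='adam',
              loss=['mse', 'mae'],
              loss_weights=[0.5, 0.3])   # total = 0.5*mse + 0.3*mae

x = np.random.rand(16, 4)
model.fit(x, [np.random.rand(16, 1), np.random.rand(16, 1)], verbose=0)
```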

How to do point-wise categorical crossentropy loss in Keras?

风流意气都作罢 Submitted on 2019-12-30 18:27:04
Question: I have a network that produces a 4D output tensor where the value at each position in the spatial dimensions (~pixel) is to be interpreted as the class probabilities for that position. In other words, the output is (num_batches, height, width, num_classes). I have labels of the same size where the real class is coded as one-hot. I would like to calculate the categorical-crossentropy loss using this. Problem #1: The K.softmax function expects a 2D tensor (num_batches, num_classes). Problem #2: I…
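One approach, sketched under the assumption that the network emits raw logits: in newer Keras versions both the softmax and the crossentropy can be taken over the last axis, which treats every spatial position as an independent classification and sidesteps the 2D-tensor restriction:

```python
from tensorflow.keras import backend as K

def pointwise_categorical_crossentropy(y_true, y_pred):
    # Shapes: (num_batches, height, width, num_classes). The crossentropy
    # is computed over the last axis, so each pixel is its own
    # classification problem; the mean then averages over batch, height
    # and width. from_logits=True assumes y_pred are unnormalized scores.
    return K.mean(K.categorical_crossentropy(y_true, y_pred, from_logits=True))
```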

Higher loss penalty for true non-zero predictions

此生再无相见时 Submitted on 2019-12-25 00:16:06
Question: I am building a deep regression network (CNN) to predict a (1000,1) target vector from (7,11) images. The target usually consists of about 90% zeros and only 10% non-zero values. The distribution of (non-)zero values in the targets varies from sample to sample (i.e. there is no global class imbalance). Using mean squared error loss, this led to the network predicting only zeros, which I don't find surprising. My best guess is to write a custom loss function that penalizes errors regarding…
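A hedged sketch of that idea: a weighted MSE whose weight factor (10.0 here, purely a guess to tune) up-weights positions where the target is non-zero, making the all-zeros prediction costly:

```python
from tensorflow.keras import backend as K

def weighted_mse(nonzero_weight=10.0):
    # `nonzero_weight` is a tunable assumption: squared errors at
    # positions with a non-zero target count that many times more than
    # errors at zero positions.
    def loss(y_true, y_pred):
        weights = 1.0 + (nonzero_weight - 1.0) * K.cast(
            K.not_equal(y_true, 0.0), K.floatx())
        return K.mean(weights * K.square(y_true - y_pred))
    return loss
```

Passing it to model.compile(loss=weighted_mse(10.0), ...) keeps the per-sample weighting independent of any global class-balance statistics, matching the per-sample variation described above.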

Hinge loss function gradient w.r.t. input prediction

人盡茶涼 Submitted on 2019-12-24 18:48:59
Question: For an assignment I have to implement both the hinge loss and its partial derivative calculation functions. I got the hinge loss function itself, but I'm having a hard time understanding how to calculate its partial derivative w.r.t. the prediction input. I tried different approaches but none worked. Any help, hints or suggestions will be much appreciated! Here is the analytical expression for the hinge loss function itself: max(0, 1 - y_true * y_pred). And here is my hinge loss function implementation:

```python
def hinge_forward(target_pred,…
```
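A minimal NumPy sketch, assuming labels in {-1, +1} and the standard hinge form above (the second parameter name mirrors the question's truncated signature). The key observation: the subgradient of max(0, 1 - y_true * y_pred) w.r.t. y_pred is -y_true wherever the margin is violated and 0 elsewhere:

```python
import numpy as np

def hinge_forward(target_pred, target_true):
    # Hinge loss: mean over samples of max(0, 1 - y_true * y_pred).
    return np.mean(np.maximum(0.0, 1.0 - target_true * target_pred))

def hinge_grad_input(target_pred, target_true):
    # Subgradient w.r.t. the prediction: -y_true where the margin is
    # violated (1 - y_true * y_pred > 0), 0 elsewhere. The 1/N factor
    # comes from the mean in the forward pass.
    margin_violated = (1.0 - target_true * target_pred) > 0
    return np.where(margin_violated, -target_true, 0.0) / target_pred.size
```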