loss

Higher loss penalty for true non-zero predictions

Submitted by 此生再无相见时 on 2019-12-25 00:16:06
Question: I am building a deep regression network (CNN) to predict a (1000,1) target vector from images (7,11). The target usually consists of about 90% zeros and only 10% non-zero values. The distribution of (non-)zero values in the targets varies from sample to sample (i.e. there is no global class imbalance). Using mean squared error loss, this led to the network predicting only zeros, which I don't find surprising. My best guess is to write a custom loss function that penalizes errors regarding …
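
One direction along those lines is a weighted MSE that up-weights the error on non-zero targets. A minimal sketch in Keras, assuming a hypothetical weight factor nonzero_weight rather than anything taken from the original question:

    import tensorflow as tf
    import tensorflow.keras.backend as K

    def weighted_mse(nonzero_weight=10.0):
        """MSE that multiplies the squared error on non-zero targets by a factor."""
        def loss(y_true, y_pred):
            # weight is 1.0 where the target is zero, nonzero_weight elsewhere
            weights = 1.0 + (nonzero_weight - 1.0) * tf.cast(
                tf.not_equal(y_true, 0.0), y_true.dtype)
            return K.mean(weights * K.square(y_true - y_pred), axis=-1)
        return loss

    # model.compile(optimizer="adam", loss=weighted_mse(nonzero_weight=10.0))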

Comparing MSE loss and cross-entropy loss in terms of convergence

Submitted by 半世苍凉 on 2019-12-24 10:38:40
Question: For a very simple classification problem where I have a target vector [0,0,0,....0] and a prediction vector [0,0.1,0.2,....1], would cross-entropy loss converge better/faster, or would MSE loss? When I plot them it seems to me that MSE loss has a lower error margin. Why would that be? Or for example when I have the target as [1,1,1,1....1] I get the following: Answer 1: You sound a little confused... Comparing the values of MSE & cross-entropy loss and saying that one is lower than the other is like …
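
To make the answer's point concrete, a small NumPy sketch (my illustration) that evaluates both losses on the vectors from the question; the two numbers live on different scales, so comparing their raw magnitudes says nothing about which loss trains better:

    import numpy as np

    y_true = np.zeros(11)                  # target     [0, 0, ..., 0]
    y_pred = np.linspace(0.0, 1.0, 11)     # prediction [0, 0.1, ..., 1]

    mse = np.mean((y_true - y_pred) ** 2)

    eps = 1e-12                            # keep log() away from 0
    bce = -np.mean(y_true * np.log(y_pred + eps)
                   + (1 - y_true) * np.log(1 - y_pred + eps))

    print("MSE: {:.4f}   cross-entropy: {:.4f}".format(mse, bce))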

Loss layer on Keras using two input layers and numpy operations

Submitted by 我是研究僧i on 2019-12-24 02:43:06
Question: I have a loss function implemented that uses numpy and opencv methods. This function also uses the input image and the output of the network. Is it possible to convert the input and the output layers to numpy arrays, compute the loss and use it to optimize the network? Answer 1: No, gradients are needed to perform gradient descent, so if you only have a numerical loss, it cannot be differentiated, in contrast to a symbolic loss that is required by Keras. Your only chance is to implement your loss …
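
As an illustration of the direction the answer points in, a sketch of expressing a loss with symbolic TensorFlow ops instead of numpy/opencv calls, so that gradients can flow through it; the edge-comparison loss below is only a hypothetical stand-in, not the asker's actual function:

    import tensorflow as tf

    def edge_difference_loss(y_true, y_pred):
        # tf.image.sobel_edges is a differentiable counterpart of cv2.Sobel:
        # everything stays a tensor, so Keras can backpropagate through it.
        edges_true = tf.image.sobel_edges(y_true)
        edges_pred = tf.image.sobel_edges(y_pred)
        return tf.reduce_mean(tf.abs(edges_true - edges_pred))

    # model.compile(optimizer="adam", loss=edge_difference_loss)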

Use different optimizers depending on an if statement in TensorFlow

Submitted by ♀尐吖头ヾ on 2019-12-24 01:47:24
Question: I'm currently trying to implement a neural network with two training steps. First I want to reduce the loss_first_part function and then I want to reduce loss_second_part.

    tf.global_variables_initializer().run()
    for epoch in range(nb_epochs):
        if epoch < 10:
            train_step = optimizer.minimize(loss_first_part)
        else:
            train_step = optimizer.minimize(loss_second_part)

The problem is that the initializer should be defined after the optimizer.minimize call. Indeed, I get the following error …
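
A minimal sketch of the usual fix, assuming TF 1.x graph mode: build both minimize ops before creating and running the initializer, then pick one per epoch. The toy losses here only stand in for the question's loss_first_part and loss_second_part:

    import tensorflow as tf

    # Placeholder losses standing in for the question's two loss tensors.
    w = tf.Variable(1.0)
    loss_first_part = tf.square(w - 3.0)
    loss_second_part = tf.square(w + 2.0)

    optimizer = tf.train.AdamOptimizer(learning_rate=0.1)

    # Build both training ops *before* the initializer so the optimizer's
    # slot variables are created and covered by it.
    train_first = optimizer.minimize(loss_first_part)
    train_second = optimizer.minimize(loss_second_part)

    init = tf.global_variables_initializer()
    nb_epochs = 20

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(nb_epochs):
            train_step = train_first if epoch < 10 else train_second
            sess.run(train_step)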

What does fast loss convergence indicate on a CNN?

Submitted by 主宰稳场 on 2019-12-23 22:20:18
Question: I'm training two CNNs (AlexNet and GoogLeNet) in two different DL libraries (Caffe and Tensorflow). The networks were implemented by the dev teams of each library (here and here). I reduced the original ImageNet dataset to 1024 images of 1 category -- but set 1000 categories to classify on the networks. So I trained the CNNs, varying the processing unit (CPU/GPU) and batch sizes, and I observed that the loss converges quickly to near zero (in most cases before 1 epoch is completed), like in this …

Loss in TensorFlow suddenly turns into NaN

Submitted by ﹥>﹥吖頭↗ on 2019-12-22 09:54:43
Question: When I use TensorFlow, the loss suddenly turns into NaN, like this:

    Epoch: 00001 || cost= 0.675003929
    Epoch: 00002 || cost= 0.237375346
    Epoch: 00003 || cost= 0.204962473
    Epoch: 00004 || cost= 0.191322120
    Epoch: 00005 || cost= 0.181427178
    Epoch: 00006 || cost= 0.172107664
    Epoch: 00007 || cost= 0.171604740
    Epoch: 00008 || cost= 0.160334495
    Epoch: 00009 || cost= 0.151639721
    Epoch: 00010 || cost= 0.149983061
    Epoch: 00011 || cost= 0.145890004
    Epoch: 00012 || cost= 0.141182279
    Epoch: 00013 || cost …
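
The question does not show its loss function, but a frequent cause of a sudden NaN is log(0) (or 0/0) inside a hand-written cross-entropy. A hedged sketch of the usual guard, clipping predictions away from 0 and 1 before taking the log:

    import tensorflow as tf

    def safe_binary_cross_entropy(y_true, y_pred, eps=1e-7):
        # Clipping keeps log() away from 0 and 1, a common source of NaN losses.
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        return -tf.reduce_mean(y_true * tf.math.log(y_pred)
                               + (1.0 - y_true) * tf.math.log(1.0 - y_pred))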

Android: 3G to WIFI switch while in the middle of the app = loss of network connectivity

Submitted by 可紊 on 2019-12-22 07:11:29
Question: I am running into an annoying problem with the HTC Legend (Android 2.2). I am not seeing this issue on the Xperia, Galaxy, Nexus, etc. When I launch my app on a 3G connection, fetch some data, then go into the phone Settings and enable WIFI, the phone automatically obtains a WIFI connection, which is favoured over 3G. The trouble is, once I switch back to the app, it appears to have lost all network connectivity and is unable to connect to anything. However, other apps, like the Web Browser for example, have no problem …

Loss does not decrease during training (Word2Vec, Gensim)

Submitted by *爱你&永不变心* on 2019-12-22 00:26:27
Question: What can cause the loss from model.get_latest_training_loss() to increase on each epoch? Code used for training:

    class EpochSaver(CallbackAny2Vec):
        '''Callback to save model after each epoch and show training parameters'''

        def __init__(self, savedir):
            self.savedir = savedir
            self.epoch = 0
            os.makedirs(self.savedir, exist_ok=True)

        def on_epoch_end(self, model):
            savepath = os.path.join(self.savedir, "model_neg{}_epoch.gz".format(self.epoch))
            model.save(savepath)
            print("Epoch saved: {}".format(self …
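
For context, gensim's get_latest_training_loss() returns a running total accumulated over the whole train() call, so a value that grows with every epoch is expected. A minimal sketch (my illustration, not the asker's code) of logging the per-epoch difference instead:

    from gensim.models.callbacks import CallbackAny2Vec

    class LossLogger(CallbackAny2Vec):
        """Print per-epoch loss by differencing the cumulative value."""

        def __init__(self):
            self.epoch = 0
            self.previous = 0.0

        def on_epoch_end(self, model):
            cumulative = model.get_latest_training_loss()
            print("Epoch {}: loss {:.2f}".format(self.epoch, cumulative - self.previous))
            self.previous = cumulative
            self.epoch += 1

    # Word2Vec(sentences, compute_loss=True, callbacks=[LossLogger()])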

Loss calculation over different batch sizes in Keras

Submitted by ℡╲_俬逩灬. on 2019-12-20 02:32:44
Question: I know that in theory, the loss of a network over a batch is just the sum of all the individual losses. This is reflected in the Keras code for calculating total loss. Relevantly:

    for i in range(len(self.outputs)):
        if i in skip_target_indices:
            continue
        y_true = self.targets[i]
        y_pred = self.outputs[i]
        weighted_loss = weighted_losses[i]
        sample_weight = sample_weights[i]
        mask = masks[i]
        loss_weight = loss_weights_list[i]
        with K.name_scope(self.output_names[i] + '_loss'):
            output_loss = weighted …
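
As a quick empirical check of how Keras actually aggregates (it reports a mean per-sample loss, so the value is on the same scale whatever the batch size), a small sketch assuming tf.keras and a toy regression model; the two evaluate() calls should agree up to floating-point noise:

    import numpy as np
    from tensorflow import keras

    # Tiny regression model so the loss is easy to reason about.
    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")

    x = np.random.rand(64, 4).astype("float32")
    y = np.random.rand(64, 1).astype("float32")

    # Only the number of batches changes; the reported loss does not scale
    # with batch_size because it is averaged per sample.
    print(model.evaluate(x, y, batch_size=8, verbose=0))
    print(model.evaluate(x, y, batch_size=32, verbose=0))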