loss

How to get results from custom loss function in Keras?

南笙酒味 submitted on 2019-12-14 03:41:22
Question: I want to implement a custom loss function in Python, and it should work like this pseudocode:

    aux = |Real - Prediction| / Prediction
    errors = []
    if aux <= 0.1: errors.append(0)
    elif 0.1 < aux <= 0.15: errors.append(5/3)
    elif 0.15 < aux <= 0.2: errors.append(5)
    else: errors.append(2000)
    return sum(errors)

I started to define the metric like this:

    def custom_metric(y_true, y_pred):
        # y_true:
        res = K.abs((y_true - y_pred) / y_pred)
        ....

But I do not know how to get the value of the
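A vectorized version of that piecewise penalty can be written with nested tf.where calls so it operates on whole tensors at once. This is a minimal sketch, assuming a TensorFlow/Keras backend; the function name custom_loss and the element-wise tf.where approach are illustrative, not from the question:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def custom_loss(y_true, y_pred):
        # Relative error |real - prediction| / prediction, element-wise.
        aux = K.abs((y_true - y_pred) / y_pred)
        # Piecewise penalty: 0, 5/3, 5, or 2000 depending on which band aux falls in.
        penalty = tf.where(aux <= 0.1, tf.zeros_like(aux),
                  tf.where(aux <= 0.15, tf.fill(tf.shape(aux), 5.0 / 3.0),
                  tf.where(aux <= 0.2, tf.fill(tf.shape(aux), 5.0),
                           tf.fill(tf.shape(aux), 2000.0))))
        # Sum the per-element penalties, mirroring sum(errors) in the pseudocode.
        return K.sum(penalty, axis=-1)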

How to conditionally assign values to tensor [masking for loss function]?

故事扮演 submitted on 2019-12-12 12:32:08
Question: I want to create an L2 loss function that ignores values (=> pixels) where the label has the value 0. The tensor batch[1] contains the labels, while output is a tensor for the net output; both have a shape of (None, 300, 300, 1).

    labels_mask = tf.identity(batch[1])
    labels_mask[labels_mask > 0] = 1
    loss = tf.reduce_sum(tf.square((output - batch[1]) * labels_mask)) / tf.reduce_sum(labels_mask)

My current code yields TypeError: 'Tensor' object does not support item assignment (on the second line). What
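The NumPy-style item assignment on the second line is what fails; the idiomatic TensorFlow replacement is a comparison followed by a cast. A minimal sketch under the assumption of float32 labels (the function name masked_l2_loss is illustrative):

    import tensorflow as tf

    def masked_l2_loss(output, labels):
        # 1.0 where the label is positive, 0.0 elsewhere -- this replaces
        # the unsupported labels_mask[labels_mask > 0] = 1 assignment.
        mask = tf.cast(labels > 0, tf.float32)
        squared_error = tf.square(output - labels) * mask
        # Normalize by the number of unmasked pixels.
        return tf.reduce_sum(squared_error) / tf.reduce_sum(mask)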

How can I implement the Kullback-Leibler loss in TensorFlow?

此生再无相见时 submitted on 2019-12-12 12:14:15
Question: I need to minimize a KL loss in TensorFlow. I tried the function tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None), but I failed. I tried to implement it manually:

    def kl_divergence(p, q):
        return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))

Is it correct?

Answer 1: What you have there is the cross entropy; the KL divergence should be something like:

    def kl_divergence(p, q):
        return tf.reduce_sum(p * tf.log(p / q))

This assumes that p and q are both 1-D tensors of float, of the same
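In practice the answer's formula needs guarding against log(0) and division by zero. A minimal sketch with epsilon clipping; the eps value is an addition here, not part of the original answer:

    import tensorflow as tf

    def kl_divergence(p, q, eps=1e-8):
        # D_KL(p || q) = sum_i p_i * log(p_i / q_i), with clipping so that
        # neither the log nor the division blows up at zero.
        p = tf.clip_by_value(p, eps, 1.0)
        q = tf.clip_by_value(q, eps, 1.0)
        return tf.reduce_sum(p * tf.log(p / q))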

caffe training loss does not converge

佐手、 submitted on 2019-12-12 04:22:58
Question: My training loss does not converge (batch size: 16, average loss: 10). I have tried the following methods:
+ Varying the learning rate lr (the initial lr = 0.002 causes a very high loss, around e+10; with lr = e-6 the loss seems small but still does not converge)
+ Adding initialization for the bias
+ Adding regularization for the bias and weights
This is the network structure and the training loss log. Hope to hear from you. Best regards.
Source: https://stackoverflow.com/questions/41234297/caffe

Triplet loss on text embeddings with keras

≡放荡痞女 submitted on 2019-12-11 16:08:28
Question: I'll start by saying that I'm quite new to Keras and machine learning in general. I'm trying to build an "experimental" model consisting of two parts: (1) an "encoder" which takes a string (containing a long series of attributes; I'm using the DBLP-ACM dataset), builds an embedding of the words of this string (word2vec), and encodes them into a vector (bidirectional LSTM); (2) a trainable model which takes 3 vectors as input (the results of model 1) and uses the triplet loss as its loss function (I already defined it,
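For reference, a common way to express a triplet loss in Keras operates on the three embeddings concatenated along the last axis. The sketch below is illustrative, not the asker's definition; the margin value and the concatenation layout are assumptions:

    from tensorflow.keras import backend as K

    def triplet_loss(y_true, y_pred, margin=0.5):
        # y_pred is assumed to hold the anchor, positive, and negative
        # embeddings concatenated along the last axis.
        dim = K.int_shape(y_pred)[-1] // 3
        anchor = y_pred[:, :dim]
        positive = y_pred[:, dim:2 * dim]
        negative = y_pred[:, 2 * dim:]
        pos_dist = K.sum(K.square(anchor - positive), axis=-1)
        neg_dist = K.sum(K.square(anchor - negative), axis=-1)
        # Hinge on the distance gap; y_true is unused but required by Keras.
        return K.maximum(pos_dist - neg_dist + margin, 0.0)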

python svm function with huber loss

假装没事ソ submitted on 2019-12-11 15:30:10
Question: I need an SVM classifier in Python with a Huber loss function, but its default loss function is hinge loss. Do you know how I can assign a loss function to a Python SVM?

    svc = svm.SVC(kernel='linear', C=1, gamma=1).fit(data, label)

Answer 1: There is really no such thing as an "SVM with Huber loss", as an SVM is literally a linear (or kernelized) model trained with hinge loss. If you change the loss, it stops being an SVM. Consequently, libraries do not have a loss parameter, as changing it does not apply to the
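If a Huber-style loss on a linear classifier is acceptable in place of a true SVM, scikit-learn's SGDClassifier does expose a loss parameter, including 'modified_huber' (a smoothed hinge variant). A minimal sketch with synthetic data for illustration:

    from sklearn.linear_model import SGDClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    # A linear model trained with the modified Huber loss instead of
    # the hinge loss used by svm.SVC.
    clf = SGDClassifier(loss='modified_huber', max_iter=1000, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))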

defined loss function in tensorflow?

陌路散爱 submitted on 2019-12-11 04:51:55
Question: In my project, negative instances far outnumber positive instances, so I want to give positive instances a larger weight. My target is:

    loss = 0.0
    if y_label == 1:
        loss += 100 * cross_entropy
    else:
        loss += cross_entropy

How do I realize this in TensorFlow?

Answer 1: Let losses be a vector (rank-1 tensor) of loss values for the examples in your batch, and let y be the vector of corresponding labels. You could then achieve the result you want with weights = w_pos*y + w_neg*(1.0-y) and loss =
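Completing that idea under the assumption of 0/1 float labels, a vectorized version might look like the sketch below (the weight of 100 comes from the question; the function name is illustrative). TensorFlow also ships tf.nn.weighted_cross_entropy_with_logits, whose pos_weight argument serves the same purpose:

    import tensorflow as tf

    def weighted_loss(losses, y, w_pos=100.0, w_neg=1.0):
        # losses: per-example cross-entropy values, shape (batch,)
        # y: 0/1 labels, shape (batch,)
        weights = w_pos * y + w_neg * (1.0 - y)
        return tf.reduce_sum(weights * losses)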

Euclidean distance loss function for RNN (keras)

风格不统一 submitted on 2019-12-10 18:59:01
Question: I want to set Euclidean distance as the loss function for an LSTM or RNN. What output should such a function have: a float, (batch_size), or (batch_size, timesteps)? The model input X_train is (n_samples, timesteps, data_dim), and Y_train has the same dimensions. Example code:

    def euc_dist_keras(x, y):
        return K.sqrt(K.sum(K.square(x - y), axis=-1, keepdims=True))

    model = Sequential()
    model.add(SimpleRNN(n_units, activation='relu', input_shape=(timesteps, data_dim), return_sequences=True))
    model.add(Dense(n
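Under standard Keras conventions, a custom loss may return per-timestep values of shape (batch_size, timesteps); Keras then reduces them to a scalar by averaging. A minimal sketch with placeholder sizes (not the asker's actual dimensions):

    from tensorflow.keras import backend as K
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import SimpleRNN, Dense

    def euc_dist_keras(y_true, y_pred):
        # Per-timestep Euclidean distance, shape (batch_size, timesteps);
        # Keras averages this down to a scalar loss.
        return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1))

    timesteps, data_dim, n_units = 10, 8, 32  # placeholder sizes
    model = Sequential()
    model.add(SimpleRNN(n_units, activation='relu',
                        input_shape=(timesteps, data_dim),
                        return_sequences=True))
    model.add(Dense(data_dim))
    model.compile(optimizer='adam', loss=euc_dist_keras)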

MySQL Dump Limit? MySQL overall database size limit?

旧城冷巷雨未停 submitted on 2019-12-08 06:20:28
Question: A client just had ~1000 rows of data (the most recent, of course) go missing from one of their tables. Doing some forensics, I found that the "last_updated_date" in all of the other rows of said table was also set to roughly the same time the deletion occurred. This is not one of their larger tables. Another oddity is that the mysqldumps for the last week are all the exact same size -- 10375605093 bytes -- whereas previous dumps grew by about 0.5 GB each. The mysqldump command is standard: /path/to

AttributeError: 'Tensor' object has no attribute '_keras_history' when implementing perceptual loss with pretrained VGG using Keras

本秂侑毒 submitted on 2019-12-08 05:07:50
Question: I'm trying to implement the VGG perceptual loss for a model trained on video inputs. I implemented the perceptual loss following the recommendation in the question AttributeError: 'Tensor' object has no attribute '_keras_history'. My mainModel looks like the following graph: [Graph of mainModel]. The input size is (batchsize, frame_num, row, col, channel), and I want to get the perceptual loss for the middle frame, that is, frame_num/2. So, I implemented the following lossModel: lossModel = VGG19
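For context, a perceptual-loss model is commonly built by freezing a pretrained VGG19 and comparing activations from an intermediate layer. The sketch below is one such setup, not the asker's code; the input size and the choice of 'block3_conv3' are assumptions:

    from tensorflow.keras.applications import VGG19
    from tensorflow.keras.models import Model
    from tensorflow.keras import backend as K

    # Frozen VGG19 feature extractor; an intermediate layer's activations
    # serve as the "perceptual" representation.
    vgg = VGG19(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    vgg.trainable = False
    lossModel = Model(inputs=vgg.input,
                      outputs=vgg.get_layer('block3_conv3').output)

    def perceptual_loss(y_true, y_pred):
        # Mean squared error between the VGG feature maps of the two images.
        return K.mean(K.square(lossModel(y_true) - lossModel(y_pred)))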