loss-function

Training a multi-output Keras model on a joint loss function

Submitted by 空扰寡人 on 2020-02-29 04:49:46

Question: I'm writing two joint decoders in Keras, with one common input, two separate outputs, and a loss function that takes both outputs into account. The problem I have is with the loss function. Here is minimal Keras code with which you can reproduce the error:

    import tensorflow as tf
    from scat import *
    from keras.layers import Input, Reshape, Permute, Lambda, Flatten
    from keras.layers.core import Dense
    from keras.layers.advanced_activations import LeakyReLU
    from keras.models import Model
    from
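A common workaround for this situation is to concatenate the two decoder outputs so that a single loss function can see both at once. The sketch below uses assumed shapes and layer names, not the asker's actual model:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical sketch: one shared input, two decoder heads, outputs
# concatenated so one loss function receives both predictions together.
inp = keras.Input(shape=(16,))
h = layers.Dense(32, activation="relu")(inp)
out_a = layers.Dense(8)(h)
out_b = layers.Dense(8)(h)
both = layers.Concatenate(name="joint")([out_a, out_b])

model = keras.Model(inputs=inp, outputs=both)

def joint_loss(y_true, y_pred):
    # Split the concatenated tensor back into the two decoder outputs.
    pred_a, pred_b = y_pred[:, :8], y_pred[:, 8:]
    true_a, true_b = y_true[:, :8], y_true[:, 8:]
    mse = tf.reduce_mean(tf.square(pred_a - true_a)) \
        + tf.reduce_mean(tf.square(pred_b - true_b))
    coupling = tf.reduce_mean(tf.square(pred_a - pred_b))  # joint term
    return mse + 0.1 * coupling

model.compile(optimizer="adam", loss=joint_loss)
```

The joint term here (keeping the two heads close) is only a placeholder for whatever coupling the asker's real loss needs.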

Very large loss values when training multiple regression model in Keras

Submitted by ╄→尐↘猪︶ㄣ on 2020-02-06 07:38:10

Question: I was trying to build a multiple regression model to predict housing prices using the following features:

    [bedrooms bathrooms sqft_living view grade] = [0.09375 0.266667 0.149582 0.0 0.6]

I standardized and scaled the features using sklearn.preprocessing.MinMaxScaler. I used Keras to build the model:

    def build_model(X_train):
        model = Sequential()
        model.add(Dense(5, activation='relu', input_shape=X_train.shape[1:]))
        model.add(Dense(1))
        optimizer = Adam(lr=0.001)
        model.compile(loss
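A frequent cause of very large MSE values in this setup (an assumption about this post, but a common pitfall) is scaling the features while leaving the target in raw units: house prices in the hundreds of thousands make squared errors astronomically large. A small sketch with made-up prices, scaling the target as well:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical target values in dollars; scale y into [0, 1] so the MSE
# lands in a readable range comparable to the scaled features.
y = np.array([[221900.0], [538000.0], [180000.0], [604000.0]])
y_scaler = MinMaxScaler()
y_scaled = y_scaler.fit_transform(y)

# Predictions come back in scaled units; invert for reporting in dollars.
y_back = y_scaler.inverse_transform(y_scaled)
```

Keep the fitted scaler around: the same `inverse_transform` must be applied to the model's predictions at evaluation time.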

Custom loss function in Keras to penalize false negatives

Submitted by [亡魂溺海] on 2020-02-01 02:22:47

Question: I am working on a medical dataset where I am trying to have as few false negatives as possible. A prediction of "disease when actually no disease" is okay for me, but a prediction of "no disease when actually a disease" is not. That is, I am okay with FP but not with FN. After doing some research, I found approaches such as keeping a higher learning rate for one class, using class weights, and ensemble learning with specificity/sensitivity. I achieved a result near the desired one using class weights like
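Beyond class weights, false negatives can be penalized directly inside the loss. The sketch below is a weighted binary cross-entropy where the positive-class term gets a larger multiplier; `fn_weight` is a hypothetical hyperparameter, not from the post:

```python
import tensorflow as tf

# Sketch: weighted binary cross-entropy. Missing a positive (a false
# negative) costs fn_weight times more than a false positive.
def weighted_bce(fn_weight=5.0):
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        fn_term = -fn_weight * y_true * tf.math.log(y_pred)
        fp_term = -(1.0 - y_true) * tf.math.log(1.0 - y_pred)
        return tf.reduce_mean(fn_term + fp_term)
    return loss

# model.compile(optimizer='adam', loss=weighted_bce(fn_weight=5.0))
```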

BCEWithLogitsLoss in Keras

Submitted by 蓝咒 on 2020-01-25 00:25:12

Question: How do I implement BCEWithLogitsLoss in Keras and use it as a custom loss function with TensorFlow as the backend? I have used BCEWithLogitsLoss in PyTorch, where it is defined in torch. How can I implement the same in Keras?

Answer 1: In TensorFlow, you can directly call tf.nn.sigmoid_cross_entropy_with_logits, which works in both TensorFlow 1.x and 2.0. If you want to stick to the Keras API, use tf.losses.BinaryCrossentropy and set from_logits=True in the constructor call. Unlike PyTorch, there are not
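The answer above can be sketched as follows; the logits and labels are made up for illustration:

```python
import tensorflow as tf

# BinaryCrossentropy(from_logits=True) is the Keras counterpart of
# PyTorch's BCEWithLogitsLoss: sigmoid and cross-entropy fused together,
# which is more numerically stable than applying sigmoid separately.
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)

logits = tf.constant([[2.0], [-1.0]])   # raw scores, no sigmoid applied
labels = tf.constant([[1.0], [0.0]])
loss = bce_logits(labels, logits)
```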

Keras/Tensorflow: Combined Loss function for single output

Submitted by 南楼画角 on 2020-01-24 11:46:46

Question: I have only one output for my model, but I would like to combine two different loss functions:

    def get_model():
        # create the model here
        model = Model(inputs=image, outputs=output)

    alpha = 0.2
    model.compile(loss=[mse, gse], loss_weights=[1-alpha, alpha], ...)

but it complains that I need to have two outputs because I defined two losses:

    ValueError: When passing a list as loss, it should have one entry per model outputs. The model has 1 outputs, but you passed loss=[<function mse at
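The standard workaround is to wrap both losses in a single callable so Keras sees exactly one loss per output. In the sketch below, `gse` is a hypothetical stand-in for the asker's second loss:

```python
import tensorflow as tf
from tensorflow import keras

mse = keras.losses.MeanSquaredError()

def gse(y_true, y_pred):
    # Placeholder second loss (mean absolute error here, for illustration).
    return tf.reduce_mean(tf.abs(y_true - y_pred))

def combined_loss(alpha=0.2):
    # One callable, weighted sum of both losses: Keras sees a single loss.
    def loss(y_true, y_pred):
        return (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred)
    return loss

# model.compile(optimizer='adam', loss=combined_loss(alpha=0.2))
```

This keeps the `loss_weights` idea (the `alpha` blend) while avoiding the one-loss-per-output check entirely.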

What function defines accuracy in Keras when the loss is mean squared error (MSE)?

Submitted by 时光总嘲笑我的痴心妄想 on 2020-01-18 02:22:35

Question: How is accuracy defined when the loss function is mean squared error? Is it mean absolute percentage error? The model I use has a linear output activation and is compiled with loss='mean_squared_error':

    model.add(Dense(1))
    model.add(Activation('linear'))  # number
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

and the output looks like this:

    Epoch 99/100
    1000/1000 [==============================] - 687s 687ms/step - loss: 0.0463 - acc: 0.9689 - val_loss: 3.7303 -
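With a single-unit output, older Keras versions resolve the generic 'accuracy' metric to binary_accuracy, which rounds predictions and counts exact matches with the targets (my reading of the Keras source; this is rarely meaningful for continuous regression targets). A minimal NumPy rendition of that behavior:

```python
import numpy as np

# Sketch of Keras's binary_accuracy: round predictions, count matches.
def binary_accuracy(y_true, y_pred):
    return np.mean(np.equal(y_true, np.round(y_pred)).astype(float))

y_true = np.array([0.0, 1.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.8, 0.4, 0.2])   # rounds to [0, 1, 0, 0]
acc = binary_accuracy(y_true, y_pred)     # 3 of 4 match
```

For a genuine regression model, a metric such as 'mae' or 'mape' is the appropriate choice instead of 'accuracy'.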


I get an error while trying to customize my loss function

Submitted by ╄→尐↘猪︶ㄣ on 2020-01-16 19:06:57

Question: I am trying to create a custom loss function for my deep learning model, and I run into an error. The code below is not what I actually want to use, but if I understand how to make this small loss function work, I think I will be able to make my longer loss function work. So I am essentially asking for help to make the following function work:

    model.compile(optimizer='rmsprop', loss=try_loss(pic_try), metrics=['accuracy'])

    def try_loss(pic):
        def try_2_loss(y
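The usual pattern for a loss that needs an extra argument is a closure: the outer function captures the argument, and the inner function keeps the `(y_true, y_pred)` signature Keras requires. The loss body below is hypothetical, matching only the question's function names:

```python
import tensorflow as tf

def try_loss(pic):
    # Capture the extra tensor once; Keras never sees this argument.
    pic = tf.convert_to_tensor(pic, dtype=tf.float32)
    def try_2_loss(y_true, y_pred):
        # Hypothetical body: penalize distance from both the target and pic.
        return tf.reduce_mean(tf.square(y_pred - y_true)
                              + tf.square(y_pred - pic))
    return try_2_loss

# model.compile(optimizer='rmsprop', loss=try_loss(pic_try),
#               metrics=['accuracy'])
```

Note that the function must be defined before the `model.compile(...)` call that references it.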

weighting true positives vs true negatives

Submitted by 柔情痞子 on 2020-01-16 15:51:07

Question: This loss function in TensorFlow is used in Keras/TensorFlow to weight binary decisions. It weights false positives against false negatives:

    targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))

The argument pos_weight is used as a multiplier for the positive targets:

    targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))

Does anybody have suggestions for how, in addition, true positives could be weighted against true
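The formula quoted above matches tf.nn.weighted_cross_entropy_with_logits (my identification; the post does not name the function). A small sketch with made-up logits and targets:

```python
import tensorflow as tf

# pos_weight multiplies only the positive-target term, trading recall
# for precision: pos_weight > 1 penalizes false negatives more heavily.
logits = tf.constant([[1.5], [-0.5]])
targets = tf.constant([[1.0], [0.0]])

loss = tf.nn.weighted_cross_entropy_with_logits(
    labels=targets, logits=logits, pos_weight=2.0)
```

Weighting true positives against true negatives, as the question asks, is not covered by pos_weight and would require a custom loss with separate multipliers on each of the four terms.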