I'm trying to create a simple weighted loss function.
Say I have input dimensions 100 * 5, and output dimensions also 100 * 5. I also have a weight matrix of the same dimensions (100 * 5).
In model.fit the batch size is 32 by default, which is where that number comes from. Here's what's happening:

In custom_loss_1, the tensor K.abs(y_true - y_pred) has shape (batch_size=32, 5), while the numpy array weights has shape (100, 5). This is an invalid multiplication, since the dimensions don't agree and broadcasting can't be applied.
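You can reproduce the failure in plain NumPy, since the backend tensors follow the same broadcasting rules (the (32, 5) array below stands in for K.abs(y_true - y_pred)):

import numpy as np

batch_diff = np.ones((32, 5))     # stands in for the per-batch tensor
full_weights = np.ones((100, 5))  # the full weight matrix

try:
    batch_diff * full_weights
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (32,5) (100,5)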
In custom_loss_2, this problem doesn't exist because you're multiplying two tensors with the same shape (batch_size=32, 5).
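Your custom_loss_2 isn't reproduced here, but any loss built only from tensors derived from y_true and y_pred has this property. A minimal hypothetical sketch of such a loss (the weighting by K.abs(y_true) is purely illustrative, not your actual code):

import keras.backend as K

def loss_like_custom_loss_2(y_true, y_pred):
    # Hypothetical example: both factors come from the batch tensors,
    # so both have shape (batch_size, 5) and multiply element-wise
    per_element_weight = K.abs(y_true)
    return K.mean(K.abs(y_true - y_pred) * per_element_weight)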
In custom_loss_3, the problem is the same as in custom_loss_1, because converting weights into a Keras variable doesn't change its shape.
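You can verify this by checking the variable's static shape with the Keras backend:

import numpy as np
import keras.backend as K

weights = np.random.randn(100, 5)
weights_var = K.variable(weights)  # now a backend tensor, but...
print(K.int_shape(weights_var))    # (100, 5): still incompatible with (batch_size, 5)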
UPDATE: It seems you want to give a different weight to each element in each training sample, so the weights array should indeed have shape (100, 5).

In this case, I would feed the weights array into your model as an additional input and then use that tensor within the loss function:
import numpy as np
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
from functools import partial

def custom_loss_4(y_true, y_pred, weights):
    # Weighted mean absolute error: weights has the same shape as y_true/y_pred
    return K.mean(K.abs(y_true - y_pred) * weights)

# Toy data: targets are a slightly noisy copy of the inputs
train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)

# The weights enter the model as a second input, so inside the loss they
# arrive batch by batch with shape (batch_size, 5), matching y_true and y_pred
input_layer = Input(shape=(5,))
weights_tensor = Input(shape=(5,))
out = Dense(5)(input_layer)

# Bind the weights tensor as the extra argument of the loss
cl4 = partial(custom_loss_4, weights=weights_tensor)

model = Model([input_layer, weights_tensor], out)
model.compile('adam', cl4)
model.fit(x=[train_X, weights], y=train_Y, epochs=10)
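One practical note: the weights input never reaches the output, so it only matters during training. At prediction time you still have to feed something for the second input, but any array of the right shape will do (ones are used below purely as a placeholder):

test_X = np.random.randn(10, 5)
preds = model.predict([test_X, np.ones((10, 5))])  # dummy weights, ignored by the output
print(preds.shape)  # (10, 5)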