How to do point-wise categorical crossentropy loss in Keras?

栀梦 2021-01-18 02:26

I have a network that produces a 4D output tensor where the value at each position in the spatial dimensions (~pixel) is to be interpreted as the class probabilities for that position, i.e. the output has shape (num_batches, height, width, num_classes). How can I apply a point-wise (per-pixel) categorical crossentropy loss to this output in Keras? And is taking categorical crossentropy over all of the width*height predictions at once equivalent to computing it per pixel and averaging?

4 Answers
  • 2021-01-18 03:01

    Just flatten the output to a 2D tensor of size (num_batches, height * width * num_classes). You can do this with the Flatten layer. Ensure that your y is flattened the same way (normally calling y = y.reshape((num_batches, height * width * num_classes)) is enough).

    For your second question, using categorical crossentropy over all width*height predictions is essentially the same as averaging the categorical crossentropy over each of the width*height predictions (by the definition of categorical crossentropy).
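
    A rough sketch of this approach (the input shape, num_classes and the dummy data below are placeholders, not from the question):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    num_classes, height, width = 3, 32, 32                  # assumed dimensions

    inp = keras.Input(shape=(height, width, 3))             # assumed 3-channel input
    x = layers.Conv2D(num_classes, (1, 1), activation='softmax')(inp)  # per-pixel class probabilities
    out = layers.Flatten()(x)                                # (num_batches, height * width * num_classes)
    model = keras.Model(inp, out)
    model.compile(loss='categorical_crossentropy', optimizer='adam')

    # Flatten y the same way before fitting:
    x_train = np.random.rand(8, height, width, 3).astype('float32')
    y = np.eye(num_classes)[np.random.randint(0, num_classes, (8, height, width))]  # one-hot targets, 4D
    model.fit(x_train, y.reshape((8, height * width * num_classes)), epochs=1, verbose=0)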

  • 2021-01-18 03:02

    It seems that you can now simply use a softmax activation on the last Conv2D layer, specify categorical_crossentropy as the loss, and train on the image directly, without any reshaping tricks or a custom loss function. I've tried overfitting a dummy dataset and it works well. Try it!

    inp = keras.Input(...)
    # define your model here
    out = keras.layers.Conv2D(classes, (1, 1), activation='softmax')(...)  # per-pixel softmax over the class axis
    model = keras.Model(inputs=[inp], outputs=[out], name='unet')
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    model.fit(tensor4d, tensor4d)  # one-hot targets of shape (samples, height, width, classes)
    

    You can also compile with sparse_categorical_crossentropy and then train with targets of shape (samples, height, width), where each pixel value is a class label: model.fit(tensor4d, tensor3d)
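
    A sketch of that sparse variant under the same assumptions (shapes and dummy data are placeholders):

    import numpy as np
    from tensorflow import keras

    num_classes, height, width = 3, 32, 32

    inp = keras.Input(shape=(height, width, 3))
    out = keras.layers.Conv2D(num_classes, (1, 1), activation='softmax')(inp)
    model = keras.Model(inp, out)
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

    tensor4d = np.random.rand(4, height, width, 3).astype('float32')        # dummy input images
    tensor3d = np.random.randint(0, num_classes, size=(4, height, width))   # integer class label per pixel
    model.fit(tensor4d, tensor3d, epochs=1, verbose=0)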

    The idea is that softmax and categorical_crossentropy are applied to the last axis (you can check the keras.backend.softmax and keras.backend.categorical_crossentropy docs).
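
    A quick illustrative check of that last-axis behaviour (not from the original answer):

    import numpy as np
    from tensorflow.keras import backend as K

    x = K.constant(np.random.rand(2, 4, 4, 3))   # (samples, height, width, classes)
    p = K.softmax(x)                              # normalizes over the last axis by default
    print(K.eval(K.sum(p, axis=-1)))              # ~1.0 at every spatial position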

    PS: I use keras from tensorflow.keras (TensorFlow 2).

    Update: I have trained on my real dataset and it is working as well.

  • 2021-01-18 03:06

    I found this issue, which confirms my intuition.

    In short: the softmax will take 2D or 3D inputs. If the input is 3D, Keras assumes a shape of (samples, time_dimension, num_classes) and applies the softmax to the last dimension. For some weird reason, it doesn't do that for 4D tensors.

    Solution: reshape your output into a sequence of pixels

    reshaped_output = Reshape((height*width, num_classes))(output_tensor)
    

    Then apply your softmax

    new_output = Activation('softmax')(reshaped_output) 
    

    And then either you reshape your target tensors to 2D as well, or you reshape that last layer back into (width, height, num_classes). A consolidated sketch follows below.
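
    Put together, a rough sketch of the trick (layer choices, shapes and the 3-channel input are assumptions for illustration):

    from tensorflow import keras
    from tensorflow.keras.layers import Conv2D, Reshape, Activation

    num_classes, height, width = 3, 32, 32

    inp = keras.Input(shape=(height, width, 3))
    scores = Conv2D(num_classes, (1, 1))(inp)                 # per-pixel class scores
    seq = Reshape((height * width, num_classes))(scores)      # a sequence of pixels
    probs = Activation('softmax')(seq)                        # softmax over the class axis
    out = Reshape((height, width, num_classes))(probs)        # optional: back to image shape
    model = keras.Model(inp, out)
    model.compile(loss='categorical_crossentropy', optimizer='adam')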

    Otherwise, something I would try if I weren't on my phone right now is TimeDistributed(Activation('softmax')). But I have no idea if that would work... I'll try it later.

    I hope this helps :-)

  • 2021-01-18 03:06

    You could also avoid reshaping anything and define both the softmax and the loss yourself. Here is a softmax that is applied to the last input dimension (as in the TF backend):

    from keras import backend as K  # or: from tensorflow.keras import backend as K

    def image_softmax(input):
        # Numerically stable softmax over the last (class) axis.
        label_dim = -1
        d = K.exp(input - K.max(input, axis=label_dim, keepdims=True))
        return d / K.sum(d, axis=label_dim, keepdims=True)
    

    and here is the loss (there is no need to reshape anything):

    __EPS = 1e-5

    def image_categorical_crossentropy(y_true, y_pred):
        # Clip to avoid log(0), then average the cross-entropy over every pixel and class.
        y_pred = K.clip(y_pred, __EPS, 1 - __EPS)
        return -K.mean(y_true * K.log(y_pred) + (1 - y_true) * K.log(1 - y_pred))
    

    No further reshapes are needed.
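
    A hedged usage sketch, assuming the two functions above and a minimal model (the layer sizes and input shape are placeholders):

    import keras
    from keras.layers import Input, Conv2D, Activation

    num_classes, height, width = 3, 32, 32

    inp = Input(shape=(height, width, 3))
    x = Conv2D(num_classes, (1, 1))(inp)          # raw per-pixel scores
    out = Activation(image_softmax)(x)            # custom softmax over the class axis
    model = keras.Model(inputs=inp, outputs=out)
    model.compile(loss=image_categorical_crossentropy, optimizer='adam')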
