facenet triplet loss with keras

野趣味 2021-01-31 09:50

I am trying to implement FaceNet in Keras with the TensorFlow backend, and I have a problem with the triplet loss.

I call the fit function with 3*n number of images and t

4 Answers
  • 2021-01-31 10:25

    Are you constraining your embeddings to "be on a d-dimensional hypersphere"? Try running tf.nn.l2_normalize on your embeddings right after they come out of the CNN.

    The problem could be that the embeddings are being smart-alecs: one easy way to reduce the loss is simply to collapse everything to zero. l2_normalize forces them to be unit length.

    It looks like you'll want to add the normalization right after the last average pool.
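    As a minimal sketch (the toy CNN below is hypothetical, not the asker's actual model), the normalization can be attached as a Lambda layer on top of the embedding layer:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical stand-in backbone; substitute your own CNN.
inp = layers.Input(shape=(160, 160, 3))
x = layers.Conv2D(32, 3, activation="relu")(inp)
x = layers.GlobalAveragePooling2D()(x)  # the "last average pool"
x = layers.Dense(128)(x)                # raw 128-d embedding
# Force every embedding onto the unit hypersphere.
emb = layers.Lambda(lambda t: tf.nn.l2_normalize(t, axis=1))(x)
model = models.Model(inp, emb)
```

    With this in place every embedding has unit length, so shrinking all outputs toward zero is no longer a way to cheat the loss down.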

  • 2021-01-31 10:28

    What could have happened, other than the learning rate simply being too high, is that you effectively used an unstable triplet selection strategy. If, for example, you only use 'hard triplets' (triplets where the a-n distance is smaller than the a-p distance), your network weights might collapse all embeddings to a single point, making the loss always equal to the margin (your _alpha), because all embedding distances are zero.

    This can be fixed by also using other kinds of triplets, such as 'semi-hard triplets', where a-p is smaller than a-n but the gap between them is still smaller than the margin. So it is worth checking which kinds of triplets your selection strategy actually produces. It is explained in more detail in this blog post: https://omoindrot.github.io/triplet-loss
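    To make the categories concrete, here is a small hypothetical helper (the function name and thresholds are illustrative, not from the post) that labels a triplet from its anchor-positive and anchor-negative distances:

```python
def classify_triplet(d_ap, d_an, margin=0.2):
    """Label a triplet by its anchor-positive (d_ap) and anchor-negative (d_an) distances."""
    if d_an < d_ap:
        return "hard"         # negative is closer to the anchor than the positive
    if d_an < d_ap + margin:
        return "semi-hard"    # negative is farther, but still inside the margin
    return "easy"             # easy triplets contribute zero loss
```

    Mining only from the "hard" bucket is exactly the collapse-prone strategy the answer warns about; mixing in semi-hard triplets keeps the gradient informative.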

  • 2021-01-31 10:40

    I have met the same problem and did some research. I think it is because triplet loss needs multiple inputs, which may cause the network to generate outputs like that. I haven't fixed the problem yet, but you can check the Keras issue page for more details: https://github.com/keras-team/keras/issues/9498.

    On that issue page I implemented a fake dataset and a fake triplet loss to reproduce the problem; after I changed the input structure of the network, the loss became normal.
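    A hedged sketch of the kind of restructuring discussed in the issue thread: instead of three separate input branches, feed one stacked batch through a single shared network and split the embeddings inside the loss. The batch layout and names below are assumptions for illustration, not the issue author's exact code.

```python
import tensorflow as tf

def make_triplet_loss(margin=0.2):
    def loss(_y_true, y_pred):
        # Batch is assumed laid out as [anchors..., positives..., negatives...].
        a, p, n = tf.split(y_pred, 3, axis=0)
        d_ap = tf.reduce_sum(tf.square(a - p), axis=1)  # squared anchor-positive distance
        d_an = tf.reduce_sum(tf.square(a - n), axis=1)  # squared anchor-negative distance
        return tf.reduce_mean(tf.maximum(d_ap - d_an + margin, 0.0))
    return loss
```

    Because the loss slices one output tensor, the model keeps a single input and the problematic multi-input wiring goes away.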

  • 2021-01-31 10:42

    The loss function in TensorFlow requires a list of labels, i.e. a list of integers. I think you are passing a 2D matrix, i.e. a one-hot encoding.

    Try this:

    import keras.backend as K
    # tf.contrib lives under the tensorflow package (TF 1.x)
    from tensorflow.contrib.losses.metric_learning import triplet_semihard_loss

    def loss(y_true, y_pred):
        # convert one-hot labels back to integer class ids
        y_true = K.argmax(y_true, axis=-1)
        return triplet_semihard_loss(labels=y_true, embeddings=y_pred, margin=1.)
    