How to calculate F1 Macro in Keras?

逝去的感伤 2020-11-28 23:13

I've tried to use the code that Keras provided before it was removed. Here's the code:

from keras import backend as K

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision


        
6 Answers
  • 2020-11-28 23:15

    Using a Keras metric function is not the right way to calculate F1, AUC, or similar metrics.

    The reason is that the metric function is called at every batch step during validation, and Keras then averages the per-batch results. The average of per-batch F1 scores is not the same as the F1 score of the whole validation set.
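
    A small hedged sketch (plain NumPy/scikit-learn, not from the original answer) showing the difference: the mean of per-batch F1 scores does not equal the F1 score computed over all samples at once, because F1 is not linear in the underlying counts:

    import numpy as np
    from sklearn.metrics import f1_score

    # two hypothetical validation "batches"
    y_true_batches = [np.array([1, 0]), np.array([1, 1, 1, 1, 0, 0])]
    y_pred_batches = [np.array([1, 0]), np.array([1, 0, 0, 0, 0, 0])]

    # what a batch-wise metric effectively reports: the average of per-batch F1 scores
    per_batch = [f1_score(t, p) for t, p in zip(y_true_batches, y_pred_batches)]
    print(np.mean(per_batch))        # 0.7  (mean of 1.0 and 0.4)

    # the F1 score over the whole set, which is the value you actually want
    y_true = np.concatenate(y_true_batches)
    y_pred = np.concatenate(y_pred_batches)
    print(f1_score(y_true, y_pred))  # ~0.571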

    That is why the F1 score was removed from the built-in metric functions in Keras. See here:

    • https://github.com/keras-team/keras/commit/a56b1a55182acf061b1eb2e2c86b48193a0e88f7
    • https://github.com/keras-team/keras/issues/5794

    The right way to do this is to use a custom callback, for example:

    • https://github.com/PhilipMay/mltb#module-keras
    • https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2
  • 2020-11-28 23:18

    I also suggest this work-around:

    • install the keras_metrics package by ybubnov
    • call model.fit(epochs=1, ...) inside a for loop, taking advantage of the precision/recall metrics output after every epoch (a sketch of the required compile step follows below)
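
    For val_precision and val_recall to appear in the History object, the model has to be compiled with those metrics. A hedged sketch of that step (the exact keys in history can vary across Keras and keras_metrics versions):

        import keras_metrics

        model.compile(optimizer='adam',
                      loss='binary_crossentropy',
                      metrics=[keras_metrics.precision(), keras_metrics.recall()])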

    Something like this:

        for epoch in range(epochs):
            model_hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=1,
                                   verbose=2, validation_data=(X_val, Y_val))

            # keras_metrics logs precision/recall on the validation set after each epoch
            precision = model_hist.history['val_precision'][0]
            recall = model_hist.history['val_recall'][0]
            f_score = (2.0 * precision * recall) / (precision + recall)  # undefined if both are 0
            print('F1-SCORE {}'.format(f_score))
    
  • 2020-11-28 23:20

    This is a streaming custom F1 metric that I made using subclassing. It works for TensorFlow 2.0 beta, but I haven't tried it on other versions. It keeps track of true positives, predicted positives, and all possible positives throughout the whole epoch and then calculates the F1 score at the end of the epoch. I think the other answers only give the F1 score for each batch, which isn't really the metric we want when we care about the F1 score of all the data.

    I got a raw, unedited copy of Aurélien Géron's new book Hands-On Machine Learning with Scikit-Learn & TensorFlow 2.0 and highly recommend it. It's how I learned to write this custom F1 metric using subclassing, and it's hands down the most comprehensive TensorFlow book I've ever seen. TensorFlow is seriously a pain to learn, and this book lays down the coding groundwork.

    FYI: in the metrics list passed to compile, I had to include the parentheses in F1_score() (i.e., instantiate the class) or else it wouldn't work.

    pip install tensorflow==2.0.0-beta1

    from sklearn.model_selection import train_test_split
    import tensorflow as tf
    from tensorflow import keras
    import numpy as np
    
    def create_f1():
        def f1_function(y_true, y_pred):
            # threshold the sigmoid outputs at 0.5 to get hard 0/1 predictions
            y_pred_binary = tf.where(y_pred >= 0.5, 1., 0.)
            tp = tf.reduce_sum(y_true * y_pred_binary)          # true positives in this batch
            predicted_positives = tf.reduce_sum(y_pred_binary)  # all predicted positives
            possible_positives = tf.reduce_sum(y_true)          # all actual positives
            return tp, predicted_positives, possible_positives
        return f1_function


    class F1_score(keras.metrics.Metric):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)  # handles base args (e.g., dtype)
            self.f1_function = create_f1()
            # streaming counters, accumulated over the whole epoch
            self.tp_count = self.add_weight("tp_count", initializer="zeros")
            self.all_predicted_positives = self.add_weight('all_predicted_positives', initializer='zeros')
            self.all_possible_positives = self.add_weight('all_possible_positives', initializer='zeros')

        def update_state(self, y_true, y_pred, sample_weight=None):
            # called once per batch: add this batch's counts to the running totals
            tp, predicted_positives, possible_positives = self.f1_function(y_true, y_pred)
            self.tp_count.assign_add(tp)
            self.all_predicted_positives.assign_add(predicted_positives)
            self.all_possible_positives.assign_add(possible_positives)

        def result(self):
            # called whenever the metric is read: compute F1 from the accumulated counts
            precision = self.tp_count / self.all_predicted_positives
            recall = self.tp_count / self.all_possible_positives
            f1 = 2 * (precision * recall) / (precision + recall)
            return f1
    
    X = np.random.random(size=(1000, 10))     
    Y = np.random.randint(0, 2, size=(1000,))
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    
    model = keras.models.Sequential([
        keras.layers.Dense(5, input_shape=[X.shape[1], ]),
        keras.layers.Dense(1, activation='sigmoid')
    ])
    
    model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=[F1_score()])
    
    history = model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
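
    One caveat, not from the original answer: result() divides by the accumulated counts, which can be zero (for example before any positive example has been seen), producing NaN. A hedged drop-in variant that borrows the epsilon trick from the last answer on this page:

    def result(self):
        # drop-in replacement for F1_score.result() above
        eps = tf.keras.backend.epsilon()  # tiny constant to avoid division by zero / NaN
        precision = self.tp_count / (self.all_predicted_positives + eps)
        recall = self.tp_count / (self.all_possible_positives + eps)
        return 2 * (precision * recall) / (precision + recall + eps)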
    
  • 2020-11-28 23:24

    As @Pedia said in his comment above, on_epoch_end, as stated in github.com/fchollet/keras/issues/5400, is the best approach.
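
    A minimal hedged sketch of that idea (not from the original answer): a callback that computes the macro-averaged F1 on held-out data at the end of every epoch with scikit-learn. X_val and y_val are assumed placeholders for your validation arrays:

    import numpy as np
    from sklearn.metrics import f1_score
    from tensorflow.keras.callbacks import Callback

    class MacroF1(Callback):
        def __init__(self, X_val, y_val):
            super().__init__()
            self.X_val = X_val
            self.y_val = y_val

        def on_epoch_end(self, epoch, logs=None):
            # predict with the current weights, then turn probabilities into class labels
            probs = self.model.predict(self.X_val)
            if probs.shape[-1] == 1:            # single sigmoid output
                preds = (probs.ravel() > 0.5).astype(int)
            else:                               # softmax over several classes
                preds = np.argmax(probs, axis=-1)
            print(" - val_macro_f1:", f1_score(self.y_val, preds, average='macro'))

    # usage (hypothetical names):
    # model.fit(X_train, y_train, validation_data=(X_val, y_val),
    #           epochs=10, callbacks=[MacroF1(X_val, y_val)])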

  • 2020-11-28 23:30

    As @Diesche mentioned, the main problem with implementing f1_score this way is that it is called at every batch step, which leads to confusing results more than anything else.

    I struggled with this issue for some time but eventually worked around it with a callback: at the end of each epoch the callback predicts on the data (in this case I chose to apply it only to my validation data) with the updated model parameters, and gives you coherent metrics evaluated on the whole set.

    I'm using tensorflow-gpu (1.14.0) on Python 3.

    import numpy as np
    from tensorflow.python.keras.models import Sequential, Model
    from sklearn.metrics import f1_score, precision_score, recall_score
    from tensorflow.keras.callbacks import Callback
    from tensorflow.python.keras import optimizers


    # model is defined elsewhere (specific to the classification problem at hand)
    optimizer = optimizers.SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=['accuracy'])
    model.summary()

    class Metrics(Callback):
        def __init__(self, model, valid_data, true_outputs):
            super(Metrics, self).__init__()
            self.model = model
            self.valid_data = valid_data      # the validation data I'm getting metrics on
            self.true_outputs = true_outputs  # the ground truth of my validation data
            self.steps = len(self.valid_data)

        def on_epoch_end(self, epoch, logs=None):
            gen = generator(self.valid_data)  # generator yielding the validation data
            val_predict = np.asarray(self.model.predict(gen, batch_size=1, verbose=0, steps=self.steps))

            # from_proba_to_output (defined below) turns probabilities into hard labels
            # in a format understood by sklearn's metric functions
            val_predict = from_proba_to_output(val_predict, 0.5)
            _val_f1 = f1_score(self.true_outputs, val_predict)
            _val_precision = precision_score(self.true_outputs, val_predict)
            _val_recall = recall_score(self.true_outputs, val_predict)
            print("val_f1: ", _val_f1, "   val_precision: ", _val_precision, "   val_recall: ", _val_recall)
    

    The function from_proba_to_output goes as follows:

    def from_proba_to_output(probabilities, threshold):
        # copy so the original probability array is left untouched
        outputs = np.copy(probabilities)
        for i in range(len(outputs)):
            if float(outputs[i]) > threshold:
                outputs[i] = 1
            else:
                outputs[i] = 0
        return np.array(outputs)
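
    Not part of the original answer, but the same thresholding can be written as a single vectorized NumPy expression:

    def from_proba_to_output(probabilities, threshold):
        # vectorized version of the loop above; also flattens the result to 1-D
        return (np.asarray(probabilities).ravel() > threshold).astype(int)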
    

    I then train my model by passing this metrics class in the callbacks argument of fit_generator. I did not detail the implementation of my train_generator and valid_generator, as these data generators are specific to the classification problem at hand and posting them would only add confusion.

        model.fit_generator(
            train_generator, epochs=nbr_epochs, verbose=1, validation_data=valid_generator,
            callbacks=[Metrics(model, valid_data, true_outputs)])
    
  • 2020-11-28 23:33

    Since Keras 2.0, the metrics f1, precision, and recall have been removed. The solution is to use a custom metric function:

    from keras import backend as K
    
    def f1(y_true, y_pred):
        def recall(y_true, y_pred):
            """Recall metric.
    
            Only computes a batch-wise average of recall.
    
            Computes the recall, a metric for multi-label classification of
            how many relevant items are selected.
            """
            true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
            possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
            recall = true_positives / (possible_positives + K.epsilon())
            return recall
    
        def precision(y_true, y_pred):
            """Precision metric.
    
            Only computes a batch-wise average of precision.
    
            Computes the precision, a metric for multi-label classification of
            how many selected items are relevant.
            """
            true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
            predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
            precision = true_positives / (predicted_positives + K.epsilon())
            return precision
        precision = precision(y_true, y_pred)
        recall = recall(y_true, y_pred)
        return 2*((precision*recall)/(precision+recall+K.epsilon()))
    
    
    model.compile(loss='binary_crossentropy',
              optimizer= "adam",
              metrics=[f1])
    

    The return line of this function

    return 2*((precision*recall)/(precision+recall+K.epsilon()))
    

    was modified by adding the constant epsilon in order to avoid division by zero, so that NaN is never produced.
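
    Note that this f1 is still a batch-wise binary score, as the docstrings say. Since the question asks for macro F1, here is a hedged sketch of a batch-wise macro variant, assuming y_true is one-hot (or multi-label) encoded and predictions are thresholded at 0.5; for an exact score over the whole validation set, the callback approaches above are still preferable:

    from keras import backend as K

    def f1_macro(y_true, y_pred):
        y_pred = K.round(K.clip(y_pred, 0, 1))
        tp = K.sum(y_true * y_pred, axis=0)        # per-class true positives
        predicted = K.sum(y_pred, axis=0)          # per-class predicted positives
        possible = K.sum(y_true, axis=0)           # per-class actual positives
        precision = tp / (predicted + K.epsilon())
        recall = tp / (possible + K.epsilon())
        f1 = 2 * precision * recall / (precision + recall + K.epsilon())
        return K.mean(f1)                          # unweighted mean over classes = macro F1

    # usage, assuming a multi-class model with one-hot targets:
    # model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[f1_macro])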
