f1_score metric in lightgbm

别跟我提以往 2021-01-01 02:54

I want to train an lgb model with a custom metric: f1_score with weighted average.

I went through the advanced examples of lightgbm over here.
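
To make the question concrete, this is roughly the setup I have in mind; the toy data below is only there so the sketch runs, and lgb_f1_weighted is the kind of feval callable I am asking how to write:

import lightgbm as lgb
import numpy as np
from sklearn.metrics import f1_score

# Toy binary-classification data, only so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

def lgb_f1_weighted(y_hat, data):
    # feval signature: (predictions, Dataset) -> (name, value, is_higher_better)
    y_true = data.get_label()
    y_pred = (y_hat >= 0.5).astype(int)  # turn predicted probabilities into 0/1 labels
    return 'f1_weighted', f1_score(y_true, y_pred, average='weighted'), True

train_set = lgb.Dataset(X[:400], label=y[:400])
valid_set = lgb.Dataset(X[400:], label=y[400:], reference=train_set)

booster = lgb.train(
    {'objective': 'binary', 'verbose': -1},
    train_set,
    valid_sets=[valid_set],
    feval=lgb_f1_weighted,  # custom metric plugged in here
)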

2 Answers
  •  有刺的猬
    2021-01-01 03:31

    Regarding Toby's answer:

    import numpy as np
    from sklearn.metrics import f1_score

    def lgb_f1_score(y_hat, data):
        y_true = data.get_label()
        y_hat = np.round(y_hat)  # scikit-learn's f1_score doesn't like probabilities
        return 'f1', f1_score(y_true, y_hat), True
    

    I suggest changing the y_hat line to this:

    y_hat = np.where(y_hat < 0.5, 0, 1)  
    

    Reason: I used y_hat = np.round(y_hat) and found that during training the LightGBM model will sometimes (very unlikely, but still a chance) treat our y prediction as multiclass instead of binary.

    My speculation: sometimes the y prediction may be low or high enough to be rounded to a negative value or to 2? I'm not sure, but when I changed the code to use np.where, the bug was gone.
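
    To illustrate that speculation: if a prediction ever did fall outside [0, 1] (hypothetical values below), the two thresholding approaches behave differently:

    import numpy as np

    scores = np.array([-0.2, 0.3, 0.7, 1.6])  # hypothetical out-of-range predictions
    print(np.round(scores))                    # [-0.  0.  1.  2.] -> values other than 0/1 appear
    print(np.where(scores < 0.5, 0, 1))        # [0 0 1 1]         -> always strictly binary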

    It cost me a morning to track down this bug, although I'm not really sure whether the np.where solution is a good one.
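
    Putting the suggestion together, the metric function with that one-line change would read something like this:

    import numpy as np
    from sklearn.metrics import f1_score

    def lgb_f1_score(y_hat, data):
        y_true = data.get_label()
        y_hat = np.where(y_hat < 0.5, 0, 1)  # explicit 0/1 thresholding instead of np.round
        return 'f1', f1_score(y_true, y_hat), True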
