Tensorflow: loss decreasing, but accuracy stable

悲哀的现实 asked 2021-02-12 13:15

My team is training a CNN in TensorFlow for binary classification of damaged/acceptable parts. We created our code by modifying the CIFAR-10 example code. In my prior experience

2 Answers
  • 2021-02-12 13:53

    Here are my suggestions. One possible problem is that your network has started to memorize the training data; in that case, yes, you should increase regularization.

    Update: I want to mention one more problem that may cause this: the class balance in your validation set is far from the balance in your training set. As a first step, I would recommend trying to understand what your test data (the real-world data your model will face at inference time) looks like descriptively: its class balance and other similar characteristics. Then build train/validation sets with roughly the same descriptive statistics you observe for the real data.
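    A minimal sketch of the balance check described above, using NumPy (the label arrays and the `class_balance` helper are illustrative, not part of the question's code):

    ```python
    import numpy as np

    def class_balance(labels):
        """Fraction of positive (label 1) examples in a binary label array."""
        return np.asarray(labels).mean()

    # Hypothetical label arrays, just for illustration.
    train_labels = np.array([1, 0, 0, 0, 1, 0, 0, 0])  # 25% positive
    val_labels = np.array([1, 1, 1, 0])                # 75% positive

    # A large gap between these two ratios means the validation set is not
    # representative of the training distribution.
    print(class_balance(train_labels), class_balance(val_labels))  # 0.25 0.75
    ```

    If the gap is large, re-split the data (e.g. with a stratified split) so both sets share the real-world class ratio.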

  • 2021-02-12 14:01

    A decrease in binary cross-entropy loss does not imply an increase in accuracy. Consider label 1, predictions 0.2, 0.4, and 0.6 at timesteps 1, 2, 3, and a classification threshold of 0.5. Timesteps 1 and 2 produce a decrease in loss but no increase in accuracy.
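    The numbers in that example can be checked directly in plain Python (the `bce_loss` helper is just the standard binary cross-entropy formula, written out for illustration):

    ```python
    import math

    def bce_loss(y_true, p):
        """Binary cross-entropy for a single example with predicted probability p."""
        return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

    label = 1
    predictions = [0.2, 0.4, 0.6]  # model outputs at timesteps 1, 2, 3
    threshold = 0.5

    for t, p in enumerate(predictions, start=1):
        loss = bce_loss(label, p)
        correct = (p >= threshold) == bool(label)
        print(f"timestep {t}: loss={loss:.4f}, correct={correct}")
    # timestep 1: loss=1.6094, correct=False
    # timestep 2: loss=0.9163, correct=False
    # timestep 3: loss=0.5108, correct=True
    ```

    The loss falls at every step, but the thresholded prediction (and hence accuracy) only changes once the probability crosses 0.5.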

    Ensure that your model has enough capacity by checking that it can overfit the training data. If the model is overfitting the training data, counter it with regularization techniques such as dropout, L1/L2 weight penalties, and data augmentation.
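    The math behind two of those techniques can be sketched in a few lines of NumPy; these helpers are illustrative stand-ins, not the TensorFlow API (in practice you would use `tf.keras.layers.Dropout` and `tf.keras.regularizers.l2`):

    ```python
    import numpy as np

    def inverted_dropout(activations, rate, rng):
        """Zero out a fraction `rate` of activations and rescale the survivors,
        so the expected activation magnitude is unchanged (training-time only)."""
        keep_prob = 1.0 - rate
        mask = rng.random(activations.shape) < keep_prob
        return activations * mask / keep_prob

    def l2_penalty(weights, lam):
        """L2 regularization term added to the loss: lam * sum of squared weights."""
        return lam * np.sum(weights ** 2)

    rng = np.random.default_rng(0)
    acts = np.ones((4, 4))
    # Surviving activations are scaled up to 1/keep_prob = 2.0; the rest are zero.
    dropped = inverted_dropout(acts, rate=0.5, rng=rng)

    w = np.array([1.0, -2.0, 3.0])
    penalty = l2_penalty(w, lam=0.01)  # 0.01 * (1 + 4 + 9) = 0.14
    ```

    The L2 penalty discourages large weights, and dropout prevents the network from relying on any single activation, both of which push back against memorization.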

    Lastly, confirm that your validation data and training data come from the same distribution.
