Tensorflow: loss decreasing, but accuracy stable

Submitted by 核能气质少年 on 2020-08-22 03:24:33

Question


My team is training a CNN in Tensorflow for binary classification of damaged/acceptable parts. We created our code by modifying the cifar10 example code. In my prior experience with neural networks, I always trained until the loss was very close to 0 (well below 1). However, we are now evaluating our model with a validation set during training (on a separate GPU), and it seems like the precision stopped increasing after about 6.7k steps, while the loss is still dropping steadily after over 40k steps. Is this due to overfitting? Should we expect to see another spike in accuracy once the loss is very close to zero? The current max accuracy is not acceptable. Should we kill it and keep tuning? What do you recommend? Here are our modified code and graphs of the training process.

https://gist.github.com/justineyster/6226535a8ee3f567e759c2ff2ae3776b

Precision and Loss Images


Answer 1:


A decrease in binary cross-entropy loss does not imply an increase in accuracy. Consider a true label of 1, predictions of 0.2, 0.4, and 0.6 at timesteps 1, 2, and 3, and a classification threshold of 0.5. The step from timestep 1 to timestep 2 decreases the loss, but the accuracy is unchanged: both predictions are still below the threshold and classified as 0. Only at timestep 3, when the prediction crosses the threshold, does the accuracy improve.
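To make this concrete, here is a minimal sketch in plain Python (no TensorFlow required) that computes the per-example binary cross-entropy and thresholded prediction for those three timesteps:

```python
import math

# Worked example from above: true label 1, classification threshold 0.5.
label, threshold = 1.0, 0.5
predictions = [0.2, 0.4, 0.6]  # model outputs at timesteps 1, 2, 3

for t, p in enumerate(predictions, start=1):
    # Binary cross-entropy for one example: -[y*log(p) + (1-y)*log(1-p)]
    loss = -(label * math.log(p) + (1 - label) * math.log(1 - p))
    predicted_class = 1 if p >= threshold else 0
    print(f"timestep {t}: loss={loss:.3f}, predicted class={predicted_class}")

# timestep 1: loss=1.609, predicted class=0
# timestep 2: loss=0.916, predicted class=0   <- loss fell, accuracy did not
# timestep 3: loss=0.511, predicted class=1
```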

First ensure that your model has enough capacity by deliberately overfitting the training data. Once it can overfit, rein the overfitting in with regularization techniques such as dropout, L1/L2 regularization, and data augmentation.
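The asker's gist uses the TF 1.x cifar10-style API; purely as an illustration (the `tf.keras` API, layer sizes, and input shape here are my assumptions, not the asker's setup), a regularized model might look like:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Sketch of the suggested regularizers (dropout + L2 weight decay) in Keras.
model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4),  # L2 penalty
                  input_shape=(64, 64, 3)),                  # placeholder shape
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),  # randomly drops units during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Data augmentation (e.g. random flips or crops of the part images) can be added in the input pipeline as well.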

Finally, confirm that your validation data and training data come from the same distribution.
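One cheap way to sanity-check this is to compare summary statistics of the two splits. A sketch, assuming your images are NumPy arrays of shape `(N, H, W, C)` (the names `x_train`/`x_val` are placeholders):

```python
import numpy as np

def channel_stats(x: np.ndarray):
    """Per-channel mean and std for an image batch of shape (N, H, W, C)."""
    return x.mean(axis=(0, 1, 2)), x.std(axis=(0, 1, 2))

# Usage (x_train / x_val stand in for your own arrays):
#   print("train mean/std:", channel_stats(x_train))
#   print("val   mean/std:", channel_stats(x_val))
# Large gaps between the two suggest a train/validation distribution mismatch.
```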




Answer 2:


Here are my suggestions. One possible problem is that your network has started to memorize the training data, so yes, you should increase regularization.

And yes, kill the run: a training loss that keeps decreasing while validation precision stays flat suggests your network's capacity is low (a weak model), so try going deeper.
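If you do go deeper, a minimal sketch of stacking extra conv blocks (again using `tf.keras` with placeholder filter counts and input shape, as an assumption):

```python
import tensorflow as tf
from tensorflow.keras import layers

# "Going deeper": each extra conv block adds capacity. Sizes are illustrative.
model = tf.keras.Sequential([tf.keras.Input(shape=(64, 64, 3))])
for filters in (32, 64, 128):
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D())
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(1, activation="sigmoid"))  # binary damaged/OK head
```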



Source: https://stackoverflow.com/questions/43499199/tensorflow-loss-decreasing-but-accuracy-stable
