Validation accuracy is always greater than training accuracy in Keras


From the Keras FAQ:

Why is the training loss much higher than the testing loss?

A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time.
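You can see this mode switch directly. Below is a minimal sketch using tf.keras (the rate 0.5 and the dummy all-ones input are arbitrary choices for illustration):

```python
import numpy as np
import tensorflow as tf

# The same Dropout layer behaves differently depending on the `training` flag.
layer = tf.keras.layers.Dropout(rate=0.5)
x = np.ones((1, 8), dtype="float32")

# training=True: roughly half the units are zeroed, the rest scaled up by 1/(1-rate)
print(layer(x, training=True).numpy())

# training=False (the default during evaluate/predict): identity pass-through
print(layer(x, training=False).numpy())
```

During `model.fit` the first branch is active; during `model.evaluate` the second is, which is one reason the two losses are not directly comparable.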

Besides, the training loss is the average of the losses over each batch of training data. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
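A tiny worked example of that averaging effect, with made-up loss values:

```python
# Illustrative numbers only: per-batch training losses over one epoch.
# Early batches are computed with weaker, not-yet-updated weights.
batch_losses = [0.9, 0.7, 0.5, 0.4, 0.3]

# What Keras reports as the epoch's training loss: the average over all
# batches, which still includes the high early-batch losses.
epoch_train_loss = sum(batch_losses) / len(batch_losses)  # 0.56

# The validation loss is computed once, with the end-of-epoch model,
# so it reflects only the final (better) weights -- closer to 0.3 here.
print(epoch_train_loss)
```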

So the behaviour you see is not as unusual as it might seem after reading ML theory. This also explains why, when you evaluate both the training and the test set on the same model, you suddenly do get the expected behaviour (train acc > val acc). I would guess that in your case it is especially the presence of dropout that prevents the accuracy from reaching 1.0 during training, while it is achieved during evaluation (testing).
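To check this on your own model, a sketch (here `model`, `x_train`, `y_train`, `x_val`, `y_val` stand in for your own objects, and the model is assumed to be compiled with `metrics=['accuracy']`):

```python
# Both calls run in test mode (dropout off), so they are comparable.
train_loss, train_acc = model.evaluate(x_train, y_train, verbose=0)
val_loss, val_acc = model.evaluate(x_val, y_val, verbose=0)
print(f"train acc: {train_acc:.4f}  val acc: {val_acc:.4f}")
```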

You can investigate further by adding a callback that saves your model at every epoch. Then you can evaluate each of the saved models on both sets to recreate your plots.
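A sketch of that, using tf.keras's `ModelCheckpoint` (the file paths, epoch count, and data names are placeholders for your own setup):

```python
import tensorflow as tf

# Save a snapshot of the model after every epoch.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="model_epoch_{epoch:02d}.h5",
    save_freq="epoch",
)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=20,
          callbacks=[checkpoint])

# Re-evaluate every snapshot on both sets, entirely in test mode,
# to recreate the accuracy curves without the train-mode artefacts.
train_accs, val_accs = [], []
for epoch in range(1, 21):
    m = tf.keras.models.load_model(f"model_epoch_{epoch:02d}.h5")
    train_accs.append(m.evaluate(x_train, y_train, verbose=0)[1])
    val_accs.append(m.evaluate(x_val, y_val, verbose=0)[1])
```

Plotting `train_accs` against `val_accs` should then show the conventional relationship (train acc above val acc), since both curves come from the same saved weights evaluated the same way.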
