I don't understand which accuracy in the output to use to compare my 2 Keras models to see which one is better.
Do I use the "acc" (from the training data?) one or the "val acc" (from the validation data?) one?
If you want to estimate the ability of your model to generalize to new data (which is probably what you want to do), then look at the validation accuracy: the validation split contains only data that the model never sees during training, and therefore cannot just memorize.
If your training data accuracy ("acc") keeps improving while your validation data accuracy ("val_acc") gets worse, you are likely in an overfitting situation, i.e. your model is starting to simply memorize the training data.
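A minimal sketch of that check, assuming you kept the `history` object returned by `model.fit` (the per-epoch values below are made-up illustration numbers, and `looks_overfit` is just a hypothetical helper):

```python
# Hypothetical per-epoch curves, shaped like history.history after model.fit.
history = {
    "acc":     [0.60, 0.70, 0.78, 0.85, 0.91],  # training accuracy keeps rising
    "val_acc": [0.62, 0.68, 0.71, 0.69, 0.66],  # validation accuracy peaks, then falls
}

def looks_overfit(history, patience=2):
    """Heuristic: training accuracy rose while validation accuracy
    fell over the last `patience` epoch-to-epoch transitions."""
    acc, val = history["acc"], history["val_acc"]
    if len(acc) <= patience:
        return False
    start = len(acc) - patience - 1
    train_rising = all(acc[i] < acc[i + 1] for i in range(start, len(acc) - 1))
    val_falling = all(val[i] > val[i + 1] for i in range(start, len(val) - 1))
    return train_rising and val_falling

print(looks_overfit(history))  # the curves above show the overfitting pattern
```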
There is a different acc and val_acc for each epoch. How do I know the acc or val_acc of my model as a whole? Do I average the values across all epochs to get the accuracy of the model as a whole?
Each epoch is a training run over all of your data. During that run the parameters of your model are adjusted according to your loss function. The result is a set of parameters with a certain ability to generalize to new data, and that ability is reflected by the validation accuracy.

So think of every epoch as its own model, which can get better or worse if it is trained for another epoch. Whether it got better or worse is judged by the change in validation accuracy (better = validation accuracy increased). Therefore pick the model of the epoch with the highest validation accuracy. Don't average the accuracies over different epochs; that wouldn't make much sense. You can use the Keras callback ModelCheckpoint to automatically save the model with the highest validation accuracy (see the callbacks documentation).
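For example, picking the best epoch out of a (made-up) history by hand, with the equivalent ModelCheckpoint call sketched in comments — note that recent Keras versions log the metric as `val_accuracy` rather than `val_acc`:

```python
# Hypothetical validation accuracies per epoch, as in history.history["val_acc"].
val_acc = [0.7012, 0.7421, 0.7737, 0.7593, 0.7544]

# "The model" is the state after the best epoch, not an average over epochs.
best_epoch = max(range(len(val_acc)), key=lambda i: val_acc[i])
print(best_epoch + 1, val_acc[best_epoch])  # epoch 3, val_acc 0.7737

# ModelCheckpoint automates this during training (the monitor name is
# "val_accuracy" in newer Keras versions, "val_acc" in older ones):
# from keras.callbacks import ModelCheckpoint
# checkpoint = ModelCheckpoint("best.h5", monitor="val_acc",
#                              save_best_only=True, mode="max")
# model.fit(x, y, validation_split=0.2, epochs=10, callbacks=[checkpoint])
```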
The highest validation accuracy in model 1 is 0.7737 and the highest one in model 2 is 0.7572. Therefore you should view model 1 (at epoch 3) as better, though it is possible that the 0.7737 was just a random outlier.
You need to key on decreasing val_loss or increasing val_acc; ultimately it doesn't matter much. The differences are well within random/rounding errors.
In practice, the training loss can drop significantly due to over-fitting, which is why you want to look at validation loss.
In your case, you can see that your training loss is not dropping, which means you are learning nothing after each epoch. It looks like there's nothing to learn in this model, aside from some trivial linear-like fit or cutoff value.
Also, when learning nothing, or only a trivial linear thing, you should see similar performance on training and validation (trivial learning is always generalizable). You should probably shuffle your data before using the validation_split feature.
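The reason is that `validation_split` slices off the *last* fraction of your arrays without shuffling, so if your data is sorted or grouped, the validation set is not representative. A sketch of shuffling first (the arrays `x` and `y` are placeholders for your own data):

```python
import numpy as np

# Placeholder data; substitute your own feature matrix and labels.
x = np.arange(20).reshape(10, 2).astype("float32")
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # sorted labels: the worst case

# Keras' validation_split takes the tail of the arrays as-is, so shuffle
# x and y together with one shared permutation before fitting.
rng = np.random.default_rng(seed=0)
perm = rng.permutation(len(x))
x, y = x[perm], y[perm]

# Now the tail of the arrays (the validation split) is a random sample:
# model.fit(x, y, validation_split=0.2, epochs=10)
```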