Why are val_loss and val_acc not displaying?

Submitted by 六月ゝ 毕业季﹏ on 2021-02-19 03:19:38

Question


When training starts, the run window displays only loss and acc; val_loss and val_acc are missing. Only at the end of the epoch are these values shown.

model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation="softmax"))

model.compile(
    loss='categorical_crossentropy',
    optimizer="adam",
    metrics=['accuracy']
)

model.fit(
    x_train,
    y_train,
    batch_size=32, 
    epochs=1, 
    validation_data=(x_test, y_test),
    shuffle=True
)

This is how the training starts:

Train on 50000 samples, validate on 10000 samples
Epoch 1/1

   32/50000 [..............................] - ETA: 34:53 - loss: 2.3528 - acc: 0.0938
   64/50000 [..............................] - ETA: 18:56 - loss: 2.3131 - acc: 0.0938
   96/50000 [..............................] - ETA: 13:45 - loss: 2.3398 - acc: 0.1146

And this is when it finishes:

49984/50000 [============================>.] - ETA: 0s - loss: 1.5317 - acc: 0.4377
50000/50000 [==============================] - 231s 5ms/step - loss: 1.5317 - acc: 0.4378 - val_loss: 1.1503 - val_acc: 0.5951

I want to see val_loss and val_acc on each line.


Answer 1:


Validation loss and accuracy are computed at the end of each epoch, not at the end of each batch. If you want to compute those values after each batch, you would have to implement your own callback with an on_batch_end() method and call self.model.evaluate() on the validation set there. See https://keras.io/callbacks/.
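A minimal sketch of such a callback, assuming the tf.keras API (the class name ValBatchLogger and the history attribute are illustrative, not part of Keras):

```python
import numpy as np
from tensorflow import keras


class ValBatchLogger(keras.callbacks.Callback):
    """Illustrative callback: evaluate on a held-out set after every training batch (slow!)."""

    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val
        self.history = []  # one (val_loss, val_acc) tuple per training batch

    def on_batch_end(self, batch, logs=None):
        # Runs a full evaluation pass over the validation data after each batch.
        val_loss, val_acc = self.model.evaluate(self.x_val, self.y_val, verbose=0)
        self.history.append((val_loss, val_acc))
        print(f" batch {batch}: val_loss={val_loss:.4f} val_acc={val_acc:.4f}")
```

You would then pass it to fit, e.g. `model.fit(x_train, y_train, batch_size=32, epochs=1, callbacks=[ValBatchLogger(x_test, y_test)])`.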

But computing the validation loss and accuracy after each batch is going to slow down your training a lot and doesn't bring much in terms of evaluating the network's performance.




Answer 2:


It doesn't make much sense to compute the validation metrics at each iteration, because it would make your training process much slower, and your model doesn't change that much from iteration to iteration. On the other hand, it makes much more sense to compute these metrics at the end of each epoch.

In your case you have 50000 training samples, 10000 validation samples, and a batch size of 32. If you were to compute val_loss and val_acc after each iteration, then for every weight update on 32 training samples you would also run 313 (i.e. 10000/32, rounded up) validation batches. Since each epoch consists of 1563 iterations (i.e. 50000/32, rounded up), you would perform 489219 (i.e. 313 * 1563) extra batch predictions just to evaluate the model. This would make your training several orders of magnitude slower!
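The arithmetic above can be checked in a couple of lines:

```python
import math

train_samples, val_samples, batch_size = 50000, 10000, 32

val_batches = math.ceil(val_samples / batch_size)         # 313 validation batches per evaluation
train_iterations = math.ceil(train_samples / batch_size)  # 1563 weight updates per epoch
total_val_predictions = val_batches * train_iterations    # 489219 extra batch predictions per epoch

print(val_batches, train_iterations, total_val_predictions)  # 313 1563 489219
```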


If you still want to compute the validation metrics at the end of each iteration (not recommended, for the reasons stated above), you could simply shorten your "epoch" so that your model sees just 1 batch per epoch:

model.fit(
    x_train,
    y_train,
    batch_size=32,
    epochs=len(x_train) // 32 + 1,  # 1563 in your case
    steps_per_epoch=1,
    validation_data=(x_test, y_test),
    shuffle=True
)

This isn't exactly equivalent, because the samples will be drawn at random, with replacement, from the training data, but it is the easiest workaround you can get.



Source: https://stackoverflow.com/questions/55746382/why-val-loss-and-val-acc-are-not-displaying
