Question:
I'd like to know the specificity and sensitivity of my model. Currently, I'm evaluating the model after all epochs are finished:
import numpy as np
from sklearn.metrics import confusion_matrix

# Predict on the held-out set and collapse one-hot vectors to class indices
predictions = model.predict(x_test)
y_test = np.argmax(y_test, axis=-1)
predictions = np.argmax(predictions, axis=-1)

c = confusion_matrix(y_test, predictions)
print('Confusion matrix:\n', c)
# Row i of c holds the counts for true class i, so these are per-class recalls
print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))
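(For what it's worth: since those two lines are just the per-class recalls of a binary confusion matrix, the same numbers can be cross-checked directly with sklearn's recall_score. A minimal sketch, assuming the labels are 0/1 and y_test/predictions are the class indices computed above:)

from sklearn.metrics import recall_score

# recall of class 0 == the 'sensitivity' printed above
print('sensitivity', recall_score(y_test, predictions, pos_label=0))
# recall of class 1 == the 'specificity' printed above
print('specificity', recall_score(y_test, predictions, pos_label=1))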
The disadvantage of this approach is that I only get the output I care about after training has finished. I would prefer to get these metrics every 10 epochs or so.
BTW: I tried the metrics=[] argument, but it didn't give me what I need. Possibly a callback is the way to go?
Answer 1:
A custom Callback would be a nice solution, giving you enough control over the training procedure. Something along the lines of:
from keras.callbacks import Callback

class SensitivitySpecificityCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        if epoch % 10 == 1:  # epochs are 0-indexed, so this fires on epochs 1, 11, 21, ...
            x_test = self.validation_data[0]
            y_test = self.validation_data[1]
            # equivalently: x_test, y_test = self.validation_data[:2]
            predictions = self.model.predict(x_test)
            y_test = np.argmax(y_test, axis=-1)
            predictions = np.argmax(predictions, axis=-1)
            c = confusion_matrix(y_test, predictions)
            print('Confusion matrix:\n', c)
            print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
            print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))
where epoch is the epoch number and logs contains the usual metrics plus the loss of the model being trained.
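(As an aside, a minimal sketch of what reading those two arguments looks like, assuming the standard Keras log keys such as 'loss' and 'val_loss', which are present when validation_data is supplied:)

from keras.callbacks import Callback

class LogPeek(Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # epoch is 0-indexed; logs maps metric names to this epoch's values
        print('epoch %d: loss=%s, val_loss=%s'
              % (epoch, logs.get('loss'), logs.get('val_loss')))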
Then run the training with the callback attached:
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          shuffle='batch',  # 'batch' shuffles in batch-sized chunks; meant for HDF5-backed data
          validation_data=(x_test, y_test),
          callbacks=[SensitivitySpecificityCallback()])
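(One caveat: self.validation_data is only populated on older Keras versions; newer tf.keras releases removed it from Callback. A sketch of a variant that receives the validation set explicitly instead, under that assumption:)

from keras.callbacks import Callback
import numpy as np
from sklearn.metrics import confusion_matrix

class SensitivitySpecificityCallback(Callback):
    def __init__(self, x_val, y_val):  # validation data passed in explicitly
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        if epoch % 10 == 1:
            predictions = np.argmax(self.model.predict(self.x_val), axis=-1)
            labels = np.argmax(self.y_val, axis=-1)
            c = confusion_matrix(labels, predictions)
            print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
            print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))

and attach it with callbacks=[SensitivitySpecificityCallback(x_test, y_test)].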
NOTE: if you don't like how your model is training based on these metrics, you can cut the training short with:
self.model.stop_training = True
which will stop the training for you.
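(For example, a hedged sketch of wiring that into the callback above; the class name and the 0.5 threshold are illustrative placeholders, not part of the original answer:)

from keras.callbacks import Callback
import numpy as np
from sklearn.metrics import confusion_matrix

class AbortOnLowSensitivity(Callback):
    def on_epoch_end(self, epoch, logs=None):
        x_val, y_val = self.validation_data[0], self.validation_data[1]
        predictions = np.argmax(self.model.predict(x_val), axis=-1)
        labels = np.argmax(y_val, axis=-1)
        c = confusion_matrix(labels, predictions)
        sensitivity = c[0, 0] / (c[0, 1] + c[0, 0])
        if sensitivity < 0.5:  # arbitrary placeholder threshold
            self.model.stop_training = True  # the fit loop checks this flag after each epoch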
Source: https://stackoverflow.com/questions/50568409/report-keras-model-evaluation-metrics-every-10-epochs