How can we perform early stopping with train_on_batch?

刺人心 2021-01-24 00:21

I run the epochs manually in a loop, with the mini-batches further nested inside it. At each mini-batch, I need to call train_on_batch, to enable the training o…

1 Answer
  •  一整个雨季
    2021-01-24 00:33

    In practice, 'early stopping' is largely done via: (1) train for X epochs, (2) save the model each time it achieves a new best performance, (3) after training, select the saved best model. "Best performance" is defined as the highest (e.g. accuracy) or lowest (e.g. loss) validation metric seen so far - example script below:

    best_val_loss = float('inf')  # init high since 'best' means lowest; use float('-inf') if higher is better
    num_epochs = 5
    epoch = 0
    
    while epoch < num_epochs:
        model.train_on_batch(x_train, y_train)  # get x, y somewhere in the loop
        val_loss = model.evaluate(x_val, y_val)
    
        if val_loss < best_val_loss:
            best_val_loss = val_loss  # track the new best, so later epochs are compared against it
            model.save(best_model_path)  # OR model.save_weights()
            print("Best model w/ val loss {} saved to {}".format(val_loss, best_model_path))
        # ...
        epoch += 1
    
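    An alternative to writing checkpoints to disk is keeping the best weights in memory and restoring them once training ends (in Keras, via model.get_weights() / model.set_weights()). A framework-free sketch of that bookkeeping, with plain lists standing in for weight tensors and a hard-coded loss sequence purely for illustration:

```python
import copy

best_val_loss = float("inf")
best_weights = None

# Simulated (weights, val_loss) per epoch; in Keras these would come from
# model.get_weights() and model.evaluate()
epochs = [([0.1, 0.2], 0.8), ([0.3, 0.1], 0.5), ([0.2, 0.2], 0.6)]

for weights, val_loss in epochs:
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        best_weights = copy.deepcopy(weights)  # snapshot, not a reference

# After training: restore the best snapshot (model.set_weights(best_weights))
print(best_val_loss, best_weights)  # -> 0.5 [0.3, 0.1]
```

    The deepcopy matters: without it, later in-place weight updates would silently overwrite the saved "best" snapshot.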

    See saving Keras models. If you'd rather early-stop directly, define some metric - i.e. a condition - that ends the training loop. For example,

    while True:
        loss = model.train_on_batch(...)
        if loss < .02:
            break
    
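    A more robust stopping condition than a fixed loss threshold is "patience": stop once the validation loss has not improved for N consecutive epochs. A minimal sketch of the condition alone (the hard-coded history stands in for values returned by model.evaluate; all names are illustrative):

```python
def should_stop(val_losses, patience=3):
    """Return True once the best (lowest) val loss is `patience`+ epochs old."""
    if len(val_losses) <= patience:
        return False
    best_epoch = val_losses.index(min(val_losses))
    return len(val_losses) - 1 - best_epoch >= patience

# Simulated validation losses: improvement stalls after the third epoch
history = [0.9, 0.5, 0.4, 0.45, 0.43, 0.44]
stopped_at = next(i for i in range(1, len(history) + 1)
                  if should_stop(history[:i], patience=3))
print(stopped_at)  # -> 6: training stops after the 6th epoch
```

    Inside the while-loop above, you would append each epoch's val_loss to a list and break when should_stop returns True.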
