Question
Early stopping is turned on by default for h2o.deeplearning(). But, from R, how do I find out whether it stopped early, and how many epochs were actually run? I've tried this:
model = h2o.deeplearning(...)
print(model)
which tells me information about the layers, the MSE, R2, etc., but nothing about how many epochs were run.
Over on Flow I can see the information (e.g. where the x-axis stops in the "Scoring History - Deviance" chart, or in the Scoring History table).
Answer 1:
If your model is called m, then to get just the number of epochs trained: last(m@model$scoring_history$epochs)
To see what other information is available (which is literally everything you can see in the Flow interface) and how to access it, use str(m).
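Note that last() is not a base R function (it is exported by packages such as dplyr and xts), so if it is not on your search path you can use base R's tail() instead. A minimal sketch, using a mock scoring-history data frame in place of a trained model:

```r
# Mock of m@model$scoring_history; with a real model, read the slot directly.
scoring_history <- data.frame(
  epochs     = c(0, 10, 100),
  iterations = c(0, 1, 10)
)

# The number of epochs actually trained is the epochs value in the last scored row.
epochs_trained <- tail(scoring_history$epochs, n = 1)
print(epochs_trained)  # 100
```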
Also be aware of this command: summary(m). In addition to what is shown with print(m), it adds this section (for a deep learning model):
Scoring History:
            timestamp   duration training_speed    epochs iterations       samples training_MSE training_deviance training_r2
1 2016-04-14 11:35:46  0.000 sec                  0.00000          0      0.000000
2 2016-04-14 11:35:52  5.218 sec 15139 rows/sec  10.00000          1  77150.000000      0.00000           0.00000     0.07884
...
7 2016-04-14 11:36:18 31.346 sec 25056 rows/sec 100.00000         10 771500.000000      0.00000           0.00000     0.72245
That is, you can see the total number of epochs by looking at the last row.
BTW, this is different from h2o's summary() command when applied to a data frame; in that case it behaves like R's built-in summary() function and shows statistics for each column in the data frame.
Answer 2:
I'm quite confident that Darren Cook's answer is valid only when overwrite_with_best_model=FALSE. However, this parameter is set to TRUE by default, so the previous answer can be quite misleading, for reasons you can partially find here. You can check what I mean in the following output, obtained by tuning the network with h2o.grid and using m@model$scoring_history as Darren suggested.
   epochs validation_classification_error
  0.00000                         0.46562
  1.43150                         0.50000
100.31780                         0.46562
As you can see, when overwrite_with_best_model=TRUE the function saves the best model found during training as the final model, so Darren's solution always corresponds to the maximum number of epochs, not the epoch of the model actually returned. Assuming that you are tuning your model, I recommend the following solution:
epochsList = m@model$scoring_history$epochs
bestEpochIndex = which.min(m@model$scoring_history$validation_classification_error)
bestEpoch = epochsList[bestEpochIndex]
print(sprintf("The best epoch is: %f", bestEpoch))  # epochs is numeric, so use %f; %d raises an error in R
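Since the selection hinges on which.min(), note that ties resolve to the first (earliest) row. A minimal, self-contained sketch of the same approach, using a mock scoring history built from the illustrative values shown earlier instead of a trained model:

```r
# Mock of m@model$scoring_history with the two columns used in the answer.
scoring_history <- data.frame(
  epochs                          = c(0.00000, 1.43150, 100.31780),
  validation_classification_error = c(0.46562, 0.50000, 0.46562)
)

# which.min() returns the FIRST index attaining the minimum, so the tie
# between epoch 0 and epoch 100.3178 resolves to the earliest one.
best_epoch_index <- which.min(scoring_history$validation_classification_error)
best_epoch <- scoring_history$epochs[best_epoch_index]
print(sprintf("The best epoch is: %.5f", best_epoch))  # "The best epoch is: 0.00000"
```

In these (tied) data the earliest row wins, which may or may not be what you want; with real validation errors the minimum is usually unique.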
Source: https://stackoverflow.com/questions/36620585/how-do-know-how-many-deep-learning-epochs-were-done-from-r