Question
In my model evaluation step I would like to get model predictions for the validation data and then apply an algorithm that simulates some real-world scenarios based on the validation data and those predictions.
In my case the evaluation algorithm depends not only on the true target values (y_true) and the predictions (y_pred), but also on the input validation data (X) to produce a final model score. It therefore seems that I cannot use an estimator with a custom metric for my use case.
Implementing the evaluation/scoring algorithm itself is trivial, but how can I pass its output back to ML Engine's hyperparameter tuning job so that it can actually optimise the hyperparameters and report the best hyperparameter values at the end of the tuning job?
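For concreteness, the kind of scoring function meant here might look like the following minimal sketch (the function name, the cost column, and the weighting scheme are all hypothetical, not part of any API):

import numpy as np

def scenario_score(X_val, y_true, y_pred):
    # Hypothetical scenario simulation: weight each prediction error by a
    # cost feature taken from the validation inputs themselves.
    cost = np.abs(X_val[:, 0])            # assume column 0 carries a per-row cost
    weighted_error = np.abs(y_true - y_pred) * cost
    return float(-weighted_error.mean())  # higher is better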
Answer 1:
Once you have implemented the evaluation/scoring algorithm, use the hypertune package to report the metric value:
import hypertune
hpt = hypertune.HyperTune()
# every time you evaluate, write out the evaluation metric
eval_output_value = your_evaluation_algo(...)
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='mymetric',
    metric_value=eval_output_value,
    global_step=0)
Then, specify the metric tag above ('mymetric') as the hyperparameterMetricTag in the hyperparameter tuning configuration you submit to CMLE, so the service knows which metric to optimise.
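For example, the relevant part of the tuning configuration might look like this minimal sketch (the learning_rate parameter and the trial counts are placeholders, not something prescribed by the answer):

trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: mymetric
    maxTrials: 20
    maxParallelTrials: 2
    params:
      - parameterName: learning_rate
        type: DOUBLE
        minValue: 0.001
        maxValue: 0.1
        scaleType: UNIT_LOG_SCALE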
You can install hypertune from PyPI:
pip install cloudml-hypertune
In the setup.py of your trainer package, make sure to specify the hypertune package:
install_requires=[
    ...,  # other dependencies
    'cloudml-hypertune',  # Required for hyperparameter tuning.
],
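Putting the pieces together, a trainer entry point could look roughly like the sketch below; the synthetic data, the GradientBoostingRegressor, and the cost-weighted score are stand-ins for your own pipeline, while the hypertune call and the command-line hyperparameter flag follow the setup above:

import argparse

import hypertune
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split


def main():
    parser = argparse.ArgumentParser()
    # CMLE passes each tuned hyperparameter to the trainer as a command-line flag.
    parser.add_argument('--learning_rate', type=float, default=0.1)
    args, _ = parser.parse_known_args()

    # Placeholder data; replace with your own data loading.
    X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    model = GradientBoostingRegressor(learning_rate=args.learning_rate)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_val)

    # Custom, X-dependent score (same idea as the scenario_score sketch above).
    cost = np.abs(X_val[:, 0])
    score = float(-(np.abs(y_val - y_pred) * cost).mean())

    # Report the score so the hyperparameter tuning service can optimise it.
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag='mymetric',
        metric_value=score,
        global_step=0)


if __name__ == '__main__':
    main()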
See https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/blogs/sklearn/babyweight for an example that uses scikit-learn, and so cannot rely on TensorFlow's Estimator API to write out evaluation metrics.
Source: https://stackoverflow.com/questions/53788942/how-to-tune-hyperparameters-using-custom-model-evaluation-algorithm