How to save the best hyperopt-optimized Keras model and its weights?


Question


I optimized my Keras model using hyperopt. Now how do I save the best optimized Keras model and its weights to disk?

My code:

from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from sklearn.metrics import roc_auc_score
import sys

X = []      # training features (placeholder)
y = []      # training labels (placeholder)
X_val = []  # validation features (placeholder)
y_val = []  # validation labels (placeholder)

space = {'choice': hp.choice('num_layers',
                    [{'layers': 'two'},
                     {'layers': 'three',
                      'units3': hp.uniform('units3', 64, 1024),
                      'dropout3': hp.uniform('dropout3', .25, .75)}
                    ]),

         'units1': hp.choice('units1', [64, 1024]),
         'units2': hp.choice('units2', [64, 1024]),

         'dropout1': hp.uniform('dropout1', .25, .75),
         'dropout2': hp.uniform('dropout2', .25, .75),

         # hp.uniform returns a float; cast to int before passing it to fit()
         'batch_size': hp.uniform('batch_size', 20, 100),

         'nb_epochs': 100,
         'optimizer': hp.choice('optimizer', ['adadelta', 'adam', 'rmsprop']),
         'activation': 'relu'
        }

def f_nn(params):
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation

    print('Params testing: ', params)
    model = Sequential()
    model.add(Dense(units=params['units1'], input_dim=X.shape[1]))
    model.add(Activation(params['activation']))
    model.add(Dropout(params['dropout1']))

    model.add(Dense(units=params['units2'], kernel_initializer='glorot_uniform'))
    model.add(Activation(params['activation']))
    model.add(Dropout(params['dropout2']))

    if params['choice']['layers'] == 'three':
        model.add(Dense(units=params['choice']['units3'], kernel_initializer='glorot_uniform'))
        model.add(Activation(params['activation']))
        model.add(Dropout(params['choice']['dropout3']))

    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    # the optimizer is passed as a string, so no optimizer imports are needed
    model.compile(loss='binary_crossentropy', optimizer=params['optimizer'])

    model.fit(X, y, epochs=params['nb_epochs'], batch_size=int(params['batch_size']), verbose=0)

    pred_auc = model.predict(X_val, batch_size=128, verbose=0)
    acc = roc_auc_score(y_val, pred_auc)
    print('AUC:', acc)
    sys.stdout.flush()
    return {'loss': -acc, 'status': STATUS_OK}


trials = Trials()
best = fmin(f_nn, space, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:', best)

Answer 1:


I don't know how to pass a variable to f_nn or another hyperopt objective explicitly, but I've used two approaches to do the same task.
The first approach is a global variable (I don't like it, because it's unclear), and the second is to save the metric value to a file, then read it back and compare it with the current metric. The latter approach seems better to me.
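For completeness, here is a minimal sketch of the first (global-variable) approach, which the answer mentions but does not show; the variable name best_auc is my own:

# Minimal sketch of the global-variable approach (not shown in the answer).
# best_auc is a module-level variable, reset before each fmin() run.
best_auc = -1.0

def f_nn(params):
    global best_auc
    # ... build and train the model as in the question ...
    acc = roc_auc_score(y_val, model.predict(X_val, batch_size=128, verbose=0))
    if acc > best_auc:           # higher AUC is better
        best_auc = acc
        model.save("model.hd5")  # overwrite the previous best model
    return {'loss': -acc, 'status': STATUS_OK}

The file-based approach from the answer follows: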

def f_nn(params):
    ...
    # I omit a part of the code
    pred_auc = model.predict(X_val, batch_size=128, verbose=0)
    acc = roc_auc_score(y_val, pred_auc)

    try:
        with open("metric.txt") as f:
            best_acc = float(f.read().strip())  # read best metric so far
    except FileNotFoundError:
        best_acc = -1.0  # no previous run: any AUC will beat this

    if acc > best_acc:  # higher AUC is better
        model.save("model.hd5")  # save best model to disk
        with open("metric.txt", "w") as f:
            f.write(str(acc))    # and overwrite the metric

    print('AUC:', acc)
    sys.stdout.flush()
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f_nn, space, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:', best)

from keras.models import load_model
best_model = load_model("model.hd5")

This approach has several advantages: you keep the metric and the model together, and you can even put them under version control (or a data-versioning system), so you can restore the results of an experiment in the future.

Edit
This can cause unexpected behaviour if a metric file is left over from a previous run and you don't delete it. To avoid that, adapt the code: remove the metric file after the optimization finishes, or use a timestamp or similar to distinguish your experiments' data.
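For example, a minimal sketch of the timestamp idea (the run_dir layout is my own choice, not part of the answer):

import os
import time

# One directory per optimization run, so metric/model files never collide.
run_dir = time.strftime("run_%Y%m%d_%H%M%S")
os.makedirs(run_dir, exist_ok=True)

metric_path = os.path.join(run_dir, "metric.txt")
model_path = os.path.join(run_dir, "model.hd5")
# ... use metric_path and model_path inside f_nn instead of the bare filenames ...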




Answer 2:


The Trials object stores a lot of relevant information about each hyperopt iteration. We can also ask it to store the trained model. To achieve this, you have to make a few small changes to your code base.

-- return {'loss': -acc, 'status': STATUS_OK}
++ return {'loss': -acc, 'status': STATUS_OK, 'Trained_Model': model}

Note: 'Trained_Model' is just a key; you can use any other string.

best = fmin(f_nn, space, algo=tpe.suggest, max_evals=100, trials=trials)
model = getBestModelfromTrials(trials)

Retrieve the trained model from the trials object:

import numpy as np
from hyperopt import STATUS_OK

def getBestModelfromTrials(trials):
    # keep only the trials that completed successfully
    valid_trial_list = [trial for trial in trials
                        if STATUS_OK == trial['result']['status']]
    losses = [float(trial['result']['loss']) for trial in valid_trial_list]
    index_having_minimum_loss = np.argmin(losses)
    best_trial_obj = valid_trial_list[index_having_minimum_loss]
    return best_trial_obj['result']['Trained_Model']
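
Once retrieved, the model can be persisted like any other Keras model (a minimal sketch; the filename is arbitrary):

# Retrieve the best model from the trials object and persist it.
best_model = getBestModelfromTrials(trials)
best_model.save("best_model.h5")  # architecture + weights in one file

# Later, reload it:
from keras.models import load_model
best_model = load_model("best_model.h5")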

Note: I have used this approach with scikit-learn classes.
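
For the scikit-learn case the answer mentions, joblib is the usual way to persist the retrieved estimator (a sketch; joblib.dump is standard scikit-learn practice, not part of the answer):

import joblib

# A scikit-learn estimator stored the same way in the trial result
# can be persisted with joblib instead of Keras's model.save().
best_estimator = getBestModelfromTrials(trials)
joblib.dump(best_estimator, "best_estimator.joblib")
best_estimator = joblib.load("best_estimator.joblib")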




Answer 3:


Make f_nn return the model.

def f_nn(params):
    # ...
    return {'loss': -acc, 'status': STATUS_OK, 'model': model}

The models will then be available on the trials object under results. I put in some sample data, and print(trials.results) produced:

[{'loss': 2.8245880603790283, 'status': 'ok', 'model': <keras.engine.training.Model object at 0x000001D725F62B38>}, {'loss': 2.4592788219451904, 'status': 'ok', 'model': <keras.engine.training.Model object at 0x000001D70BC3ABA8>}]

Use np.argmin to find the smallest loss, then retrieve the model and save it with model.save:

best_model = trials.results[np.argmin([r['loss'] for r in trials.results])]['model']
best_model.save("best_model.h5")

(Side note, in C# this would be trials.results.min(r => r.loss).model... if there's a better way to do this in Python please let me know!)
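
In Python, the same lookup can be written with the built-in min and a key function (my suggestion, not part of the original answer):

# Equivalent to the argmin expression above, using the built-in min()
best_model = min(trials.results, key=lambda r: r['loss'])['model']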

You may wish to use attachments on the trial object if you're using MongoDB, as the model may be very large:

attachments - a dictionary of key-value pairs whose keys are short strings (like filenames) and whose values are potentially long strings (like file contents) that should not be loaded from a database every time we access the record. (Also, MongoDB limits the length of normal key-value pairs so once your value is in the megabytes, you may have to make it an attachment.) Source.
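
A minimal sketch of how the attachments mechanism could be used here, assuming the model is serialized to bytes first (Keras models are not reliably picklable, so this saves to a temporary HDF5 file and reads the bytes back; the key name model_bytes is my own):

import os
import tempfile

def f_nn(params):
    # ... build, train, and evaluate the model as above ...
    # Serialize the Keras model to bytes via a temporary HDF5 file.
    fd, tmp_path = tempfile.mkstemp(suffix=".h5")
    os.close(fd)
    model.save(tmp_path)
    with open(tmp_path, "rb") as f:
        model_bytes = f.read()
    os.remove(tmp_path)

    return {'loss': -acc, 'status': STATUS_OK,
            'attachments': {'model_bytes': model_bytes}}

# After fmin(), an attachment is read back per trial:
# model_bytes = trials.trial_attachments(trials.trials[0])['model_bytes']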



Source: https://stackoverflow.com/questions/54273199/how-to-save-the-best-hyperopt-optimized-keras-models-and-its-weights
