How to get 'keys' in batch predictions with ml-engine using a custom model?

Asked by 霸气de小男生 on 2020-01-16 17:04:08

Question


I have been working on deploying a custom estimator (TensorFlow model). Training on ml-engine works fine, but when I run ml-engine predictions in batch mode I cannot get the key (or any id of the original input) back. Since batch prediction runs in distributed mode, the output order is not guaranteed, so "keys" are needed to match each prediction to its input. I found this post that solves the problem, but only for a pre-made (canned) TensorFlow model (the census use case). How can I adapt my custom model (tf.contrib.learn.Estimator()) in order to get "keys" in the predictions? An example of my output file:

{"predicted": [0.04930919408798218, 0.05402487516403198, 0.059984803199768066, 0.017936021089553833]}

And my model function is as follows:

import tensorflow as tf

SEQ_LEN = 12
DEFAULTS = [[0.0] for x in range(0, SEQ_LEN)]
BATCH_SIZE = 32
TIMESERIES_COL = 'rawdata'
N_OUTPUTS = 4  # in each sequence, 1-8 are features, and 9-12 are labels
N_INPUTS = SEQ_LEN - N_OUTPUTS
LSTM_SIZE = 10  # number of hidden units in the LSTM cell
LAMBDA_L2_REG = 0  # regularization coefficient


def simple_rnn(features, targets, mode):
    # 0. Reformat input shape to become a sequence
    x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1)
    #print 'x={}'.format(x)

    # 1. configure the RNN
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
    outputs, _ = tf.contrib.rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

    # slice to keep only the last cell of the RNN
    outputs = outputs[-1]
    #print 'last outputs={}'.format(outputs)

    # output is result of linear activation of last layer of RNN
    w = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
    b = tf.Variable(tf.random_normal([N_OUTPUTS]))
    predictions = tf.matmul(outputs, w) + b

    # 2. loss function, training/eval ops
    if mode == tf.contrib.learn.ModeKeys.TRAIN or mode == tf.contrib.learn.ModeKeys.EVAL:
        l2_reg = tf.reduce_mean(tf.nn.l2_loss(w))
        loss = tf.losses.mean_squared_error(targets, predictions)+LAMBDA_L2_REG*l2_reg
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss,
            global_step=tf.contrib.framework.get_global_step(),
            #learning_rate=0.01,
            learning_rate = tf.train.exponential_decay(0.01, tf.contrib.framework.get_global_step(),500, 0.96, staircase=True),
            optimizer="Adam",
            clip_gradients=2.5)
        eval_metric_ops = {
            "rmse": tf.metrics.root_mean_squared_error(targets, predictions)
        }
    else:
        loss = None
        train_op = None
        eval_metric_ops = None

    # 3. Create predictions
    predictions_dict = {"predicted": predictions}

    # 4. return ModelFnOps
    return tf.contrib.learn.ModelFnOps(
        mode=mode,
        predictions=predictions_dict,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops)

I am using Python 2.7 and TensorFlow 1.6. Thanks in advance!


Answer 1:


What you are looking for is forward_features. However, there is a bug in that function that keeps the model export from working correctly; the fix doesn't look like it will land until TF 1.8.

There is more info in this answer, including a potential workaround, repeated here for your convenience (taken from this code sample):

def forward_key_to_export(estimator):
    estimator = tf.contrib.estimator.forward_features(estimator, KEY_COLUMN)
    # return estimator  # once the bug is fixed, returning here is all you need

    ## This shouldn't be necessary (I've filed CL/187793590 to update extenders.py with this code)
    config = estimator.config
    def model_fn2(features, labels, mode):
        estimatorSpec = estimator._call_model_fn(features, labels, mode, config=config)
        if estimatorSpec.export_outputs:
            for ekey in ['predict', 'serving_default']:
                if (ekey in estimatorSpec.export_outputs and
                        isinstance(estimatorSpec.export_outputs[ekey],
                                   tf.estimator.export.PredictOutput)):
                    # rebuild PredictOutput so the exported serving signature
                    # includes the forwarded key alongside the predictions
                    estimatorSpec.export_outputs[ekey] = \
                        tf.estimator.export.PredictOutput(estimatorSpec.predictions)
        return estimatorSpec
    return tf.estimator.Estimator(model_fn=model_fn2, config=config)
    ##

To use it, you would do something like this:

estimator = build_estimator(...)
estimator = forward_key_to_export(estimator)
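
For a custom model, the key also has to reach the model at prediction time, since forward_features can only forward a feature that is actually fed in. That means the serving input function must accept the key as well. A minimal sketch, assuming the tf.estimator export path used above and a hypothetical KEY_COLUMN = 'key' field name of your choosing:

KEY_COLUMN = 'key'  # hypothetical field name; must match the field in your input JSON

def serving_input_fn():
    feature_placeholders = {
        # the raw timeseries input, as in the question
        TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS]),
        # pass-through key; defaults to 'nokey' if a client omits it
        KEY_COLUMN: tf.placeholder_with_default(tf.constant(['nokey']), [None]),
    }
    return tf.estimator.export.ServingInputReceiver(
        features=feature_placeholders,
        receiver_tensors=feature_placeholders)

Each line of the batch prediction input would then carry both fields, e.g. {"rawdata": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], "key": "id-0001"}, and the key gets echoed back next to "predicted" in the output file.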


Source: https://stackoverflow.com/questions/49542369/how-to-get-keys-in-batch-predictions-with-ml-engine-using-a-custom-model
