Question
I'm ready to try out my TensorFlow Serving REST API based on a saved model, and was wondering if there was an easy way to generate the JSON instances (row-based) or inputs (columnar) I need to send with my request.
I have several thousand features in my model and I would hate to manually type in a JSON. Is there a way I can use existing data to come up with serialized data I can throw at the predict API?
I'm using TFX for the entire pipeline (incl. tf.Transform), so I'm not sure if there is a neat way built into TFX I can use.
The output from saved_model_cli is this:
The given SavedModel SignatureDef contains the following input(s):
  inputs['examples'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: input_example_tensor:0
Which does not tell me much.
Answer 1:
You can use a Python REST client to make the call programmatically, instead of composing the request by hand. Here is sample code in the tensorflow_serving GitHub repository:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/resnet_client.py
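For instance, a minimal sketch of such a REST client using the `requests` library (the model name, URL, and input shape here are illustrative assumptions, not from the question):

```python
"""Minimal REST-client sketch in the spirit of resnet_client.py.

The SERVER_URL (including the model name "my_model") is an assumption;
replace it with your own serving endpoint.
"""
import json

import requests

SERVER_URL = "http://localhost:8501/v1/models/my_model:predict"


def build_payload(instances):
    # Row-based ("instances") request body for the TF Serving REST API.
    return json.dumps({"instances": instances})


def predict(instances):
    # POST the JSON body to the :predict endpoint and return the predictions.
    response = requests.post(SERVER_URL, data=build_payload(instances))
    response.raise_for_status()
    return response.json()["predictions"]
```

Calling `predict([[1.0, 2.0, 3.0]])` would then send one three-feature row to the server, assuming the model accepts raw numeric tensors rather than serialized Examples.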
Answer 2:
You can try the below code (Inputs is assumed to be a pandas DataFrame holding your existing data):
import tensorflow as tf

examples = []
for _, row in Inputs.iterrows():
    example = tf.train.Example()
    for col, value in row.items():  # row.iteritems() was removed in newer pandas
        example.features.feature[col].float_list.value.append(value)
    examples.append(example)
print(examples)
Its output will be the list of tf.train.Example protos, printed in protobuf text format (not JSON), as shown below:
[features {
  feature {
    key: "PetalLength"
    value {
      float_list {
        value: 5.900000095367432
      }
    }
  }
  feature {
    key: "PetalWidth"
    value {
      float_list {
        value: 2.0999999046325684
      }
    }
  }
  feature {
    key: "SepalLength"
    value {
      float_list {
        value: 7.099999904632568
      }
    }
  }
  feature {
    key: "SepalWidth"
    value {
      float_list {
        value: 3.0
      }
    }
  }
}
]
Then you can perform inference via the REST API. Note that because the signature's input is a DT_STRING tensor of serialized tf.train.Example protos, each Example must first be serialized (example.SerializeToString()) and base64-encoded under the API's special "b64" key; the examples list cannot be pasted into the JSON body as-is. The request takes this shape:
curl -d '{"inputs": [{"b64": "<base64 of serialized Example>"}]}' \
  -X POST http://localhost:8501/v1/models/1554294699:predict
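Putting the pieces together, here is a hedged sketch of building the full request body in Python. The feature names and values are illustrative (borrowed from the iris-style output above), and the endpoint URL is an assumption; the actual POST is commented out since it needs a running model server:

```python
# Sketch: turn tf.train.Example protos into a TF Serving REST request body.
# Feature names/values are illustrative; the endpoint URL is an assumption.
import base64
import json

import tensorflow as tf

# Build one Example the same way as in the answer above.
example = tf.train.Example()
for col, value in {"PetalLength": 5.9, "PetalWidth": 2.1,
                   "SepalLength": 7.1, "SepalWidth": 3.0}.items():
    example.features.feature[col].float_list.value.append(value)

# The signature's input is DT_STRING (serialized Examples), so each proto is
# serialized and base64-encoded under the REST API's special "b64" key.
instances = [
    {"b64": base64.b64encode(example.SerializeToString()).decode("utf-8")}
]
payload = json.dumps({"instances": instances})
print(payload)

# To send it (requires a running model server):
# import requests
# resp = requests.post("http://localhost:8501/v1/models/1554294699:predict",
#                      data=payload)
# print(resp.json())
```

This generalizes directly to the several-thousand-feature case: the inner loop over columns does the typing for you, and the request body stays a short list of base64 strings regardless of feature count.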
Source: https://stackoverflow.com/questions/55632362/generate-instances-or-inputs-for-tensorflow-serving-rest-api