How do I modify an exported Keras model to accept a b64 string for a RESTful API / Google Cloud ML?


Looking at the docs you've provided, what you want to do is take the image and send it to the API. Images are easy to transfer in a text format if you encode them, and base64 is pretty much the standard for that. So we want to create a JSON object with the base64 image in the right place and then send that JSON object to the REST API. Python's requests library makes sending a Python dictionary as JSON very easy.

So take the image, encode it, put it in a dictionary and send it off using requests:

import requests
import base64

# Read the image and base64-encode it; decode the bytes to str so it is JSON-serializable
with open("image.png", "rb") as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

# TensorFlow Serving's REST API expects the base64 payload under a "b64" key
object_for_api = {"signature_name": "predict",
                  "instances": [
                      {
                          "image": {"b64": encoded_image}
                      }]
                  }

response = requests.post(url='http://localhost:8501/v1/models/mnist:predict',
                         json=object_for_api)
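If you capture the response as above, you can pull the result straight out of the JSON body. A minimal sketch, assuming the server follows TensorFlow Serving's REST format, where results come back under a "predictions" key:

response.raise_for_status()                     # surface HTTP errors instead of ignoring them
predictions = response.json()["predictions"]    # one entry per instance sent in the request
print(predictions[0])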

You could also encode your NumPy array as JSON, but that doesn't seem to be what the API docs are asking for.
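For reference, a minimal sketch of what that would look like, assuming your serving signature took the decoded float tensor directly (the "image" input key is just carried over from the example above):

import numpy as np
import requests

image_array = np.zeros((28, 28, 1), dtype=np.float32)   # placeholder for a decoded MNIST image

payload = {"signature_name": "predict",
           "instances": [
               {"image": image_array.tolist()}           # tolist() makes the array JSON-serializable
           ]}

requests.post(url='http://localhost:8501/v1/models/mnist:predict', json=payload)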

A few side notes:

  1. I encourage you to use tf.saved_model.simple_save (a minimal export sketch follows this list).
  2. You may find model_to_estimator convenient.
  3. While your model seems like it will work for requests (the output of saved_model_cli shows the outer dimension is None for both inputs and outputs), it's fairly inefficient to send JSON arrays of floats.
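On the first note, here is a minimal export sketch using tf.saved_model.simple_save. This is the TF 1.x API (deprecated in TF 2.x), and the model, export directory, and tensor key names are just placeholders:

import tensorflow as tf
from tensorflow import keras

# Stand-in model; substitute your trained Keras model here
model = keras.Sequential([
    keras.layers.Dense(10, activation='softmax', input_shape=(784,))
])

# simple_save needs the active session plus input/output tensor mappings
sess = tf.keras.backend.get_session()
tf.saved_model.simple_save(
    sess,
    export_dir="./export/1",          # version subdirectory, as TF Serving expects
    inputs={"image": model.input},
    outputs={"scores": model.output})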

To the last point, it's often easier to modify the code to do the image decoding server side, so you're sending a base64-encoded JPG or PNG over the wire instead of an array of floats. Here's one example for Keras (I plan to update that answer with simpler code).
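A minimal sketch of that idea, assuming a TF 1.x Keras MNIST model: the serving graph takes a batch of PNG byte strings, decodes them, and feeds the floats to the model. The preprocess helper, export directory, and the "image_bytes" / "scores" keys are illustrative, not anything your model already defines; if memory serves, Cloud ML Engine expects string inputs carrying base64 data to have names ending in _bytes:

import tensorflow as tf
from tensorflow import keras

# Stand-in for your trained Keras MNIST model
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28, 1)),
    keras.layers.Dense(10, activation='softmax'),
])

def preprocess(png_bytes):
    # Decode a single PNG string into a float image in [0, 1]
    image = tf.image.decode_png(png_bytes, channels=1)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return tf.reshape(image, (28, 28, 1))

# Serving graph: batch of PNG byte strings -> batch of predictions
serving_input = tf.placeholder(tf.string, shape=[None], name="image_bytes")
images = tf.map_fn(preprocess, serving_input, dtype=tf.float32)
scores = model(images)

sess = tf.keras.backend.get_session()
tf.saved_model.simple_save(
    sess,
    export_dir="./export_b64/1",
    inputs={"image_bytes": serving_input},   # the REST request then sends {"image_bytes": {"b64": ...}}
    outputs={"scores": scores})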
