Question
I am trying to use an embeddings module from TensorFlow Hub as a servable. I am new to TensorFlow. Currently, I am using the Universal Sentence Encoder embeddings as a lookup to convert sentences to embeddings, and then using those embeddings to find the similarity to another sentence.
My current code to convert sentences into embeddings is:
with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    sen_embeddings = session.run(self.embed(prepared_text))
Here prepared_text is a list of sentences. How do I take this model and make it a servable?
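For reference, the similarity step itself is straightforward once I have the embeddings back; a minimal NumPy cosine-similarity sketch (the helper name is mine and not part of the serving question):

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors returned by the encoder.
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# e.g. score = cosine_similarity(sen_embeddings[0], sen_embeddings[1])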
Answer 1:
Right now you probably need to do this by hand. Here is my solution, similar to the previous answer but more general: it shows how to use any other module without guessing the input parameters, and it is extended with verification and usage instructions:
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.saved_model import simple_save

export_dir = "/tmp/tfserving/universal_encoder/00000001"

with tf.Session(graph=tf.Graph()) as sess:
    module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
    input_params = module.get_input_info_dict()
    # take a look at which tensors the module accepts - 'text' is the input tensor name
    text_input = tf.placeholder(name='text', dtype=input_params['text'].dtype,
                                shape=input_params['text'].get_shape())
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    embeddings = module(text_input)

    simple_save(sess,
                export_dir,
                inputs={'text': text_input},
                outputs={'embeddings': embeddings},
                legacy_init_op=tf.tables_initializer())
Thanks to module.get_input_info_dict() you know which tensor names you need to pass to the model; you use that name as the key in inputs={} of the simple_save call.
Remember that to serve the model, it needs to be in a directory path that ends with a version number; that is why '00000001' is the last path component in which saved_model.pb resides.
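The resulting layout should look roughly like this (the variables directory is created by simple_save; an assets directory may also appear depending on the module):

/tmp/tfserving/universal_encoder/
    00000001/
        saved_model.pb
        variables/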
After exporting your module, the quickest way to check that your model was exported properly for serving is to use the saved_model_cli tool:
saved_model_cli run --dir /tmp/tfserving/universal_encoder/00000001 --tag_set serve --signature_def serving_default --input_exprs 'text=["what this is"]'
To serve the model with Docker:
docker pull tensorflow/serving
docker run -p 8501:8501 -v /tmp/tfserving/universal_encoder:/models/universal_encoder -e MODEL_NAME=universal_encoder -t tensorflow/serving
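Once the container is up, you can sanity-check it through the TensorFlow Serving REST API. A minimal sketch, assuming the container from the command above is running locally and the requests package is installed:

import json
import requests

# The model name below matches MODEL_NAME=universal_encoder from the docker command.
url = "http://localhost:8501/v1/models/universal_encoder:predict"
payload = {"instances": ["what this is", "another sentence"]}

response = requests.post(url, data=json.dumps(payload))
embeddings = response.json()["predictions"]  # one embedding vector per input sentence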
Answer 2:
Currently, the hub modules cannot be consumed by TensorFlow Serving directly. You will have to load the module into an empty graph and then export it using the SavedModelBuilder. For example:
import tensorflow as tf
import tensorflow_hub as hub

with tf.Graph().as_default():
    module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
    text = tf.placeholder(tf.string, [None])
    embedding = module(text)

    init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])

    with tf.Session() as session:
        session.run(init_op)
        tf.saved_model.simple_save(
            session,
            "/tmp/serving_saved_model",
            inputs={"text": text},
            outputs={"embedding": embedding},
            legacy_init_op=tf.tables_initializer()
        )
This will export your model (to the folder /tmp/serving_saved_model) in the desired format for serving. After this, you can follow the instructions given in the documentation here: https://www.tensorflow.org/serving/serving_basic
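Before pointing TensorFlow Serving at the export, you can also load it back into a fresh session to confirm the signature works end to end. A minimal sketch (not part of the original answer), assuming the export directory and input/output names from the code above:

import tensorflow as tf

export_dir = "/tmp/serving_saved_model"

with tf.Session(graph=tf.Graph()) as session:
    # Loading also runs the legacy_init_op, so the lookup tables get initialized.
    meta_graph = tf.saved_model.loader.load(
        session, [tf.saved_model.tag_constants.SERVING], export_dir)
    signature = meta_graph.signature_def["serving_default"]
    text_name = signature.inputs["text"].name
    embedding_name = signature.outputs["embedding"].name
    result = session.run(embedding_name, feed_dict={text_name: ["hello world"]})
    print(result.shape)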
Source: https://stackoverflow.com/questions/50788080/how-to-make-the-tensorflow-hub-embeddings-servable-using-tensorflow-serving