tensorflow-serving

Invalid argument --model_config_file_poll_wait_seconds

倖福魔咒の submitted on 2019-12-22 15:00:24
Question: I'm trying to start tensorflow-serving with the following two options, as shown in the documentation:

docker run -t --rm -p 8501:8501 \
    -v "$(pwd)/models/:/models/" tensorflow/serving \
    --model_config_file=/models/models.config \
    --model_config_file_poll_wait_seconds=60

The container does not start because it does not recognize the argument --model_config_file_poll_wait_seconds:

unknown argument: --model_config_file_poll_wait_seconds=60
usage: tensorflow_model_server

I'm on the latest docker image,
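This flag only exists in sufficiently recent builds of tensorflow_model_server, so an older image rejects it exactly like this; re-pulling the image may already help. Independent of the original question, one workaround is to skip file polling entirely and push a new configuration to the server over gRPC. A minimal sketch, assuming the tensorflow-serving-api package is installed, the gRPC port 8500 is published, and a model lives at /models/my_model (hypothetical name and path):

# Sketch: trigger a model-config reload via TF Serving's gRPC ModelService.
import grpc
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2

channel = grpc.insecure_channel("localhost:8500")
stub = model_service_pb2_grpc.ModelServiceStub(channel)

# Build the new server config; the name and path are placeholders.
request = model_management_pb2.ReloadConfigRequest()
model_config = request.config.model_config_list.config.add()
model_config.name = "my_model"
model_config.base_path = "/models/my_model"
model_config.model_platform = "tensorflow"

# The server validates the new config and swaps it in if it is valid.
response = stub.HandleReloadConfigRequest(request, timeout=10.0)
print(response.status)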

Empty variables folder after building retrained inception SavedModel

我是研究僧i submitted on 2019-12-22 10:59:25
Question: I'm trying to export my retrained inception model. I've read this almost identical question here, as well as the resources mentioned there. But after exporting the graph, the variables folder is empty, when it should contain the files that hold the serialized variables of the graph (saved_model.pb is created correctly, I'd say). I'm using TensorFlow 1.2.1 and Python 3.5.2. I've even put a simple print(tf.trainable_variables()) inside the session, but it prints an empty list. Here's my function to export the
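An empty tf.trainable_variables() list usually means the session's graph never had the retrained weights in it: the checkpoint has to be restored into the same graph that gets exported. A minimal sketch of that pattern, with placeholder paths:

# Sketch: restore the retrained checkpoint into a graph, then export it
# so that the SavedModel's variables/ folder is actually populated.
import tensorflow as tf

checkpoint_path = "./checkpoints/model.ckpt"  # hypothetical path
export_dir = "./export/1"                     # hypothetical path

with tf.Graph().as_default() as graph:
    # Recreate the variables by importing the training meta graph
    # (alternatively, rebuild the model with the same code used to train).
    saver = tf.train.import_meta_graph(checkpoint_path + ".meta")
    with tf.Session(graph=graph) as sess:
        saver.restore(sess, checkpoint_path)
        print(tf.trainable_variables())  # should now be non-empty
        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING])
        builder.save()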

How to make the tensorflow hub embeddings servable using tensorflow serving?

南楼画角 submitted on 2019-12-21 11:04:07
Question: I am trying to use an embeddings module from tensorflow hub as a servable. I am new to tensorflow. Currently, I am using the Universal Sentence Encoder embeddings as a lookup to convert sentences to embeddings, and then using those embeddings to find the similarity to another sentence. My current code to convert sentences into embeddings is:

with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    sen_embeddings = session.run(self.embed(prepared_text))
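To make a hub module servable, the usual recipe is to instantiate it inside a fresh graph behind a string placeholder and export that graph as a SavedModel; because the Universal Sentence Encoder uses lookup tables, the table initializer must be registered so Tensorflow Serving runs it when loading the model. A sketch, with a placeholder export path:

# Sketch: export a TF Hub text-embedding module as a servable SavedModel.
import tensorflow as tf
import tensorflow_hub as hub

export_dir = "./use_servable/1"  # hypothetical path

with tf.Graph().as_default():
    text = tf.placeholder(tf.string, shape=[None], name="text")
    embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
    embeddings = embed(text)

    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={"text": text}, outputs={"embeddings": embeddings})
        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            signature_def_map={"serving_default": signature},
            # Serving must re-run the table initializer at load time.
            legacy_init_op=tf.tables_initializer())
        builder.save()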

Tensorflow serving: “No assets to save/writes” when exporting models

≡放荡痞女 submitted on 2019-12-21 07:06:17
Question: Recently I have been trying to deploy deep learning services using tensorflow serving, but I got the following messages when exporting my model:

INFO:tensorflow: No assets to save
INFO:tensorflow: No assets to write
INFO:tensorflow: SavedModel written to: b'./models/1/saved_model.pb'

I don't really understand what is happening here. What does "No assets to save/write" mean? Is everything going well? By the way, running the official example, Serving a tensorflow model, I got the same messages.

Answer 1: Assets mean any
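As the (truncated) answer says, the messages are harmless: assets are auxiliary external files, such as vocabularies, that get copied into the SavedModel's assets/ directory, and most models simply don't register any. A sketch of how a file would be registered as an asset, using a hypothetical vocab.txt:

# Sketch: register an external file as a SavedModel asset. The builder
# copies every file named in the ASSET_FILEPATHS collection into assets/.
import tensorflow as tf

vocab_path = tf.constant("vocab.txt", name="vocab_path")  # hypothetical file
tf.add_to_collection(tf.GraphKeys.ASSET_FILEPATHS, vocab_path)

builder = tf.saved_model.builder.SavedModelBuilder("./models/2")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        assets_collection=tf.get_collection(tf.GraphKeys.ASSET_FILEPATHS))
builder.save()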

Xcode version must be specified to use an Apple CROSSTOOL

[亡魂溺海] submitted on 2019-12-20 08:36:40
Question: I am trying to build tensorflow-serving using bazel, but I've encountered some errors during the build:

ERROR: /private/var/tmp/_bazel_Kakadu/3f0c35881c95d2c43f04614911c03a57/external/local_config_cc/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL.
ERROR: Analysis of target '//tensorflow_serving/sources/storage_path:file_system_storage_path_source_proto' failed; build aborted.

I've already tried to use

How do I change the Signatures of my SavedModel without retraining the model?

允我心安 submitted on 2019-12-19 04:16:55
Question: I just finished training my model, only to find out that the model I exported for serving had problems with its signatures. How do I update them? (One common problem is setting the wrong shape for CloudML Engine.)

Answer 1: Don't worry -- you don't need to retrain your model. That said, there is a little work to be done. You're going to create a new (corrected) serving graph, load the checkpoints into that graph, and then export this graph. For example, suppose you add a placeholder, but didn't
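A minimal sketch of that recipe, with placeholder names (build_model stands for whatever code originally constructed the network):

# Sketch: rebuild the serving graph with a corrected signature, restore
# the trained checkpoint, and export again -- no retraining involved.
import tensorflow as tf

checkpoint_path = "./checkpoints/model.ckpt"  # hypothetical path
export_dir = "./corrected_export/1"           # hypothetical path

with tf.Graph().as_default() as graph:
    # Corrected input placeholder, e.g. with the shape CloudML expects.
    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    predictions = build_model(x)  # hypothetical model-building function

    saver = tf.train.Saver()
    with tf.Session(graph=graph) as sess:
        saver.restore(sess, checkpoint_path)  # reuse the trained weights
        tf.saved_model.simple_save(
            sess, export_dir,
            inputs={"x": x},
            outputs={"predictions": predictions})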

How to retrieve float_val from a PredictResponse object?

亡梦爱人 submitted on 2019-12-18 15:34:15
Question: I am running a prediction on a tensorflow-serving model, and I get back this PredictResponse object as output:

Result: outputs {
  key: "outputs"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim { size: 1 }
      dim { size: 20 }
    }
    float_val: 0.000343723397236
    float_val: 0.999655127525
    float_val: 3.96821117632e-11
    float_val: 1.20521548297e-09
    float_val: 2.09611101809e-08
    float_val: 1.46216549979e-09
    float_val: 3.87274603497e-08
    float_val: 1.83520256769e-08
    float_val: 1.47733780764e-08
    float_val: 8
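The outputs map holds TensorProto messages, so the values can be read straight from the repeated float_val field or converted to a properly shaped numpy array. A sketch, where result stands for the response object above:

# Sketch: extract the predictions from a PredictResponse.
import tensorflow as tf

# Plain Python list of the floats:
values = list(result.outputs["outputs"].float_val)

# Or a numpy array with the advertised shape (1, 20):
array = tf.make_ndarray(result.outputs["outputs"])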

How to do batching in Tensorflow Serving?

时光怂恿深爱的人放手 submitted on 2019-12-18 11:36:24
Question: I deployed Tensorflow Serving and ran the test for Inception-V3. It works fine. Now I would like to do batching when serving Inception-V3, e.g. send 10 images for prediction instead of one. How do I do that? Which files do I update (inception_saved_model.py or inception_client.py), and what do those updates look like? And how are the images passed to the server -- as a folder containing images, or how? I'd appreciate some insight into this issue. Any code snippet related to this will be
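On the client side, batching just means stacking the images along a leading batch dimension and sending a single PredictRequest, provided the exported signature accepts a batch dimension. A sketch, where the model name "inception", the input key "images", and load_image are all placeholders:

# Sketch: send a batch of images in a single gRPC Predict call.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Stack 10 preprocessed images into one [10, H, W, 3] array.
batch = np.stack([load_image(path) for path in image_paths])

request = predict_pb2.PredictRequest()
request.model_spec.name = "inception"
request.inputs["images"].CopyFrom(
    tf.make_tensor_proto(batch, dtype=tf.float32))

response = stub.Predict(request, 10.0)  # one RPC, ten predictions

Server-side batching, by contrast, is enabled with tensorflow_model_server's --enable_batching flag and needs no client changes beyond concurrent requests.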

Serving Keras Models With Tensorflow Serving

做~自己de王妃 submitted on 2019-12-18 11:13:49
Question: Right now we are successfully able to serve models using Tensorflow Serving. We have used the following method to export the model and host it with Tensorflow Serving.

------------ For exporting ------------------
from tensorflow.contrib.session_bundle import exporter

K.set_learning_phase(0)
export_path = ...  # where to save the exported graph
export_version = ...  # version number (integer)

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter
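The session_bundle exporter used above is the older, since-deprecated format; the equivalent export with the SavedModel API is short. A sketch, assuming model is an already-trained Keras model and the path is a placeholder:

# Sketch: export a Keras model as a SavedModel instead of a session
# bundle, reusing the Keras session that holds the trained weights.
import tensorflow as tf
from keras import backend as K

K.set_learning_phase(0)  # inference mode: freeze dropout/batch-norm
export_path = "./keras_export/1"  # hypothetical path

tf.saved_model.simple_save(
    K.get_session(), export_path,
    inputs={"input": model.input},
    outputs={"output": model.output})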

TensorFlow: How to predict from a SavedModel?

倾然丶 夕夏残阳落幕 submitted on 2019-12-18 11:09:43
Question: I have exported a SavedModel and now I wish to load it back in and make a prediction. It was trained with the following features and labels:

F1 : FLOAT32
F2 : FLOAT32
F3 : FLOAT32
L1 : FLOAT32

So, say I want to feed in the values 20.9, 1.8, 0.9 and get back a single FLOAT32 prediction. How do I accomplish this? I have managed to load the model successfully, but I am not sure how to access it to make the prediction call.

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf
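The loader returns the MetaGraphDef, whose signature_def map gives the actual input and output tensor names to feed and fetch. A sketch, where the export directory and signature keys are placeholders:

# Sketch: load a SavedModel and run one prediction through its
# serving signature.
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "./export_dir")

    # Read the real tensor names out of the serving signature.
    signature = meta_graph.signature_def["serving_default"]
    input_name = signature.inputs["inputs"].name        # hypothetical key
    output_name = signature.outputs["prediction"].name  # hypothetical key

    result = sess.run(output_name,
                      feed_dict={input_name: [[20.9, 1.8, 0.9]]})
    print(result)  # a single FLOAT32 prediction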