tensorflow-serving

TensorFlow in production for real time predictions in high traffic app - how to use?

﹥>﹥吖頭↗ submitted on 2019-11-30 04:52:42
What is the right way to use TensorFlow for real-time predictions in a high-traffic application? Ideally I would have a server/cluster running TensorFlow, listening on a port (or ports), to which I could connect from app servers and get predictions, similar to the way databases are used. Training should be done by cron jobs feeding the training data through the network to the same server/cluster. How does one actually use TensorFlow in production? Should I build a setup where Python runs as a server and use Python scripts to get predictions? I'm still new to this, but I feel that such a script will …
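
A common setup matching what the question describes is to run TensorFlow Serving as a standalone model server and query it over the network from the app servers. As a minimal sketch (not a definitive answer), assuming a TensorFlow Serving instance exposing its REST API on port 8501 and serving a hypothetical model named my_model:

    import json
    import urllib.request

    # Hypothetical endpoint: TensorFlow Serving's REST API listens on the port
    # given by --rest_api_port (8501 here) and routes requests by model name.
    URL = "http://localhost:8501/v1/models/my_model:predict"

    def predict(features):
        # The REST predict API takes a JSON body with an "instances" list,
        # one entry per example; the server returns a "predictions" list.
        body = json.dumps({"instances": [features]}).encode("utf-8")
        req = urllib.request.Request(
            URL, data=body, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["predictions"]

    print(predict([1.0, 2.0, 3.0]))  # input shape must match the model's signature

App servers then treat the model server like any other backend service, which is essentially the database-like pattern the question asks about.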

How to do batching in Tensorflow Serving?

大城市里の小女人 submitted on 2019-11-30 04:06:01
I deployed TensorFlow Serving and ran the test for Inception-V3; it works fine. Now I would like to do batching for serving Inception-V3, e.g. send 10 images for prediction instead of one. How do I do that? Which files do I update (inception_saved_model.py or inception_client.py), and what do those updates look like? Also, how are the images passed to the server: as a folder containing images, or how? I'd appreciate some insight into this issue; any related code snippet would be extremely helpful.

Updated inception_client.py:

    # Copyright 2016 Google …
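
For reference, client-side batching usually just means packing several encoded images into the first dimension of the request tensor, so the change lives mostly in inception_client.py. A hedged sketch of what that could look like (the signature name predict_images and the input name images come from the Inception serving example; the file paths and port are assumptions):

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:9000")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    # Images are passed as encoded bytes inside the request, not as a folder:
    # read each JPEG file and make it one element of the batch.
    paths = ["img0.jpg", "img1.jpg", "img2.jpg"]  # hypothetical file names
    data = [open(p, "rb").read() for p in paths]

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "inception"
    request.model_spec.signature_name = "predict_images"
    # The example's serving input is a 1-D string tensor of encoded images,
    # so a batch is simply a list with shape [len(data)].
    request.inputs["images"].CopyFrom(
        tf.make_tensor_proto(data, shape=[len(data)]))

    result = stub.Predict(request, 10.0)  # 10-second timeout
    print(result)

Note the exported model must accept a variable batch size (i.e. inception_saved_model.py must not hard-code a batch of 1), otherwise the server will reject requests carrying more than one image.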

How can I use tensorflow serving for multiple models

馋奶兔 submitted on 2019-11-30 03:23:23
Question: How can I use multiple TensorFlow models? I use a Docker container.

    model_config_list: {
      config: {
        name: "model1",
        base_path: "/tmp/model",
        model_platform: "tensorflow"
      },
      config: {
        name: "model2",
        base_path: "/tmp/model2",
        model_platform: "tensorflow"
      }
    }

Answer 1: Build a Docker image from the official TensorFlow Serving Dockerfile, then inside the Docker image run:

    /usr/local/bin/tensorflow_model_server --port=9000 --model_config_file=/serving/models.conf

Here /serving/models.conf is a file similar to yours.

Source: https://stackoverflow.com/questions/45749024/how-can-i-use-tensorflow-serving-for-multiple-models
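
As a follow-up, once both models are served by one process, a client picks a model per request by name. A minimal sketch under the assumptions above (gRPC on port 9000, model names from the config; the signature name is an assumption):

    import grpc
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:9000")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    # Select the served model; "model1"/"model2" match the `name` fields
    # in model_config_list above.
    request.model_spec.name = "model2"
    request.model_spec.signature_name = "serving_default"  # assumed signature
    # ... fill request.inputs[...] according to that model's signature,
    # then call: response = stub.Predict(request, 5.0)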

No variable to save error in Tensorflow

谁都会走 submitted on 2019-11-29 23:32:15
I am trying to save the model and then reuse it for classifying my images, but unfortunately I am getting errors when restoring the model that I saved. The code in which the model was created:

    # Deep Learning
    # =============
    #
    # Assignment 4
    # ------------

    # In[25]:

    # These are all the modules we'll be using later. Make sure you can import them
    # before proceeding further.
    from __future__ import print_function
    import numpy as np
    import tensorflow as tf
    from six.moves import cPickle as pickle
    from six.moves import range

    # In[37]:

    pickle_file = 'notMNIST.pickle'
    with open(pickle_file, 'rb') …
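
For orientation, the "No variables to save" error is typically raised when tf.train.Saver() is constructed in a graph that contains no variables yet, e.g. before the model-building code has run, or in a fresh graph at restore time. A minimal TF1-style sketch of the save/restore pattern (variable shapes and the checkpoint path are hypothetical):

    import tensorflow as tf  # TF1-style API (tf.compat.v1 in TF2)

    graph = tf.Graph()
    with graph.as_default():
        weights = tf.Variable(tf.random_normal([784, 10]), name="weights")
        biases = tf.Variable(tf.zeros([10]), name="biases")
        # Create the Saver AFTER the variables exist and INSIDE the same
        # graph; otherwise it finds nothing and raises "No variables to save".
        saver = tf.train.Saver()
        init = tf.global_variables_initializer()

    with tf.Session(graph=graph) as sess:
        sess.run(init)
        save_path = saver.save(sess, "/tmp/model.ckpt")

    # To restore, rebuild (or import) the same graph first, then restore
    # the variable values into a new session.
    with tf.Session(graph=graph) as sess:
        saver.restore(sess, save_path)
        print(sess.run(biases)[:3])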

What does google cloud ml-engine do when a Json request contains “_bytes” or “b64”?

血红的双手。 submitted on 2019-11-29 06:53:41
The Google Cloud documentation (see "Binary data in prediction input") states:

    Your encoded string must be formatted as a JSON object with a single key named b64. The following Python example encodes a buffer of raw JPEG data using the base64 library to make an instance:

    {"image_bytes": {"b64": base64.b64encode(jpeg_data)}}

    In your TensorFlow model code, you must name the aliases for your input and output tensors so that they end with '_bytes'.

I would like to understand more about how this process works on the Google Cloud side. Is the ml-engine automatically decoding any content after the "b64" …
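
For concreteness, a hedged sketch of the client-side encoding described in the quoted docs. One detail the quoted snippet glosses over: json.dumps needs a string, so the base64 bytes are decoded to ASCII first (the input file name is hypothetical):

    import base64
    import json

    with open("image.jpg", "rb") as f:  # hypothetical input file
        jpeg_data = f.read()

    # The outer key must match an input alias in the model that ends with
    # "_bytes"; the inner "b64" key is the signal for the service to
    # base64-decode the value before feeding it to the graph as bytes.
    instance = {"image_bytes": {"b64": base64.b64encode(jpeg_data).decode("ascii")}}
    request_body = json.dumps({"instances": [instance]})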

Tensorflow Cross Device Communication

孤者浪人 submitted on 2019-11-29 05:20:21
Question: As the TensorFlow paper states, TensorFlow's cross-device communication is achieved by adding a "receive node" and a "send node" to the devices. From my understanding, a device (please consider only the case where CPU devices are involved) is responsible for performing the computation of an operation, while the data (e.g. a tensor produced by an operation, or a variable's buffer) resides in memory. I don't know how the data transfer from one device to another is achieved physically. I guess the data transfer is …
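
To make the setting concrete, here is a TF1-style sketch of a graph whose single edge crosses a device boundary; the send/receive pair is inserted by the runtime during graph partitioning, not by user code (two CPU devices are requested to match the CPU-only case in the question):

    import tensorflow as tf  # TF1-style API

    # Ask the runtime for two CPU devices.
    config = tf.ConfigProto(device_count={"CPU": 2})

    with tf.device("/cpu:0"):
        a = tf.constant([[1.0, 2.0]])
    with tf.device("/cpu:1"):
        # The edge a -> matmul crosses devices, so the partitioner replaces
        # it with a Send node on /cpu:0 and a paired Recv node on /cpu:1.
        # For CPU<->CPU, both tensors live in host memory, so the "transfer"
        # is a local memory copy (or no copy at all).
        b = tf.matmul(a, tf.constant([[3.0], [4.0]]))

    with tf.Session(config=config) as sess:
        print(sess.run(b))  # [[11.]]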

TensorFlow Serving: Update model_config (add additional models) at runtime

孤街醉人 submitted on 2019-11-29 02:44:02
I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model. If the model being requested has not yet been served, it is downloaded from a remote URL to the folder where the server's models are located (the client does this). At this point I need to update the model_config and trigger the server to reload it. This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), …
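
Based on the model_service.proto linked above, the reload can be requested over gRPC via HandleReloadConfigRequest. A hedged sketch (module paths are from the tensorflow-serving-api pip package; the gRPC port is an assumption). Note the RPC takes a complete replacement ModelServerConfig, not a delta, so the client must send every model that should remain served:

    import grpc
    from tensorflow_serving.apis import model_management_pb2, model_service_pb2_grpc
    from tensorflow_serving.config import model_server_config_pb2

    channel = grpc.insecure_channel("localhost:8500")  # assumed gRPC port
    stub = model_service_pb2_grpc.ModelServiceStub(channel)

    # Build the full replacement config, including the newly downloaded model.
    config = model_server_config_pb2.ModelServerConfig()
    entry = config.model_config_list.config.add()
    entry.name = "new_model"               # hypothetical model name
    entry.base_path = "/models/new_model"  # hypothetical path on the server
    entry.model_platform = "tensorflow"

    request = model_management_pb2.ReloadConfigRequest(config=config)
    response = stub.HandleReloadConfigRequest(request, 10.0)
    print(response.status)  # error_code 0 means the reload was accepted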

How to serve retrained Inception model using Tensorflow Serving?

泄露秘密 submitted on 2019-11-28 22:04:51
So I have trained the Inception model to recognize flowers according to this guide: https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html

    bazel build tensorflow/examples/image_retraining:retrain
    bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos

To classify an image via the command line, I can do this:

    bazel build tensorflow/examples/label_image:label_image && \
    bazel-bin/tensorflow/examples/label_image/label_image \
    --graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
    --output_layer=final_result \
    --image=$HOME/flower_photos/daisy …
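
As a pointer toward serving this: retrain produces a frozen GraphDef (/tmp/output_graph.pb), while TensorFlow Serving loads an exported model, so the retrained graph has to be wrapped in an export first. A hedged TF1-style sketch (the tensor names DecodeJpeg/contents:0 and final_result:0 follow the retraining tutorial's defaults; the export path is an assumption):

    import tensorflow as tf  # TF1-style API

    # Load the frozen graph written by the retrain example.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("/tmp/output_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    with tf.Session() as sess:
        tf.import_graph_def(graph_def, name="")
        inp = sess.graph.get_tensor_by_name("DecodeJpeg/contents:0")
        out = sess.graph.get_tensor_by_name("final_result:0")

        # Export as a SavedModel under a numeric version directory, which is
        # the layout tensorflow_model_server expects to find.
        builder = tf.saved_model.builder.SavedModelBuilder("/tmp/flowers_export/1")
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={"images": inp}, outputs={"scores": out})
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={"serving_default": signature})
        builder.save()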