tensorflow-serving

Cloud ML Engine batch predictions - How to simply match returned predictions with input data?

Submitted by 我是研究僧i on 2019-12-24 07:36:38
Question: According to the ML Engine documentation, an instance key is required to match the returned predictions with the input data. For simplicity I would like to use a DNNClassifier, but canned estimators apparently don't support instance keys yet (only custom or TensorFlow core estimators do). So I looked at the Census code examples of custom/TensorFlow-core Estimators, but they look quite complex for what I am trying to achieve. I would prefer using a similar approach as described in
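A lightweight way to see what the instance key buys you: if every input record carries a unique key that the model forwards untouched to its output, the (possibly reordered) batch-prediction results can be joined back onto the inputs afterwards. A minimal pure-Python sketch of that join (the field names "key" and "class" are placeholders, not ML Engine's actual output schema):

```python
def match_predictions(inputs, predictions):
    """Join prediction rows back onto input rows via a shared 'key' field.

    Batch prediction may return results in a different order than the
    inputs were submitted, so index the predictions by key first.
    """
    by_key = {p["key"]: p for p in predictions}
    return [dict(record, prediction=by_key[record["key"]]["class"])
            for record in inputs]

inputs = [{"key": 1, "age": 30}, {"key": 2, "age": 45}]
# Results come back out of order, identified only by the forwarded key.
predictions = [{"key": 2, "class": "<=50K"}, {"key": 1, "class": ">50K"}]
matched = match_predictions(inputs, predictions)
```

This is exactly why the key must pass through the model unchanged: it is the only field shared by both sides of the join.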

TensorFlow Serving multiple models via docker

Submitted by 蓝咒 on 2019-12-24 01:46:21
Question: I am unable to run two or more models via TensorFlow Serving in Docker on a Windows 10 machine. I have made a models.config file:

    model_config_list: {
      config: {
        name: "ukpred2",
        base_path: "/models/my_models/ukpred2",
        model_platform: "tensorflow"
      },
      config: {
        name: "model3",
        base_path: "/models/my_models/ukpred3",
        model_platform: "tensorflow"
      }
    }

    docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models --mount type=bind,source=C:\Users
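For reference, a config file like this is normally handed to the container with the --model_config_file flag; without that flag, tensorflow_model_server only serves a single default model. A sketch of what the full Windows command might look like (the source path for models.config is an assumption, since the original command is cut off):

```shell
docker run -p 8501:8501 ^
  --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models ^
  --mount type=bind,source=C:\Users\th3182\Documents\temp\models.config,target=/models/models.config ^
  -t tensorflow/serving --model_config_file=/models/models.config
```

Note the base_path values inside models.config must refer to the paths as seen inside the container (/models/my_models/...), not the Windows host paths.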

TensorFlow ExportOutputs, PredictOutput, and specifying signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY

Submitted by 岁酱吖の on 2019-12-23 14:40:54
Question: Context: I have a colab with a very simple demo Estimator for the purpose of learning/understanding the Estimator API, with the goal of making a convention for a plug-and-play model that keeps the useful bells and whistles of the trade intact (e.g. early stopping if the validation set stops improving, exporting the model, etc.). Each of the three Estimator modes (TRAIN, EVAL, and PREDICT) returns an EstimatorSpec. According to the docs: __new__( cls, mode, predictions=None, # required by PREDICT
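A hedged sketch (not the author's colab) of how PREDICT mode typically wires up export_outputs: the dict key signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, whose value is the literal string "serving_default", is what TensorFlow Serving looks up when a client omits the signature name:

```python
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants

# Hypothetical skeleton showing only where export_outputs goes.
def model_fn(features, labels, mode):
    predictions = {"scores": features["x"]}  # placeholder "prediction"
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions=predictions,
            export_outputs={
                # The signature served when the client gives no
                # signature_name; the constant equals "serving_default".
                signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    tf.estimator.export.PredictOutput(predictions),
            })
    raise NotImplementedError("TRAIN/EVAL omitted in this sketch")
```

You can register additional named signatures alongside the default one by adding more entries to the export_outputs dict.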

tensorflow inference graph performance optimization

Submitted by 和自甴很熟 on 2019-12-23 12:11:59
Question: I am trying to understand some surprising results I see when implementing a TF graph. The graph I am working with is just a forest (a bunch of trees); it is a plain forward-inference graph, nothing related to training. I am sharing snippets for two implementations. Code snippet 1:

    with tf.name_scope("main"):
        def get_tree_output(offset):
            loop_vars = (offset,)
            leaf_indice = tf.while_loop(cond, body, loop_vars, back_prop=False, parallel_iterations=1, name="while_loop")
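For readers unfamiliar with the pattern the snippet uses, here is a minimal self-contained tf.while_loop with back_prop=False (an invented countdown loop under assumed shapes, not the question's tree traversal):

```python
import tensorflow as tf

# Count i down to zero. back_prop=False tells TF not to record the loop
# for gradient computation, which is appropriate for inference-only graphs.
i0 = tf.constant(5)
(final_i,) = tf.while_loop(
    cond=lambda i: i > 0,          # keep looping while i is positive
    body=lambda i: (i - 1,),       # must return the same structure as loop_vars
    loop_vars=(i0,),
    back_prop=False,
    parallel_iterations=1)
```

parallel_iterations=1 forces strictly sequential iterations, which is one of the knobs the question's snippet sets and a common factor in while_loop performance.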

AttributeError: module 'tensorflow' has no attribute 'gfile'

Submitted by 笑着哭i on 2019-12-23 03:33:15
Question: I trained a simple MNIST model with TensorFlow 2.0 on Google Colab and saved it in the .json format. Click here to check out the Colab notebook where I've written the code. Then, on running the command !simple_tensorflow_serving --model_base_path="/" --model_platform="tensorflow" it shows the error AttributeError: module 'tensorflow' has no attribute 'gfile'. simple_tensorflow_serving helps in easily deploying a trained TensorFlow model into production. Versions I'm using: (1) TensorFlow - 2
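The usual cause of this error: in TensorFlow 2.x the gfile module moved, so tools written against the 1.x API fail with this AttributeError. A common workaround, assuming you cannot patch simple_tensorflow_serving itself, is to alias the module back before the tool imports it:

```python
import tensorflow as tf

# TF 2.x keeps the old file-I/O API at tf.io.gfile (and tf.compat.v1.gfile).
# Re-exposing it as tf.gfile satisfies code written for the TF 1.x layout.
if not hasattr(tf, "gfile"):
    tf.gfile = tf.io.gfile
```

Alternatively, pinning simple_tensorflow_serving to a release that supports TF 2.x, or downgrading to TF 1.x, avoids the shim entirely.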

How to add a new model in tensorflow serving

Submitted by 荒凉一梦 on 2019-12-23 02:32:27
Question: I know more than one model can run at a time, which you can specify in a config file, as explained here. In my case, I want to start the server with model_A, model_B and model_C, and in the future add a new arbitrary model_D without restarting the server (since I don't want to interrupt the service for models A, B and C). Is there a way to achieve this? Answer 1: The following commit adds this functionality (and should be in TensorFlow Serving 1.7.0 and higher): https:/
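In more recent TensorFlow Serving releases this can also be driven from the command line: the server can periodically re-read its config file, so appending a new model stanza for model_D is picked up without a restart. A sketch of the relevant flags (check that your release supports --model_config_file_poll_wait_seconds; paths are placeholders):

```shell
tensorflow_model_server \
  --port=8500 --rest_api_port=8501 \
  --model_config_file=/models/models.config \
  --model_config_file_poll_wait_seconds=60
```

With polling enabled, models added to (or removed from) models.config are loaded or unloaded on the next poll while the already-serving models keep answering requests.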

Tensorflow predict grpc not working but RESTful API working fine

Submitted by 别来无恙 on 2019-12-22 18:20:36
Question: When I try to execute the client code below I get an error, but calling via the RESTful API endpoint succeeds: curl -d '{"signature_name":"predict_output","instances":[2.0,9.27]}' -X POST http://10.110.110.13:8501/v1/models/firstmodel:predict Could you please correct the code below?

    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2
    from tensorflow_serving.apis import prediction_service_pb2_grpc
    import numpy as np
    import grpc
    server = '10.110.110.13:8501'
