tensorflow-serving

Generate instances or inputs for TensorFlow Serving REST API

Submitted by 醉酒当歌 on 2020-06-27 10:10:58
Question: I'm ready to try out my TensorFlow Serving REST API based on a saved model, and I was wondering whether there is an easy way to generate the JSON instances (row-based) or inputs (columnar) that I need to send with my request. My model has several thousand features, and I would hate to type the JSON by hand. Is there a way to use existing data to produce serialized data I can throw at the predict API? I'm using TFX for the entire pipeline (incl. tf.Transform), so I'm not sure if there is a …
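A minimal sketch of one way to do this (file name, model name, and endpoint below are placeholders, not from the post): pandas can serialize existing rows straight into the row-based "instances" format, or columns into the columnar "inputs" format.

    import json
    import pandas as pd
    import requests

    # Hypothetical file holding rows with the same feature columns the model expects.
    df = pd.read_csv("test_data.csv")

    # Row-based format: {"instances": [{feature: value, ...}, ...]}
    payload = {"instances": df.head(3).to_dict(orient="records")}

    # Columnar format: {"inputs": {feature: [values, ...], ...}}
    # payload = {"inputs": {col: df[col].head(3).tolist() for col in df.columns}}

    # Hypothetical endpoint; substitute the served model's name.
    resp = requests.post("http://localhost:8501/v1/models/my_model:predict",
                         data=json.dumps(payload))
    print(resp.json())

One caveat: if the TFX export's signature takes serialized tf.Example protos rather than raw feature dicts, each instance instead needs to be wrapped as a base64 object ({"b64": "..."}).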

How to deploy TensorFlow Serving using Docker and DigitalOcean Spaces

Submitted by 本秂侑毒 on 2020-06-17 10:28:23
Question: How do you configure TensorFlow Serving to use files stored in DigitalOcean Spaces? It's important that the solution:

- provides access to both the configuration and model files
- provides non-public access to the data

I have configured a bucket named your_bucket_name in DigitalOcean Spaces with the following structure:

- your_bucket_name
  - config
    - batching_parameters.txt
    - monitoring_config.txt
    - models.config
  - models
    - model_1
      - version_1.1
        - variables
          - variables.data-00000-of-00001
          - …
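A hedged sketch of one direction (untested here; the region endpoint and key names are placeholders): Spaces speaks the S3 protocol, so TensorFlow Serving's built-in S3 filesystem support can be pointed at it through environment variables, which keeps the bucket private.

    docker run -d -p 8500:8500 -p 8501:8501 \
      -e AWS_ACCESS_KEY_ID=<spaces_access_key> \
      -e AWS_SECRET_ACCESS_KEY=<spaces_secret_key> \
      -e S3_ENDPOINT=nyc3.digitaloceanspaces.com \
      -e S3_USE_HTTPS=1 \
      -e S3_VERIFY_SSL=1 \
      tensorflow/serving \
      --model_config_file=s3://your_bucket_name/config/models.config \
      --monitoring_config_file=s3://your_bucket_name/config/monitoring_config.txt \
      --enable_batching \
      --batching_parameters_file=s3://your_bucket_name/config/batching_parameters.txt

The base paths inside models.config would likewise use s3://your_bucket_name/models/... so the server reads model versions from the same bucket.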

Performing inference with a BERT (TF 1.x) saved model

Submitted by ≡放荡痞女 on 2020-05-30 07:58:45
Question: I'm stuck on one line of code and have been stalled on a project all weekend as a result. I am working on a project that uses BERT for sentence classification. I have successfully trained the model, and I can test the results using the example code from run_classifier.py. I can export the model using this example code (which has been reposted repeatedly, so I believe it's right for this model):

    def export(self):
        def serving_input_fn():
            label_ids = tf.placeholder(tf.int32, [None], name= …
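For context, a hedged reconstruction of the commonly reposted TF 1.x export pattern the truncated snippet appears to follow (MAX_SEQ_LENGTH, estimator, and EXPORT_DIR are assumed to exist in the surrounding training script):

    def serving_input_fn():
        # Placeholders matching the four features run_classifier.py feeds the model.
        label_ids = tf.placeholder(tf.int32, [None], name='label_ids')
        input_ids = tf.placeholder(tf.int32, [None, MAX_SEQ_LENGTH], name='input_ids')
        input_mask = tf.placeholder(tf.int32, [None, MAX_SEQ_LENGTH], name='input_mask')
        segment_ids = tf.placeholder(tf.int32, [None, MAX_SEQ_LENGTH], name='segment_ids')
        return tf.estimator.export.build_raw_serving_input_receiver_fn({
            'label_ids': label_ids,
            'input_ids': input_ids,
            'input_mask': input_mask,
            'segment_ids': segment_ids,
        })()

    estimator._export_to_tpu = False  # TPU-built estimators refuse to export otherwise
    estimator.export_savedmodel(EXPORT_DIR, serving_input_fn)

Inference then feeds the same four keys to the loaded SavedModel's predict function.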

How to include normalization of features in Keras regression model?

Submitted by 本秂侑毒 on 2020-05-09 06:35:06
Question: I have data for a regression task. The independent features (X_train) are scaled with a standard scaler. I built a Keras sequential model with hidden layers, compiled it, and fitted it with model.fit(X_train_scaled, y_train). Then I saved the model to an .hdf5 file. Now, how do I include the scaling step inside the saved model, so that the same scaling parameters are applied to unseen test data?

    # imported all the libraries for training and evaluating the model
    X_train, X …
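A minimal sketch of one answer (assuming a fitted sklearn StandardScaler named scaler and tf.keras; the layer sizes are invented): bake the scaler's mean and scale into the model as its first layer, so the saved file normalizes raw inputs itself.

    import tensorflow as tf

    mean = scaler.mean_.astype("float32")    # parameters from the fitted StandardScaler
    scale = scaler.scale_.astype("float32")

    inputs = tf.keras.Input(shape=(mean.shape[0],))
    x = tf.keras.layers.Lambda(lambda t: (t - mean) / scale)(inputs)  # standardize in-graph
    x = tf.keras.layers.Dense(64, activation="relu")(x)               # hypothetical hidden layer
    outputs = tf.keras.layers.Dense(1)(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_train, y_train)              # raw features now, not X_train_scaled
    model.save("model_with_scaling.hdf5")

On newer TF 2.x releases, tf.keras.layers.Normalization(mean=..., variance=...) is a more robust alternative, since Lambda layers can be fragile to reload.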

Saving and doing Inference with Tensorflow BERT model

Submitted by 大城市里の小女人 on 2020-02-25 08:21:50
Question: I have created a binary classifier with the Tensorflow BERT language model. Here is the link. After training, the model is saved, producing the following files.

Prediction code:

    from tensorflow.contrib import predictor
    # MODEL_FILE = 'graph.pbtxt'
    with tf.Session() as sess:
        predict_fn = predictor.from_saved_model(f'/content/drive/My Drive/binary_class/bert/graph.pbtxt')
        predictions = predict_fn(pred_sentences)
        print(predictions)

Error:

    OSError: SavedModel file does not exist at: …
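The likely cause, sketched below with a placeholder directory name: predictor.from_saved_model expects the export directory that contains saved_model.pb and a variables/ subfolder, not a path to graph.pbtxt.

    from tensorflow.contrib import predictor

    # The timestamped export folder (name here is hypothetical) holds saved_model.pb.
    export_dir = '/content/drive/My Drive/binary_class/bert/1585000000'
    predict_fn = predictor.from_saved_model(export_dir)

    # The returned Predictor is called with a dict keyed by the signature's
    # input names; pred_sentences must be shaped accordingly.
    predictions = predict_fn(pred_sentences)
    print(predictions)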

How to setup tfserving with inception/mobilenet model for image classification?

Submitted by China☆狼群 on 2020-01-24 20:22:21
Question: I'm unable to find proper documentation for serving the Inception or MobileNet models and writing a gRPC client that connects to the server and performs image classification. So far, I've only managed to configure the tfserving image on CPU; I am unable to run it on my GPU. When I make a gRPC client request, the request fails with this error:

    grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
        status = StatusCode.INVALID_ARGUMENT
        details = "Expects arg[0] to be …
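For reference, a minimal gRPC client sketch (the model name, input key, and image shape are assumptions, not from the post): INVALID_ARGUMENT on arg[0] usually means the request's input name or dtype/shape does not match the exported signature, which saved_model_cli show --all can reveal.

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "mobilenet"              # must match the served model name
    request.model_spec.signature_name = "serving_default"

    # Placeholder image; a real client would decode and preprocess a JPEG here.
    image = np.zeros((1, 224, 224, 3), dtype=np.float32)
    # "input" must match the signature's input tensor key.
    request.inputs["input"].CopyFrom(tf.make_tensor_proto(image, shape=image.shape))

    result = stub.Predict(request, 10.0)  # 10-second deadline
    print(result.outputs)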

indexing in tensorflow slower than gather

Submitted by 房东的猫 on 2020-01-24 11:05:26
Question: I am trying to index into a tensor to get a slice or a single element from 1-D tensors. I find a significant performance difference (almost 30-40%) between the NumPy style of indexing with [:] and slices versus tf.gather. I also observe that tf.gather has significant overhead when used on scalars (looping over an unstacked tensor) as opposed to a tensor. Is this a known issue?

Example code (inefficient):

    for node_idxs in graph.nodes():
        node_indice_list = tf.unstack(node_idxs)
        result = []
        for …
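A hedged sketch of the vectorized alternative (tensor values are invented): a single batched tf.gather over the whole index tensor avoids the per-scalar overhead of unstacking and gathering inside a Python loop.

    import tensorflow as tf

    params = tf.random.uniform([10000])            # hypothetical 1-D data tensor
    node_idxs = tf.constant([5, 17, 42, 9000])     # hypothetical node indices

    # Slow pattern from the question: gather one scalar at a time.
    # result = [tf.gather(params, i) for i in tf.unstack(node_idxs)]

    # Fast pattern: one batched gather over all indices at once.
    result = tf.gather(params, node_idxs)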