tensorflow-serving

No variable to save error in Tensorflow

Submitted by 点点圈 on 2019-11-28 20:12:48
Question: I am trying to save the model and then reuse it for classifying my images, but unfortunately I am getting errors when restoring the model that I have saved. The code in which the model was created: # Deep Learning # ============= # # Assignment 4 # ------------ # In[25]: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as
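As a general pattern (not the asker's code), the "No variables to save" error is usually raised when tf.train.Saver() is constructed in a graph that contains no variables yet, for example in a fresh restore script that has not rebuilt the model graph. A minimal TF 1.x sketch of the save/restore flow, with hypothetical variable names and a placeholder checkpoint path:

```python
import tensorflow as tf

# --- training script: build the graph first, then create the Saver ---
w = tf.Variable(tf.zeros([784, 10]), name="weights")
b = tf.Variable(tf.zeros([10]), name="bias")
saver = tf.train.Saver()          # must be created AFTER the variables exist

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")

# --- restore script: recreate (or import) the same graph before restoring ---
tf.reset_default_graph()
saver = tf.train.import_meta_graph("/tmp/model.ckpt.meta")  # rebuilds the saved graph
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")
```

Creating the Saver in a script where the graph is empty (or importing the checkpoint into the wrong graph) is the most common trigger for this error; importing the .meta file, or re-running the graph-building code, before calling restore avoids it.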

In Tensorflow, when serving a model, what exactly is the serving input function supposed to do?

Submitted by 孤人 on 2019-11-28 09:29:37
So, I've been struggling to understand what the main task of a serving_input_fn() is when a trained model is exported in Tensorflow for serving purposes. There are some examples online that explain it but I'm having problems defining it for myself. The problem I'm trying to solve is a regression problem where I have 29 inputs and one output. Is there a template for creating a corresponding serving input function for that? What if I use a one-class classification problem? Would my serving input function need to change or can I use the same function? And finally, do I always need serving input
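For illustration (this is not from the question itself), the serving input function's only job is to declare how incoming request tensors are mapped to the features the exported model expects; it does not run any training logic. For a regression model with 29 numeric inputs it might look like the TF 1.x sketch below, and a classifier could reuse the same shape as long as its feature columns match. The feature key "x" and the export path are assumptions:

```python
import tensorflow as tf

FEATURE_NAME = "x"   # assumed feature key; must match the model's feature columns

def serving_input_receiver_fn():
    # Raw tensors sent by the client at serve time (e.g. TF Serving's Predict API).
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 29], name=FEATURE_NAME)
    features = {FEATURE_NAME: inputs}
    return tf.estimator.export.ServingInputReceiver(features, inputs)

# estimator.export_savedmodel("/tmp/export", serving_input_receiver_fn)
```

An equivalent function can also be produced in one call with tf.estimator.export.build_raw_serving_input_receiver_fn({"x": tf.placeholder(tf.float32, [None, 29])}) when no request parsing beyond a raw tensor is needed.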

Tensorflow classifier.export_savedmodel (Beginner)

Submitted by 情到浓时终转凉″ on 2019-11-27 19:12:20
I know about the "Serving a Tensorflow Model" page https://www.tensorflow.org/serving/serving_basic but those functions assume you're using tf.Session(), which the DNNClassifier tutorial does not... I then looked at the API doc for DNNClassifier and it has an export_savedmodel function (the export function is deprecated) and it seems simple enough, but I am getting a "'NoneType' object is not iterable" error... which is supposed to mean I'm passing in an empty variable, but I'm unsure what I need to change... I've essentially copied and pasted the code from the get_started/tflearn page on
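A hedged sketch of what that export usually looks like with the estimator-style DNNClassifier (feature names, shapes and paths below are placeholders, not taken from the question). A common cause of the "'NoneType' object is not iterable" error is a serving input function that forgets to return its ServingInputReceiver:

```python
import tensorflow as tf

# Assumed Iris-style setup: 4 numeric features, 3 classes.
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                        hidden_units=[10, 20, 10],
                                        n_classes=3,
                                        model_dir="/tmp/iris_model")

def serving_input_receiver_fn():
    inputs = tf.placeholder(tf.float32, shape=[None, 4], name="x")
    # The receiver must be returned; falling through returns None and breaks export.
    return tf.estimator.export.ServingInputReceiver({"x": inputs}, inputs)

export_dir = classifier.export_savedmodel("/tmp/iris_export", serving_input_receiver_fn)
```

If the tutorial code uses the older tf.contrib.learn classifier instead, the export call expects a differently shaped serving input function, which is another frequent source of this error.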

Add Tensorflow pre-processing to existing Keras model (for use in Tensorflow Serving)

Submitted by 送分小仙女□ on 2019-11-27 18:36:40
I would like to include my custom pre-processing logic in my exported Keras model for use in Tensorflow Serving. My pre-processing performs string tokenization and uses an external dictionary to convert each token to an index for input to the Embedding layer: from keras.preprocessing import sequence token_to_idx_dict = ... #read from file # Custom Pythonic pre-processing steps on input_data tokens = [tokenize(s) for s in input_data] token_idxs = [[token_to_idx_dict[t] for t in ts] for ts in tokens] tokens_padded = sequence.pad_sequences(token_idxs, maxlen=maxlen) Model architecture and
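An illustrative TF 1.x sketch (not the asker's model) of one common way to move the dictionary lookup into the graph so TensorFlow Serving can execute it: replace the Python token_to_idx_dict with an in-graph vocabulary table. Tokenization here is plain whitespace splitting via tf.string_split; a more elaborate tokenizer would still have to be expressed with TF ops or done client-side. The vocabulary file path is an assumption:

```python
import tensorflow as tf

VOCAB_FILE = "/tmp/vocab.txt"   # assumed: one token per line, line number = index

raw_strings = tf.placeholder(tf.string, shape=[None], name="raw_strings")
tokens = tf.string_split(raw_strings)                              # SparseTensor of tokens
table = tf.contrib.lookup.index_table_from_file(VOCAB_FILE, default_value=0)
token_ids = table.lookup(tokens)                                   # SparseTensor of indices
dense_ids = tf.sparse_tensor_to_dense(token_ids, default_value=0)  # ragged rows zero-padded

# dense_ids can be fed into the Embedding layer's input tensor.
# Padding/truncating to a fixed maxlen (as sequence.pad_sequences does) is omitted here,
# and the exported SavedModel must run tf.tables_initializer() (e.g. via its main_op)
# so the lookup table is loaded at serving time.
```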

How to serve retrained Inception model using Tensorflow Serving?

Submitted by 南笙酒味 on 2019-11-27 13:32:26
Question: So I have trained the Inception model to recognize flowers according to this guide: https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html bazel build tensorflow/examples/image_retraining:retrain bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos To classify an image via the command line, I can do this: bazel build tensorflow/examples/label_image:label_image && \ bazel-bin/tensorflow/examples/label_image/label_image \ --graph=/tmp/output_graph.pb
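For context, once the retrained graph has been wrapped in a SavedModel (see the conversion sketch in the next question) and loaded by tensorflow_model_server, a request can be sent over gRPC roughly as below. This is a hedged example, not the tutorial's client: the model name, input key, host, port and image file are all assumptions that must match how the server and SavedModel were set up.

```python
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "inception"              # assumed --model_name used at server start
request.model_spec.signature_name = "serving_default"

with open("daisy.jpg", "rb") as f:                 # hypothetical test image
    image_bytes = f.read()
# "image" must match the input key declared in the SavedModel's signature.
request.inputs["image"].CopyFrom(
    tf.contrib.util.make_tensor_proto(image_bytes, shape=[1]))

response = stub.Predict(request, timeout=10.0)
print(response.outputs)
```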

Convert a graph proto (pb/pbtxt) to a SavedModel for use in TensorFlow Serving or Cloud ML Engine

Submitted by 随声附和 on 2019-11-27 13:23:17
I've been following the TensorFlow for Poets 2 codelab on a model I've trained, and have created a frozen, quantized graph with embedded weights. It's captured in a single file - say my_quant_graph.pb. Since I can use that graph for inference with the TensorFlow Android inference library just fine, I thought I could do the same with Cloud ML Engine, but it seems it only works with a SavedModel. How can I simply convert a frozen/quantized graph in a single pb file to use on ML Engine? It turns out that a SavedModel provides some extra info around a saved graph. Assuming a frozen graph
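A sketch of that conversion, assuming a frozen GraphDef my_quant_graph.pb whose input and output tensor names are known ("input:0" and "final_result:0" below are placeholders for the real names): the graph is imported into a session and wrapped in a SavedModel with a serving signature that Cloud ML Engine or TensorFlow Serving can load.

```python
import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants, tag_constants
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def

export_dir = "/tmp/saved_model/1"      # versioned export directory
graph_pb = "my_quant_graph.pb"

builder = saved_model_builder.SavedModelBuilder(export_dir)

with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    inp = g.get_tensor_by_name("input:0")           # replace with the real input tensor name
    out = g.get_tensor_by_name("final_result:0")    # replace with the real output tensor name

    sigs = {
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            predict_signature_def({"in": inp}, {"out": out})
    }
    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
```

Because the graph is frozen, there are no variables to attach; the builder simply records the graph plus the signature metadata that the serving systems expect.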

Is it thread-safe when using tf.Session in inference service?

Submitted by 蹲街弑〆低调 on 2019-11-27 08:43:31
Now we have used TensorFlow to train and export a model. We can implement the inference service with this model just as tensorflow/serving does. I have a question about whether the tf.Session object is thread-safe or not. If it is, we could initialize the object at startup and use the singleton to process concurrent requests. mrry: The tf.Session object is thread-safe for Session.run() calls from multiple threads. Before TensorFlow 0.10, graph modification was not thread-safe. This was fixed in the 0.10 release, so you can add nodes to the graph concurrently with Session
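A small sketch of the pattern the answer describes: build the graph once, create a single tf.Session, and call run() from many worker threads. The model, shapes and thread pool size here are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
    logits = tf.layers.dense(x, 3, name="logits")
    init_op = tf.global_variables_initializer()

sess = tf.Session(graph=graph)      # singleton session, shared by all requests
sess.run(init_op)

def handle_request(batch):
    # Session.run() is safe to call concurrently from multiple threads.
    return sess.run(logits, feed_dict={x: batch})

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, [np.random.rand(2, 4) for _ in range(16)]))
```

The graph itself is built (and initialized) once before serving starts; only run() calls happen on the request threads, which is the thread-safe part the answer refers to.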
