tensorflow-serving

In Tensorflow how to freeze saved model

Submitted by 主宰稳场 on 2019-12-05 18:06:11
This is probably a very basic question... but how do I convert checkpoint files into a single .pb file? My goal is to serve the model, probably from C++. These are the files that I'm trying to convert. As a side note, I'm using tflearn with tensorflow. Edit 1: I found an article that explains how to do this: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc The problem is that I'm stuck with the following error: KeyError: "The name 'Adam' refers to an Operation not in the graph." How do I fix this? Edit 2: Maybe this will shed some light on the
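
For reference, a minimal TF 1.x freezing sketch, assuming a checkpoint prefix of ./model.ckpt and an output node named 'output/Softmax' (both hypothetical, replace them with your own). The 'Adam' KeyError typically shows up when a training-only op name (such as the optimizer) is requested as an output node, so list only the inference outputs:

    import tensorflow as tf

    checkpoint_prefix = './model.ckpt'          # hypothetical checkpoint prefix
    output_node_names = ['output/Softmax']      # inference outputs only, no optimizer ops

    with tf.Session(graph=tf.Graph()) as sess:
        # clear_devices avoids device-placement errors when restoring on another machine
        saver = tf.train.import_meta_graph(checkpoint_prefix + '.meta', clear_devices=True)
        saver.restore(sess, checkpoint_prefix)
        # Bake the variables into constants, keeping only ops needed for the outputs
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names)
        with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
            f.write(frozen_graph_def.SerializeToString())

The resulting frozen_model.pb can then be read from C++ with ReadBinaryProto and run in a tensorflow::Session.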

Export a basic Tensorflow model to Google Cloud ML

Submitted by 微笑、不失礼 on 2019-12-05 14:24:33
I am trying to export my local tensorflow model to use it on Google Cloud ML and run predictions on it. I am following the tensorflow serving example with mnist data. There is quite a bit of difference in the way they have processed and used their input/output vectors, and it is not what you find in typical examples online. I am unsure how to set the parameters of my signatures:

    model_exporter.init(
        sess.graph.as_graph_def(),
        init_op = init_op,
        default_graph_signature = exporter.classification_signature(
            input_tensor = "**UNSURE**",
            scores_tensor = "**UNSURE**"),
        named_graph_signatures = {
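
An alternative, hedged sketch for the export itself: instead of the session_bundle exporter, write a SavedModel with an explicit serving signature, which is what Cloud ML Engine and TensorFlow Serving consume. The toy placeholder and layer below only stand in for the real model, and the tensor and key names are assumptions:

    import tensorflow as tf

    # Hypothetical toy model standing in for the real one.
    x = tf.placeholder(tf.float32, [None, 784], name='x')
    scores = tf.layers.dense(x, 10, name='scores')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())   # in practice: restore trained weights instead
        builder = tf.saved_model.builder.SavedModelBuilder('export/1')
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={'inputs': x}, outputs={'scores': scores})
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
        builder.save()

The signature keys ('inputs', 'scores') are what a prediction request later refers to, so the real input and score tensors only have to be mapped here once.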

Permanently Inject Constant into Tensorflow Graph for Inference

Submitted by 爱⌒轻易说出口 on 2019-12-05 04:51:30
Question: I train a model with a placeholder for is_training: is_training_ph = tf.placeholder(tf.bool). However, once training and validation are done, I would like to permanently inject a constant of False for this value and then "re-optimize" the graph (i.e. using optimize_for_inference). Is there something along the lines of freeze_graph that will do this? Answer 1: One possibility is to use the tf.import_graph_def() function and its input_map argument to rewrite the value of that tensor in the graph.
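
A hedged sketch of that input_map approach, assuming the placeholder was created with the name 'is_training' and that the graph has already been frozen to frozen_model.pb (both assumptions):

    import tensorflow as tf

    # Load the previously frozen GraphDef (path is hypothetical).
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as inference_graph:
        # Splice a constant False in place of the is_training placeholder.
        tf.import_graph_def(
            graph_def,
            input_map={'is_training:0': tf.constant(False)},
            name='')
    # inference_graph now has the constant baked in and can be re-serialized
    # or passed through optimize_for_inference.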

Input multiple files into Tensorflow dataset

Submitted by 吃可爱长大的小学妹 on 2019-12-05 02:56:40
Question: I have the following input_fn:

    def input_fn(filenames, batch_size):
        # Create a dataset containing the text lines.
        dataset = tf.data.TextLineDataset(filenames).skip(1)
        # Parse each line.
        dataset = dataset.map(_parse_line)
        # Shuffle, repeat, and batch the examples.
        dataset = dataset.shuffle(10000).repeat().batch(batch_size)
        # Return the dataset.
        return dataset

It works great if filenames=['file1.csv'] or filenames=['file2.csv']. It gives me an error if filenames=['file1.csv', 'file2.csv']. In
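
A common cause (and a hedged fix): .skip(1) only skips one line of the concatenated stream, so the second file's header row still reaches _parse_line. Reading each file separately and skipping its own header avoids that; _parse_line is assumed to be the function from the question:

    import tensorflow as tf

    def input_fn(filenames, batch_size):
        # Start from the list of file names, then read each file on its own
        # so that every file's header line is skipped, not just the first one.
        dataset = tf.data.Dataset.from_tensor_slices(filenames)
        dataset = dataset.flat_map(
            lambda filename: tf.data.TextLineDataset(filename).skip(1))
        dataset = dataset.map(_parse_line)
        dataset = dataset.shuffle(10000).repeat().batch(batch_size)
        return dataset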

Hot load of models into tensorflow serving container

Submitted by 安稳与你 on 2019-12-04 21:36:47
I know how to load a model into a container, and I also know that we can create a static config file, pass it to the tensorflow serving container when we run it, and later use one of the models inside that config file. But I want to know if there is any way to hot-load a completely new model (not a newer version of the previous model) into a running tensorflow serving container. What I mean is: we run the container with model-A and later we load model-B into the container and use it. Can we do this? If yes, how? You can. First you need to copy the new model files to model_base_path you
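
Once model-B's files are visible inside the running container (for example on the volume mounted at the model base path), one hedged way to make the server pick it up is the ModelService gRPC reload call from the tensorflow-serving-api package. Host, port, and model names below are assumptions, and the server generally needs to have been started with a model config file for config reloads to be accepted:

    import grpc
    from tensorflow_serving.apis import model_management_pb2, model_service_pb2_grpc
    from tensorflow_serving.config import model_server_config_pb2

    channel = grpc.insecure_channel('localhost:8500')   # hypothetical gRPC endpoint
    stub = model_service_pb2_grpc.ModelServiceStub(channel)

    # Build a config that lists both the old and the new model.
    config = model_server_config_pb2.ModelServerConfig()
    for name, base_path in [('model-A', '/models/model-A'),
                            ('model-B', '/models/model-B')]:
        entry = config.model_config_list.config.add()
        entry.name = name
        entry.base_path = base_path
        entry.model_platform = 'tensorflow'

    request = model_management_pb2.ReloadConfigRequest()
    request.config.CopyFrom(config)
    print(stub.HandleReloadConfigRequest(request))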

Loading sklearn model in Java. Model created with DNNClassifier in python

Submitted by 99封情书 on 2019-12-04 12:26:14
Question: The goal is to open in Java a model created/trained in python with tensorflow.contrib.learn.DNNClassifier. At the moment the main issue is to know the name of the "tensor" to give in Java to the session runner method. I have this test code in python:

    from __future__ import division, print_function, absolute_import
    import tensorflow as tf
    import pandas as pd
    import tensorflow.contrib.learn as learn
    import numpy as np
    from sklearn import metrics
    from sklearn.cross_validation import
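
One hedged way to discover the names to feed and fetch from Java is to load the exported SavedModel back in Python and print its signature; the export directory below is hypothetical, and the printed tensor names are the ones to use with the Java SavedModelBundle session runner:

    import tensorflow as tf

    export_dir = 'exported_model/1'   # hypothetical path to the SavedModel export

    with tf.Session(graph=tf.Graph()) as sess:
        meta_graph = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], export_dir)
        for sig_name, sig in meta_graph.signature_def.items():
            print('signature:', sig_name)
            for key, tensor_info in sig.inputs.items():
                print('  input ', key, '->', tensor_info.name)
            for key, tensor_info in sig.outputs.items():
                print('  output', key, '->', tensor_info.name)

The saved_model_cli show --dir <export_dir> --all command prints the same information without writing any code.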

Tensorflow Serving: Rest API returns “Malformed request” error

Submitted by 南楼画角 on 2019-12-04 12:03:34
The Tensorflow Serving server (run with docker) responds to my GET (and POST) requests with this:

    { "error": "Malformed request: POST /v1/models/saved_model/" }

Precisely the same problem was already reported but never solved (supposedly, this is a StackOverflow kind of question, not a GitHub issue): https://github.com/tensorflow/serving/issues/1085 https://github.com/tensorflow/serving/issues/1095 Any ideas? Thank you very much. I verified that this does not work pre-v12 and does indeed work post-v12.

    > docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL
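
For comparison, a hedged request sketch in Python against the REST port mapped above (model name, port, and input shape are assumptions). Note that the path in the error message has no :predict verb; a URL that does not match /v1/models/<name>:predict is itself reported as a malformed request:

    import json
    import requests

    # Hypothetical model name and input row; 9009 is the host port mapped to 8501 above.
    url = 'http://localhost:9009/v1/models/saved_model:predict'
    payload = {'instances': [[1.0, 2.0, 3.0, 4.0]]}

    response = requests.post(url, data=json.dumps(payload))
    print(response.status_code, response.text)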

Serving multiple tensorflow models using docker

Submitted by 冷暖自知 on 2019-12-04 07:25:40
Having seen this github issue and this stackoverflow post I had hoped this would simply work. It seems as though passing in the environment variable MODEL_CONFIG_FILE has no effect. I am running this through docker-compose, but I get the same issue using docker run. The error:

    I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
    I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating models.
    I tensorflow_serving/model_servers/server_core.cc:558] (Re-)adding model: model
    E tensorflow
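
A hedged sketch of the config file the --model_config_file flag expects; passing that flag in the container's command (rather than relying on the MODEL_CONFIG_FILE environment variable, which the stock entrypoint appears not to read) is the usual workaround. The model names and paths here are hypothetical:

    model_config_list {
      config {
        name: "model_a"
        base_path: "/models/model_a"
        model_platform: "tensorflow"
      }
      config {
        name: "model_b"
        base_path: "/models/model_b"
        model_platform: "tensorflow"
      }
    }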

At what stage is a tensorflow graph set up?

Submitted by 耗尽温柔 on 2019-12-04 03:32:48
Question: An optimizer typically runs the same computation graph for many steps until convergence. Does tensorflow set up the graph at the beginning and reuse it for every step? What if I change the batch size during training? What if I make some minor change to the graph, like changing the loss function? What if I make some major change to the graph? Does tensorflow pre-generate all possible graphs? Does tensorflow know how to optimize the entire computation when the graph changes? Answer 1: As keveman says,
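
A small sketch of the distinction (toy shapes, hypothetical): the Python calls build the graph once, sess.run merely executes it, and a batch dimension declared as None lets the batch size change between steps without rebuilding anything; adding a different loss later adds new ops to the same graph rather than regenerating it:

    import numpy as np
    import tensorflow as tf

    # Graph construction happens here, once.
    x = tf.placeholder(tf.float32, [None, 3])   # None: batch size fixed only at run time
    w = tf.Variable(tf.ones([3, 1]))
    y = tf.matmul(x, w)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # The same graph is reused for every step, with different batch sizes.
        print(sess.run(y, feed_dict={x: np.ones((2, 3))}).shape)   # (2, 1)
        print(sess.run(y, feed_dict={x: np.ones((5, 3))}).shape)   # (5, 1)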

How to keep lookup tables initialized for prediction (and not just training)?

Submitted by 為{幸葍}努か on 2019-12-03 23:32:39
I create a lookup table from tf.contrib.lookup, using the training data (as input). Then, I pass every input through that lookup table before passing it through my model. This works for training, but when it comes to online prediction from this same model, it raises the error: Table not initialized. I'm using SavedModel to save the model, and I run the prediction from this saved model. How can I initialize this table so that it stays initialized? Or is there a better way to save the model so that the table is always initialized? You can specify an "initialization" operation when you add a meta
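
A hedged sketch of that approach with SavedModelBuilder in TF 1.x: passing the table initializer as the meta graph's main_op (older releases used the legacy_init_op argument) makes TensorFlow Serving run it when the model is loaded, so the table no longer comes up uninitialized. The toy table and tensor names below are stand-ins:

    import tensorflow as tf

    # Toy stand-in for the real tf.contrib.lookup table built from training data.
    table = tf.contrib.lookup.HashTable(
        tf.contrib.lookup.KeyValueTensorInitializer(
            tf.constant(['a', 'b', 'c']),
            tf.constant([0, 1, 2], dtype=tf.int64)),
        default_value=-1)

    inp = tf.placeholder(tf.string, [None], name='inp')
    scale = tf.Variable(1, dtype=tf.int64)          # toy variable so the export has weights
    ids = table.lookup(inp) * scale

    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        builder = tf.saved_model.builder.SavedModelBuilder('export_with_table/1')
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    tf.saved_model.signature_def_utils.predict_signature_def(
                        inputs={'inp': inp}, outputs={'ids': ids})},
            main_op=tf.tables_initializer())        # runs at load time, initializing the table
        builder.save()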