tensorflow-serving

Tensorflow Serving - Stateful LSTM

Submitted by 不问归期 on 2020-01-21 06:36:54
Question: Is there a canonical way to maintain a stateful LSTM, etc. with TensorFlow Serving? Using the TensorFlow API directly, this is straightforward, but I'm not certain how best to persist LSTM state between calls after exporting the model to Serving. Are there any examples that accomplish this? The samples within the repo are very basic. Answer 1: From Martin Wicke on the TF mailing list: "There's no good integration for stateful models in the model server yet. As you …
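Because the model server treats every request independently, a common workaround is to export the LSTM state as explicit signature inputs and outputs and have the client feed the returned state back into the next request. A minimal pure-Python sketch of that client-side state-threading pattern (the `serve_fn` stub and the tensor names `state_in`/`state_out` are assumptions for illustration, not the Serving API):

```python
# Sketch: client-side state threading against a stateless server.
# Assumes the exported signature accepts "state_in" and returns "state_out".

def predict_stateful(serve_fn, tokens, initial_state):
    """Run one input at a time, feeding each returned state into the next call."""
    state = initial_state
    outputs = []
    for tok in tokens:
        resp = serve_fn({"input": tok, "state_in": state})
        outputs.append(resp["output"])
        state = resp["state_out"]  # thread the state to the next request
    return outputs, state

# Toy stand-in for the remote predict call: a running-sum "RNN".
def fake_serve(request):
    new_state = request["state_in"] + request["input"]
    return {"output": new_state * 2, "state_out": new_state}

outs, final_state = predict_stateful(fake_serve, [1, 2, 3], 0)
```

The server stays stateless; only the client (or a session cache in front of the server) carries state between calls.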

TensorFlow Serving crossed columns strange error

Submitted by 核能气质少年 on 2020-01-16 09:13:07
Question: I am receiving the following error when trying to send a prediction request to my saved model, running with TensorFlow Serving: { "error": "Expected D2 of index to be 2 got 3 at position 0\n\t [[{{node linear/linear_model/linear_model/linear_model/int2Id_X_stringId/SparseCross}}]]" } The problem appears to come from trying to use crossed columns in a linear model. My model in service is a tf.estimator.LinearClassifier . My REST API request is a POST to 'model_directory/model:predict' with …
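The SparseCross node in the error expects rank-2 (batched) sparse input, so each raw feature in the REST body generally needs to be supplied as a list per instance. A sketch of a payload shape that keeps every feature batched (the feature names `int2Id` and `stringId` are read off the node name in the error; the values are hypothetical):

```python
import json

# Hypothetical instance payload for a LinearClassifier with a crossed column.
# Each instance supplies every raw feature as a list; the cross itself is
# computed server-side by the SparseCross op.
payload = {
    "instances": [
        {"int2Id": [42], "stringId": ["some-id"]},
    ]
}
body = json.dumps(payload)
```

If a feature is sent as a bare scalar instead of a list, the resulting sparse tensor can end up with an unexpected rank, which matches the "Expected D2 of index to be 2" complaint.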

What is the difference between variable_op_scope and variable_scope?

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-01-16 05:38:36
Question: In TensorFlow, there are two scope functions: variable_op_scope and variable_scope . The first has the following signature: variable_op_scope(values, name_or_scope, default_name, initializer, regularizer, caching_device, partitioner, reuse) What does the first parameter values mean? default_name is only used when name_or_scope is None , so why does this function need both parameters? One parameter should be enough. In general, what is the difference between these two scopes? Answer 1: …
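The interplay between `name_or_scope` and `default_name` can be modeled in plain Python (a toy model of the documented TF 1.x behavior, not the actual implementation): an explicit `name_or_scope` always wins, and `default_name` is only a fallback, which TF additionally uniquifies (e.g. to "conv_1") when the name is already taken:

```python
# Toy model of the name_or_scope / default_name resolution in TF 1.x scopes.
def resolve_scope_name(name_or_scope, default_name):
    # An explicit name (or scope object) always takes precedence.
    if name_or_scope is not None:
        return name_or_scope
    # Otherwise fall back to default_name; at least one must be provided.
    if default_name is None:
        raise ValueError("either name_or_scope or default_name is required")
    return default_name
```

Two parameters exist because they mean different things: `name_or_scope` pins an exact, reusable scope, while `default_name` asks for "a fresh scope roughly named like this" when the caller doesn't care about the exact name.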

Custom resource in Tensorflow

Submitted by [亡魂溺海] on 2020-01-15 08:09:06
Question: For certain reasons, I need to implement a custom resource for TensorFlow. I tried to get inspiration from the lookup-table implementations. If I understood correctly, I need to implement three TF operations: (1) creation of my resource; (2) initialization of the resource (e.g. populating the hash table in the lookup-table case); (3) implementation of the find/lookup/query step. To facilitate the implementation, I'm relying on tensorflow/core/framework/resource_op_kernel.h . I get the following error: [F tensorflow …
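The three-op lifecycle described above can be modeled in a few lines of plain Python (a toy stand-in for illustration only; the real implementation is written in C++ against resource_op_kernel.h):

```python
# Toy analogue of a TF lookup-table resource: create, initialize, then query.
class ToyLookupResource:
    def __init__(self):
        # Op 1: resource creation -- the resource exists but holds no data yet.
        self.table = None

    def initialize(self, mapping):
        # Op 2: initialization -- populate the table.
        self.table = dict(mapping)

    def find(self, key, default=-1):
        # Op 3: find/lookup/query.
        return self.table.get(key, default)
```

Keeping creation and initialization as separate steps mirrors the real kernels, where a resource handle can be created in the graph before the data that fills it is available.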

How do you create a dynamic_rnn with dynamic “zero_state” (Fails with Inference)

Submitted by 北战南征 on 2020-01-15 07:53:13
Question: I have been working with dynamic_rnn to create a model. The model is based on an 80-time-step signal, and I want to zero the initial_state before each run, so I set up the following code fragment to accomplish this: state = cell_L1.zero_state(self.BatchSize,Xinputs.dtype) outputs, outState = rnn.dynamic_rnn(cell_L1,Xinputs,initial_state=state, dtype=tf.float32) This works great for the training process. The problem is, once I go to inference, where my BatchSize = 1, I get an …
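The usual TF 1.x fix is to derive the batch size from the input tensor at run time, e.g. `cell_L1.zero_state(tf.shape(Xinputs)[0], Xinputs.dtype)`, rather than baking a fixed `self.BatchSize` into the graph. The idea, sketched in plain Python so the shapes can be checked without TensorFlow:

```python
# Sketch: build the zero state from the actual batch, not a hard-coded size.
def zero_state_for(batch_of_inputs, state_size):
    """Return a zero state whose leading dim matches the incoming batch."""
    batch = len(batch_of_inputs)  # analogous to tf.shape(Xinputs)[0]
    return [[0.0] * state_size for _ in range(batch)]

train_inputs = [[0.0] * 80 for _ in range(32)]  # training batch of 32 signals
infer_inputs = [[0.0] * 80]                     # inference batch of 1
```

Because `tf.shape` is evaluated per run, the same graph then serves any batch size at both training and inference time.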

How to properly serve an object detection model from Tensorflow Object Detection API?

Submitted by 情到浓时终转凉″ on 2020-01-13 02:44:12
Question: I am using the Tensorflow Object Detection API (github.com/tensorflow/models/tree/master/object_detection) for an object detection task. Right now I am having trouble serving the detection model I trained with TensorFlow Serving (tensorflow.github.io/serving/). 1. The first issue I am encountering is exporting the model to servable files. The Object Detection API kindly includes the export script, so I am able to convert ckpt files to pb files with variables. However, the output …

Inferencing with Tensorflow Serving using Java

Submitted by 删除回忆录丶 on 2020-01-05 03:33:31
Question: We are transitioning existing Java production code to use TensorFlow Serving (TFS) for inference. We have already retrained our models and saved them using the new SavedModel format (no more frozen graphs!). From the documentation I have read, TFS does not directly support Java. However, it does provide a gRPC interface, and that does provide a Java interface. My question: what are the steps involved in bringing up a Java application that uses TFS? [Edit: moved steps to a solution] Answer 1: …

InvalidArgumentError in restore: Assign requires shapes of both tensors to match

Submitted by 我的梦境 on 2020-01-04 07:11:12
Question: First, I would like to mention that I am new to TensorFlow. I am working on an OCR project using CTC (Connectionist Temporal Classification) and LSTM (Long Short-Term Memory). I have finished training, and when I try to restore the session I get an error that is commonly posted on the internet, but with differing analyses provided. The error is: 2018-01-10 13:42:43.179534: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), …
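"Assign requires shapes of both tensors to match" means a variable built in the restore-time graph has a different shape than the one stored in the checkpoint, commonly because the number of output classes or a layer size changed between training and inference. The diagnosis amounts to comparing the two shape maps; a toy illustration with hypothetical variable names and shapes:

```python
# Hypothetical shapes: the graph was rebuilt with 38 output classes,
# but the checkpoint was trained with 37.
graph_vars = {"lstm/kernel": (256, 1024), "logits/weights": (256, 38)}
ckpt_vars = {"lstm/kernel": (256, 1024), "logits/weights": (256, 37)}

# Any variable whose shape differs will trigger the Assign error on restore.
mismatches = {
    name: (graph_vars[name], ckpt_vars[name])
    for name in graph_vars
    if name in ckpt_vars and graph_vars[name] != ckpt_vars[name]
}
```

In practice the checkpoint side of this comparison comes from a checkpoint-inspection tool; once the offending variable is identified, the fix is to rebuild the inference graph with the same dimensions used during training.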

Use string as input in Keras IMDB example

Submitted by 不羁岁月 on 2020-01-04 05:51:30
Question: I was looking at the Keras IMDB movie-review sentiment-classification example (and the corresponding model on GitHub), which learns to decide whether a review is positive or negative. The data has been preprocessed so that each review is encoded as a sequence of integers, e.g. the review "This movie is awesome!" would be [11, 17, 6, 1187], and for this input the model outputs 'positive'. The dataset also makes available the word index used for encoding the sequences, i.e. I know …
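Going from a raw string to the integers the model expects requires reproducing the preprocessing `imdb.load_data` applies: by default indices are shifted by `index_from=3`, 1 marks the start of a sequence, and 2 is the out-of-vocabulary token. A sketch with a toy word index (the real one comes from `keras.datasets.imdb.get_word_index()`; the toy entries are chosen to reproduce the example sequence above, plus the start marker):

```python
# Encode a raw review the way imdb.load_data encodes its sequences:
# lowercase/tokenize, shift indices by index_from, map unknowns to oov_char.
def encode(review, word_index, index_from=3, start_char=1, oov_char=2):
    tokens = review.lower().replace("!", "").replace(".", "").split()
    seq = [start_char]
    for w in tokens:
        seq.append(word_index[w] + index_from if w in word_index else oov_char)
    return seq

# Toy index (hypothetical values) matching the question's example numbers.
toy_index = {"this": 8, "movie": 14, "is": 3, "awesome": 1184}
encode("This movie is awesome!", toy_index)  # [1, 11, 17, 6, 1187]
```

The same offset must be undone (subtract `index_from`, skip the reserved ids) when decoding integer sequences back into words.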