tensorflow-serving

Tensorflow: Layer size dependent on batch size?

旧城冷巷雨未停 posted on 2019-12-12 04:55:32
Question: I am currently trying to get familiar with the TensorFlow library, and I have a rather fundamental question that bugs me. While building a convolutional neural network for MNIST classification I tried to use my own model_fn, in which the following line usually occurs to reshape the input features:

    x = tf.reshape(x, shape=[-1, 28, 28, 1])

with the -1 referring to the input batch size. Since I use this node as input to my convolutional layer:

    x = tf.reshape(x, shape=[-1, 28, 28, 1])
    conv1 = tf…
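For context, a minimal sketch (TF 1.x style; the model_fn name and feature key are illustrative rather than taken from the question) of how the -1 keeps the batch dimension dynamic while the convolution weights stay independent of it:

    import tensorflow as tf

    def model_fn_sketch(features):
        # -1 lets TF infer the batch dimension at run time
        x = tf.reshape(features["x"], shape=[-1, 28, 28, 1])
        # the kernel weights are (5, 5, 1, 32) regardless of batch size
        conv1 = tf.layers.conv2d(x, filters=32, kernel_size=5,
                                 padding="same", activation=tf.nn.relu)
        return conv1  # shape (?, 28, 28, 32): still batch-size independent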

Loop in tensorflow

◇◆丶佛笑我妖孽 posted on 2019-12-11 18:12:42
Question: I changed my question to explain my issue better. I have a function, output_image = my_func(x), where x should have a shape like (1, 4, 4, 1). Please help me fix the error in this part:

    out = tf.Variable(tf.zeros([1, 4, 4, 3]))
    index = tf.constant(0)

    def condition(index):
        return tf.less(index, tf.subtract(tf.shape(x)[3], 1))

    def body(index):
        out[:, :, :, index].assign(my_func(x[:, :, :, index]))
        return tf.add(index, 1), out

    out = tf.while_loop(condition, body, [index])

ValueError: The two structures…
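The ValueError (truncated above) is about a cond/body structure mismatch: body returns two values while only [index] is passed in as a loop variable. A minimal sketch, not the accepted answer, of one way to make the structures line up, with my_func as a hypothetical stand-in and the output accumulated via tf.concat instead of sliced assignment:

    import tensorflow as tf

    x = tf.zeros([1, 4, 4, 3])  # illustrative input

    def my_func(channel):       # hypothetical stand-in for the asker's function
        return channel * 2.0

    def condition(index, out):
        return tf.less(index, tf.shape(x)[3])

    def body(index, out):
        channel = tf.expand_dims(x[:, :, :, index], axis=3)  # (1, 4, 4, 1), as my_func expects
        out = tf.concat([out, my_func(channel)], axis=3)     # accumulate instead of assigning
        return tf.add(index, 1), out

    _, result = tf.while_loop(
        condition, body,
        loop_vars=[tf.constant(0), tf.zeros([1, 4, 4, 0])],
        shape_invariants=[tf.TensorShape([]), tf.TensorShape([1, 4, 4, None])])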

Tensorflow Serving: InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got unknown format starting with 'AAAAAAAAAAAAAAAA'

混江龙づ霸主 posted on 2019-12-11 17:43:13
Question: I'm trying to prepare my custom Keras model for deployment with TensorFlow Serving, but I'm running into issues with preprocessing my images. When I train my model I use the following functions to preprocess my images:

    def process_image_from_tf_example(self, image_str_tensor, n_channels=3):
        image = tf.image.decode_image(image_str_tensor)
        image.set_shape([256, 256, n_channels])
        image = tf.cast(image, tf.float32) / 255.0
        return image

    def read_and_decode(self, serialized):
        parsed_example…
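The "unknown format starting with 'AAAA…'" error usually means base64 text reached tf.image.decode_image without being decoded back to raw bytes first. A hedged sketch of a serving input receiver that expects raw JPEG/PNG bytes (the feature key input_1 is an assumption, not the asker's real name); with the REST API, wrapping the payload as {"b64": ...} makes the front end do the base64 decoding before the graph sees the string:

    import tensorflow as tf

    def serving_input_receiver_fn():
        image_bytes = tf.placeholder(tf.string, shape=[None], name="image_bytes")

        def decode(img):
            image = tf.image.decode_image(img, channels=3)  # raw JPEG/PNG bytes expected here
            image.set_shape([256, 256, 3])
            return tf.cast(image, tf.float32) / 255.0

        images = tf.map_fn(decode, image_bytes, dtype=tf.float32)
        return tf.estimator.export.ServingInputReceiver(
            features={"input_1": images},
            receiver_tensors={"image_bytes": image_bytes})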

ExpirationError(code=StatusCode.DEADLINE_EXCEEDED, details=“Deadline Exceeded”)

▼魔方 西西 posted on 2019-12-11 07:08:12
Question: I am following the tutorial for deploying the Inception model using TensorFlow Serving. I am using Ubuntu 16.04 and Bazel 13.0. The server is running and I am able to ping it, but when I upload a picture it shows the following error:

    jennings@Jennings:~/serving$ bazel-bin/tensorflow_serving/example/inception_client --server=localhost:9000 --image=./Xiang_Xiang_panda.jpg
    Traceback (most recent call last):
      File "/home/jennings/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tf…
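Since the traceback is truncated, a hedged sketch of the usual remedy: the example client passes a fixed timeout to stub.Predict, and DEADLINE_EXCEEDED on an early request often just means the model is still loading, so raising the client-side deadline (or retrying) helps. The names below mirror the tutorial-era beta gRPC API and are illustrative:

    import tensorflow as tf
    from grpc.beta import implementations
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

    channel = implementations.insecure_channel("localhost", 9000)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "inception"
    with open("./Xiang_Xiang_panda.jpg", "rb") as f:
        request.inputs["images"].CopyFrom(
            tf.contrib.util.make_tensor_proto(f.read(), shape=[1]))

    result = stub.Predict(request, 60.0)  # deadline in seconds; the example client uses 10.0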

Tensorflow Java API set Placeholder for categorical columns

我的梦境 posted on 2019-12-11 05:13:40
Question: I want to run predictions from Java on a model trained with the Python TensorFlow API, but I have problems feeding the features to predict on in Java. My Python code looks like this:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

    import os
    from six.moves.urllib.request import urlopen

    import numpy as np
    import tensorflow as tf

    feature_names = [
        'Attribute1', 'Attribute2', 'Attribute3', 'Attribute4',
        'Attribute5', 'Attribute6', 'Attribute7',…
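One common route, sketched here on the Python side under the assumption that the model is an estimator built from feature columns: export it with a parsing serving input receiver, so the Java client only has to feed a single string placeholder (typically named input_example_tensor) holding serialized tf.Example protos, rather than one placeholder per categorical column. The column definitions are illustrative, not the asker's real ones:

    import tensorflow as tf

    feature_columns = [
        tf.feature_column.categorical_column_with_hash_bucket('Attribute1', hash_bucket_size=100),
        tf.feature_column.numeric_column('Attribute2'),
    ]

    # parse spec maps serialized tf.Example protos back to the named features
    feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
    serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
    # estimator.export_savedmodel("export_dir", serving_input_fn)  # estimator trained elsewhere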

Organizing tensor into batches of dynamically shaped tensors

半腔热情 posted on 2019-12-10 23:53:52
Question: I have the following situation: I want to deploy a face detector model using TensorFlow Serving (https://www.tensorflow.org/serving/). TensorFlow Serving has a command-line option called --enable_batching, which causes the model server to automatically batch requests to maximize throughput, and I want it enabled. My model takes in a set of images (called images), which is a tensor of shape (batch_size, 640, 480, 3). The model has two outputs: (number_of_faces, 4) and (number…
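Server-side batching needs every model output to keep batch_size as its first dimension so the batched response can be split back per request. A sketch of one way to get there, under the assumption that capping detections per image is acceptable; MAX_FACES is an invented bound, not something from the question:

    import tensorflow as tf

    MAX_FACES = 64  # illustrative cap

    def pad_detections(per_image_boxes):
        """Pad one image's (num_faces, 4) boxes to (MAX_FACES, 4).

        Stacked over the batch, this gives outputs whose first dimension is
        batch_size, which is the layout --enable_batching needs to split a
        batched response back into individual requests.
        """
        num_faces = tf.shape(per_image_boxes)[0]
        padded = tf.pad(per_image_boxes, [[0, MAX_FACES - num_faces], [0, 0]])
        padded.set_shape([MAX_FACES, 4])
        return padded, num_faces  # also return the true count per image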

TensorFlow: how to export estimator using TensorHub module?

扶醉桌前 posted on 2019-12-10 17:04:18
Question: I have an estimator using a TensorFlow Hub text_embedding column, like so:

    my_dataframe = pandas.DataFrame(columns=["title"])
    # populate data
    labels = []  # populate labels with 0|1

    embedded_text_feature_column = hub.text_embedding_column(
        key="title",
        module_spec="https://tfhub.dev/google/nnlm-en-dim128-with-normalization/1")

    estimator = tf.estimator.LinearClassifier(
        feature_columns=[embedded_text_feature_column],
        optimizer=tf.train.FtrlOptimizer(
            learning_rate=0.1,
            l1_regularization_strength…
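A hedged sketch of one way to export such an estimator: a raw serving input receiver that accepts the "title" strings directly (the key mirrors the question; the hub module is resolved inside the feature column, so nothing hub-specific is needed in the receiver):

    import tensorflow as tf

    def serving_input_fn():
        titles = tf.placeholder(dtype=tf.string, shape=[None], name="title")
        return tf.estimator.export.ServingInputReceiver(
            features={"title": titles},
            receiver_tensors={"title": titles})

    # estimator.export_savedmodel("export_dir", serving_input_fn)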

Eager load the entire model to estimate memory consumption of Tensorflow Serving

情到浓时终转凉″ posted on 2019-12-10 15:54:54
Question: TensorFlow Serving lazily initializes nodes in the model DAG as predictions get executed. This makes it hard to estimate the memory (RAM) required to hold the entire model. Is there a standard way to force TensorFlow Serving to fully initialize/load the model into memory?

Answer 1: You can use model warmup to force all the components to be loaded into memory. [1]

[1] https://www.tensorflow.org/tfx/serving/saved_model_warmup

Answer 2: Adding the content of the link, which is provided by @PedApps, below.
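The linked content is truncated here; as a stand-in, a hedged sketch of what writing a warmup record typically looks like, following the saved_model_warmup guide (export path, model name, and input tensor are illustrative):

    import os
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_log_pb2

    # Records in assets.extra/tf_serving_warmup_requests are replayed when the
    # model is loaded, pulling lazily-initialized components into memory.
    export_dir = "/models/my_model/1"
    warmup_dir = os.path.join(export_dir, "assets.extra")
    os.makedirs(warmup_dir, exist_ok=True)

    with tf.io.TFRecordWriter(os.path.join(warmup_dir, "tf_serving_warmup_requests")) as writer:
        request = predict_pb2.PredictRequest()
        request.model_spec.name = "my_model"
        request.model_spec.signature_name = "serving_default"
        request.inputs["input"].CopyFrom(
            tf.make_tensor_proto([[0.0] * 10], dtype=tf.float32))
        log = prediction_log_pb2.PredictionLog(
            predict_log=prediction_log_pb2.PredictLog(request=request))
        writer.write(log.SerializeToString())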

Failed to convert object of type <class 'werkzeug.datastructures.FileStorage'> to tensor

ぐ巨炮叔叔 posted on 2019-12-10 15:14:45
Question: I am writing a Python client that uses the Flask framework, running inside a Docker machine. It takes an input file and produces output from it, but it throws an error that the object can't be converted to a tensor.

    tf.app.flags.DEFINE_string('server', 'localhost:9000', 'PredictionService host:port')
    FLAGS = tf.app.flags.FLAGS

    app = Flask(__name__)

    class mainSessRunning():
        def __init__(self):
            host, port = FLAGS.server.split(':')
            channel = implementations.insecure_channel(host, int(port))
            self.stub =…
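A hedged sketch of the likely fix, assuming a Flask upload handler: read the raw bytes out of the werkzeug FileStorage object before building the TensorProto, since make_tensor_proto cannot serialize FileStorage itself. The route and field names are illustrative:

    import tensorflow as tf
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        uploaded = request.files["image"]   # werkzeug FileStorage object
        image_bytes = uploaded.read()       # plain bytes: this is what can become a tensor
        tensor_proto = tf.contrib.util.make_tensor_proto(image_bytes, shape=[1])
        # ... copy tensor_proto into the PredictRequest and call the stub
        return "ok"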

Consume tensor-flow serving inception model using java client

你。 posted on 2019-12-10 12:06:14
Question: What I did is, I have deployed TensorFlow Serving using Docker on Windows, and I am using the Inception model inside it. It is up and running. Now, using Java, I want to upload an image from the browser to this Inception model running in TensorFlow Serving and get the class name back in the response. Any sample example would help.

Answer 1: TensorFlow Serving is a service, so treat it as such; there is no need for anything special. Since 1.8, TensorFlow Serving offers a REST API, so simply…
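To illustrate the shape of that REST call (shown in Python for brevity; a Java HTTP client would make the same POST), with the port, model name, and payload key as assumptions that depend on how the model was exported:

    import base64
    import json
    import requests

    # Base64-encode the image and wrap it in the REST API's {"b64": ...} convention.
    with open("panda.jpg", "rb") as f:
        payload = {"instances": [{"b64": base64.b64encode(f.read()).decode()}]}

    resp = requests.post(
        "http://localhost:8501/v1/models/inception:predict",
        data=json.dumps(payload))
    print(resp.json())  # predictions, from which the class name can be read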