tensorflow-serving

Input multiple files into Tensorflow dataset

余生长醉 submitted on 2019-12-03 17:41:28
I have the following input_fn:

```python
def input_fn(filenames, batch_size):
    # Create a dataset containing the text lines.
    dataset = tf.data.TextLineDataset(filenames).skip(1)
    # Parse each line.
    dataset = dataset.map(_parse_line)
    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(10000).repeat().batch(batch_size)
    # Return the dataset.
    return dataset
```

It works great if filenames=['file1.csv'] or filenames=['file2.csv'], but it gives me an error if filenames=['file1.csv', 'file2.csv']. The TensorFlow documentation says filenames is a tf.string tensor containing one or more filenames. How …
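A likely cause is that a single .skip(1) applied after concatenating all files only drops the header row of the first file. A minimal sketch of a per-file fix, assuming TF 2.x and with a stub standing in for the question's _parse_line:

```python
import tensorflow as tf


def _parse_line(line):
    # Stub for the question's parser: split a CSV line into fields.
    return tf.strings.split(line, ",")


def input_fn(filenames, batch_size):
    # Build one TextLineDataset per file so each file's header row is
    # skipped, then flatten them back into a single line stream.
    files = tf.data.Dataset.from_tensor_slices(filenames)
    dataset = files.flat_map(lambda f: tf.data.TextLineDataset(f).skip(1))
    dataset = dataset.map(_parse_line)
    dataset = dataset.shuffle(10000).repeat().batch(batch_size)
    return dataset
```

Note that repeat() makes the dataset infinite, so iterate with a step count, or drop repeat() for a single pass.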

Loading sklearn model in Java. Model created with DNNClassifier in python

一笑奈何 submitted on 2019-12-03 15:15:40
The goal is to open, in Java, a model created and trained in Python with tensorflow.contrib.learn.DNNClassifier. At the moment the main issue is knowing the name of the "tensor" to give to the session runner method in Java. I have this test code in Python:

```python
from __future__ import division, print_function, absolute_import
import tensorflow as tf
import pandas as pd
import tensorflow.contrib.learn as learn
import numpy as np
from sklearn import metrics
from sklearn.cross_validation import train_test_split
from tensorflow.contrib import layers
from tensorflow.contrib.learn.python.learn.utils …
```
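One way to discover those names is to export with an explicit serving signature and then inspect it after loading. The toy tf.Module below and its tensor names ("x", "scores") are purely illustrative, not the question's model; the sketch assumes TF 2.x:

```python
import tempfile
import tensorflow as tf


class Toy(tf.Module):
    """Illustrative stand-in for the trained classifier."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([4, 3]))

    @tf.function(input_signature=[
        tf.TensorSpec([None, 4], tf.float32, name="x")])
    def predict(self, x):
        return {"scores": tf.matmul(x, self.w)}


export_dir = tempfile.mkdtemp()
module = Toy()
tf.saved_model.save(module, export_dir,
                    signatures={"serving_default": module.predict})

# The names printed here are what the Java SavedModelBundle's
# session runner needs to feed and fetch.
loaded = tf.saved_model.load(export_dir)
sig = loaded.signatures["serving_default"]
print(sig.structured_input_signature)
print(list(sig.structured_outputs))
```

The same information is also available on the command line via saved_model_cli show --dir <export_dir> --all.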

TensorFlow REST Frontend but not TensorFlow Serving

牧云@^-^@ submitted on 2019-12-03 08:29:48
Question: I want to deploy a simple TensorFlow model and run it in a REST service like Flask. So far I have not found a good example on GitHub or here. I am not ready to use TF Serving as suggested in other posts; it is a perfect solution for Google, but overkill for my tasks, with gRPC, Bazel, C++ coding, protobuf...

Answer 1: There are different ways to do this. Using TensorFlow purely is not very flexible, but it is relatively straightforward. The downside of this approach is that you have to rebuild the graph and …
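A minimal sketch of the Flask approach the question asks about. The route name, JSON shape, and the predict placeholder are all assumptions; a real server would load the TensorFlow graph once at startup instead of using the toy function here:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict(instances):
    # Placeholder for the real model call, e.g.
    # sess.run(output_tensor, feed_dict={input_tensor: instances}).
    return [sum(instances)]


@app.route("/predict", methods=["POST"])
def predict_route():
    # Accept {"instances": [...]} and return {"predictions": [...]}.
    data = request.get_json(force=True)
    return jsonify({"predictions": predict(data["instances"])})


# To serve: app.run(host="0.0.0.0", port=5000)
```

For more than a handful of requests, run this under a WSGI server such as gunicorn rather than Flask's built-in development server.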

Tensorflow serving No versions of servable <MODEL> found under base path

£可爱£侵袭症+ submitted on 2019-12-03 07:48:07
Question: I was following this tutorial to serve my object detection model with TensorFlow Serving. I am using the TensorFlow Object Detection API to generate the model, and I have created a frozen model using this exporter (the generated frozen model works from a Python script). The frozen graph directory has the following contents (nothing in the variables directory):

```
variables/
saved_model.pb
```

Now when I try to serve the model using the following command:

```shell
tensorflow_model_server --port=9000 --model_name=ssd --model…
```
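This error usually means the server was pointed at the export directory itself; TensorFlow Serving expects numbered version subdirectories under the base path. A sketch of the expected layout (the paths and the version number 1 are illustrative):

```shell
# Create a versioned layout: <base_path>/<version>/saved_model.pb
BASE="$(mktemp -d)/ssd"
mkdir -p "$BASE/1"
# cp -r saved_model.pb variables/ "$BASE/1/"   # copy the export here
# tensorflow_model_server --port=9000 --model_name=ssd \
#     --model_base_path="$BASE"
ls "$BASE"
```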

Example for Deploying a Tensorflow Model via a RESTful API [closed]

久未见 submitted on 2019-12-02 15:38:06
Is there any example code for deploying a TensorFlow model via a RESTful API? I see examples for a command-line program and for a mobile app. Is there a framework for this, or do people just load the model and expose the predict method via a web framework (like Flask) to take input (say via JSON) and return the response? By framework I mean scaling for a large number of predict requests. Of course, since the models are immutable, we can launch multiple instances of our prediction server and put them behind a load balancer (like HAProxy). My question is: are people using some framework for this, or doing …

Xcode version must be specified to use an Apple CROSSTOOL

人盡茶涼 submitted on 2019-12-02 15:10:20
I am trying to build tensorflow-serving using Bazel, but I encountered some errors during the build:

```
ERROR: /private/var/tmp/_bazel_Kakadu/3f0c35881c95d2c43f04614911c03a57/external/local_config_cc/BUILD:49:5: in apple_cc_toolchain rule @local_config_cc//:cc-compiler-darwin_x86_64: Xcode version must be specified to use an Apple CROSSTOOL.
ERROR: Analysis of target '//tensorflow_serving/sources/storage_path:file_system_storage_path_source_proto' failed; build aborted.
```

I have already tried bazel clean and bazel clean --expunge, but it didn't help; Bazel still doesn't see my Xcode (I suppose) …
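A commonly reported fix for this error (assuming the full Xcode app, not just the command-line tools, is installed on the macOS host) is to point the system at the Xcode toolchain and rebuild from a clean state:

```shell
# Select the full Xcode toolchain instead of the command-line tools
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
# Accept the license so Bazel-invoked tools can run
sudo xcodebuild -license accept
# Discard Bazel's cached toolchain configuration, then rebuild
bazel clean --expunge
bazel build tensorflow_serving/...
```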

Cannot freeze Tensorflow models into frozen(.pb) file

a 夏天 submitted on 2019-12-02 07:21:33
Question: I am referring (here) to freezing models into a .pb file. My model is a CNN for text classification; I am using this (Github) link to train the CNN for text classification and export it in the form of models. I have trained the model for 4 epochs, and my checkpoints folder looks as follows (screenshot not included in this excerpt). I want to freeze this model into a .pb file. For that I am using the following script:

```python
import os, argparse
import tensorflow as tf

# The original freeze_graph function
# from tensorflow.python.tools.freeze_graph import freeze_graph

dir = os.path.dirname(os.path.realpath(__file__)) …
```
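A sketch of what the rest of such a freeze script typically does, written against tf.compat.v1 so it also runs under TF 2.x; the checkpoint directory and output node names you pass in are whatever your graph actually uses:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()


def freeze(model_dir, output_node_names, out_path):
    # Find the latest checkpoint recorded in the `checkpoint` file.
    ckpt = tf.train.get_checkpoint_state(model_dir).model_checkpoint_path
    with tf.Session(graph=tf.Graph()) as sess:
        # Rebuild the graph from the .meta file and restore the weights.
        saver = tf.train.import_meta_graph(ckpt + ".meta",
                                           clear_devices=True)
        saver.restore(sess, ckpt)
        # Bake the variables into constants and serialize to .pb.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), output_node_names.split(","))
        with tf.io.gfile.GFile(out_path, "wb") as f:
            f.write(frozen.SerializeToString())
```

Called, for example, as freeze("runs/checkpoints", "output/predictions", "frozen.pb"), where both the directory and the node name are hypothetical and must match your training run.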

Restoring a model trained with tf.estimator and feeding input through feed_dict

风格不统一 submitted on 2019-12-01 23:42:26
I trained a ResNet with tf.estimator; the model was saved during the training process. The saved files consist of .data, .index, and .meta files. I'd like to load this model back and get predictions for new images. During training, the data was fed to the model using tf.data.Dataset. I have closely followed the ResNet implementation given here. I would like to restore the model and feed inputs to the nodes using a feed_dict.

First attempt:

```python
# rebuild input pipeline
images, labels = input_fn(data_dir, batch_size=32, num_epochs=1)
# rebuild graph
prediction = imagenet_model_fn(images, labels, {'batch_size' …
```
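One way to do this is to rebuild the graph from the .meta file and feed a placeholder directly. This is a sketch written against tf.compat.v1; the tensor names "input:0" and "logits:0" are placeholders, so inspect your graph for the real ones:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()


def predict_from_checkpoint(model_dir, image_batch):
    # Locate the newest checkpoint prefix (the .data/.index/.meta files).
    ckpt = tf.train.latest_checkpoint(model_dir)
    with tf.Session(graph=tf.Graph()) as sess:
        # Rebuild the training graph and restore the weights.
        saver = tf.train.import_meta_graph(ckpt + ".meta")
        saver.restore(sess, ckpt)
        # The names below are hypothetical; list candidates with
        # [op.name for op in sess.graph.get_operations()].
        inputs = sess.graph.get_tensor_by_name("input:0")
        logits = sess.graph.get_tensor_by_name("logits:0")
        return sess.run(logits, feed_dict={inputs: image_batch})
```

Note that in an estimator-built graph the tf.data input node is usually not a feedable placeholder, which is why exporting a SavedModel with a serving_input_receiver_fn is the more common route.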
